43% of companies have no AI policy for legal: Expert warns SMBs are the most exposed
Nearly 70% of legal professionals already use AI for work, but many organizations still have no formal policy or training in place.
AI is already moving through legal workflows faster than governance can keep up. New industry research reveals that nearly 7 in 10 legal professionals now use general-purpose AI for work, while 43% say their organization has no formal AI policy and no plans to create one.
For small and mid-sized businesses, that gap can be even harder to manage. Larger legal departments are increasingly putting dedicated AI oversight in place, but many SMBs have no legal operations function, no approved workflows, and no clear guardrails for how contracts, legal requests, or sensitive documents should be handled.
“When companies do not have a clear policy for legal AI use, people will still find their own way to use these tools,” says Žilvinas Girėnas, head of product at nexos.ai. “In large organizations, there is usually at least some legal ops or governance structure around that. In SMBs, the same work may fall to one person reviewing contracts with consumer AI and no audit trail, which is why smaller companies are often the most exposed.”
The gap in legal governance is widening
Many organizations are still asking legal teams to navigate AI without the structure to use it safely. According to the same research, 54% say their organization has provided no AI training and has no plans to do so, while only 9% report having a written and actively enforced AI policy.
That gap is especially concerning because legal teams rank data security as their top AI concern at 46%, followed by ethical issues at 42% and privilege at 39%.
Legal work also carries a different level of sensitivity than most other business functions. When people use AI in legal workflows, they are not just drafting faster or summarizing information. They often handle contracts, commercial terms, internal investigations, and other material that can involve confidential business information, privileged communications, or judgment calls with legal consequences.
In practice, the governance gap usually does not look dramatic at first. It looks like a contract manager dropping supplier terms into a public chatbot to speed up review, an operations lead asking AI to redraft an NDA, or a generalist employee using a free AI tool to summarize legal correspondence before forwarding it internally.
The issue is not only accuracy. A bigger concern is that, without a clear policy, the company often lacks specific rules about which tools are allowed, what data can be entered, who needs to check the results, and whether there is an audit trail.
This is where SMBs are often the most exposed. Without a legal operations layer, a dedicated legal technology owner, or a structured review process around AI use, adoption still happens, but it happens one person at a time, through improvised workflows that are difficult for leadership to see and even harder to govern.
A basic AI policy changes that dynamic. It does not need to be long or overly technical, but it should define approved tools, ban the use of public AI systems for sensitive legal data, and make someone responsible for oversight. Legal guidance published this year warns that banning AI without offering approved alternatives often creates “shadow AI,” where employees use unapproved consumer tools anyway and the company loses visibility altogether.
“What many SMB leaders miss is that legal AI risk usually starts as a workflow problem before it becomes a compliance problem,” says Girėnas. “If there’s no clear policy, people will still use AI to speed up legal work, but they will do it in ways the company cannot monitor. That’s why smaller businesses are often the most exposed: not because they use more AI than enterprises, but because they usually have fewer controls when that use begins.”
How the risk shows up
That imbalance is becoming more visible across the legal sector. CLOC’s 2026 industry report finds that 85% of legal departments now have dedicated AI oversight or resources, while Thomson Reuters highlights that SMB legal teams are adopting GenAI to extend limited capacity.
Inside an SMB, that governance gap rarely looks like a major technology rollout. It looks like an operations lead pasting supplier terms into a public AI tool, or a finance lead summarizing legal language because there is no in-house counsel or approved system available at the moment. And the deeper problem is the same one: nothing defines which tools are approved, what data can be shared, or how that use is documented.
“Once legal work starts moving through unapproved AI tools, sensitive information can leave a company’s normal controls without anyone noticing,” says Girėnas. “That is why confidentiality, privilege, and data security are still the core issues in legal AI. If employees rely on consumer tools outside governed environments, the company may not know what was shared, where it went, or how it can be protected.”
For SMBs, this is why exposure builds so quickly. A company may not think it has a legal AI strategy at all, but if one person is already using AI to review contracts, summarize legal emails, or draft clauses, then a legal AI workflow already exists, just without visibility, policy, or auditability.
“The risk for SMBs is not reckless use of AI, but invisible workflow change,” says Girėnas. “Legal teams adopt useful tools to solve immediate problems, and that makes perfect sense. But if those tools get embedded before the company has defined approved use, data boundaries, and review steps, efficiency arrives faster than governance.”