If you sit on a law firm management committee in 2026, you have already had the AI conversation. Probably more than once. Often it starts with one of the partners showing the others what they got ChatGPT to draft last week, and ends with someone asking, quietly: "should we be doing that?"
The answer that doesn't fit on a board paper is: it depends on what your firm has put in place underneath. AI inside a law firm is not a technology question. It is a client confidentiality, professional indemnity, and SRA-supervision question wearing a technology hat.
Why law firms can't treat AI like other industries
Three things make legal AI a category of its own. First, every interaction with client data is governed by professional confidentiality obligations that pre-date GDPR by centuries and are enforced by a regulator that does not love novelty. Second, professional indemnity policies are increasingly explicit about AI usage — and increasingly silent about what is covered when the firm has no AI policy in place. Third, the buyer of legal services has started asking the question directly: "what is your firm's AI policy?" It is now a procurement field on enterprise client onboarding forms.
The combination means law firms cannot do what most other businesses do, which is to let AI adoption diffuse through the organisation while governance catches up later. The risk of catching up later is not "we get embarrassed at a board meeting". It is "an SRA inquiry asks for our audit log of partner-level AI usage and we cannot produce one".
The five gaps we see most often in legal AI adoption
Across the assessments Arx Certa has run with UK firms, five patterns turn up repeatedly:
- Matter data flowing into public AI tools. Partners using their personal ChatGPT account to draft a clause, a memo, a letter, an attendance note. The matter information is now in OpenAI's logs, governed by OpenAI's data agreements, not the firm's.
- No DPIA for the AI tools the firm has formally approved. Even where a firm has chosen its AI vendor properly, the data protection impact assessment is missing — or was done as a one-off and never refreshed when the scope expanded.
- Partner-level shadow IT. Senior fee-earners have more autonomy than they did when the first version of an acceptable-use policy was written. AI tools sit at the same intersection of "useful" and "unmanaged" that BYOD did fifteen years ago.
- Missing audit logging. The firm has approved one AI tool. The firm has no way to produce, on seven days' notice, a list of who used it on which matter when. This is the audit question that the SRA is starting to ask in supervision visits.
- Document data residency. The matter data sits in a UK-hosted document management system. The AI assistant sits on top of it, indexing into a US-hosted vector database that nobody asked about during the original procurement.
None of these are theoretical. Each one shows up in real firms with real management committees who thought they had this covered.
The five readiness dimensions, in plain English, for a legal context
The Arx Certa AI Readiness Scorecard assesses five dimensions. For a law firm, each one carries a specific legal-sector weight.
Governance. Does the firm have an AI usage policy approved by the management committee, with a named owner, a review cadence, and a process for approving new tools before partners adopt them? In a regulated environment, "we tell people not to use ChatGPT" is not governance.
Data. Where does matter data live, who has access to it via AI tooling, and what would the audit trail of that access look like? This is where the document management system, the vector database underneath the AI assistant, and the lateral movement of confidential information all converge.
Infrastructure. Can your firm actually deploy AI tooling that meets the contractual and regulatory requirements — UK data residency, SSO, RBAC, isolated tenants? Or is the infrastructure question the reason every conversation with the AI vendor ends with "we'll have to take that one offline"?
Security. MFA on every account, role-based access, audit logs, the ability to immediately revoke an AI tool's access when a partner leaves, when a matter closes, when a client withdraws consent. Legal sector security is asymmetric — one breach is more material than a year of clean operations.
Use case. Has the firm chosen the 2–3 use cases AI will be deployed against this year, with a named owner and a measurable outcome, or is the AI conversation still "we should be doing something with AI"? Most firms are in the second category. The ones that pull ahead get into the first.
A real example: a mid-sized commercial firm, before and after
One Arx Certa client — a 40-partner commercial firm in the north of England — sat at a scorecard band of "Early" when the engagement started. They had bought a cohort of Copilot licences. They had two partners using ChatGPT Enterprise. They had no AI policy. The DPIA for Copilot had been started, not finished. The conversations on the management committee oscillated between "we need to move faster" and "we need to be careful". Both were right, and neither was actionable.
After a six-week engagement, the same firm sat at "Operational". The AI policy was approved and acknowledged by every fee-earner. Copilot was rolled out to defined matter types only, with the SharePoint over-permissions remediated first. ChatGPT Enterprise was provisioned for the whole partnership with documented data agreements. The firm could now produce, on demand, the audit log the SRA supervision question asks for. The scorecard was used to set a quarterly review cadence the firm could maintain.
The difference between "Early" and "Operational" was not money. It was sequence. The firm did the governance and data work first, and the technology work second.
What "ready" actually means for a law firm
By the time a firm reaches "Mature" on the scorecard, three things are true: every AI-touching workflow has a named owner; the partnership can produce, on demand, an audit pack covering the last 12 months of AI use; and AI vendor due diligence sits in the same procurement workflow as outsourced cashier services and IT support. The firm is not "doing AI". The firm is operating AI inside the same governance perimeter it operates every other regulated business activity.
For some firms — usually the larger ones — that level of maturity is already in reach. For others, the path runs through a year of sequenced foundations work. The scorecard tells you, in four minutes, which path you are on.
Find out exactly where your firm stands
The Arx Certa AI Readiness Scorecard takes four minutes and produces a personalised report you can take straight to your management committee. Twelve questions, five dimensions, one quantified score.
Get your AI readiness score →