Of all the professional services sectors, accountancy has the most asymmetric relationship with AI. The work itself — reviewing workpapers, drafting management letters, reconciling unusual entries, summarising findings — is exactly what current AI tooling is best at. And the data the work runs on — client financials, payroll, sensitive personal information, tax positions, going-concern judgments — is exactly the data that public AI tooling cannot defensibly process.
The result is a sector full of firms running pilots that, examined closely, are leaking client data into vendor logs governed by terms of service that no one in the partnership has read.
The accountancy-specific AI risk profile
Three categories of data make accountancy AI use harder than most:
Client financials. Trial balances, ledgers, management accounts, statutory accounts in draft. Any AI tool processing these is touching the most commercially sensitive data the client holds.
Payroll and personnel data. National Insurance numbers, salary, benefits, dependants, sometimes health-related deductions. UK GDPR Article 9 territory.
Tax data. Identifiable, sensitive, and held under specific HMRC obligations on data handling. Mishandling tax data is a different category of regulatory exposure from mishandling generic business data.
"Don't paste it into ChatGPT" is the cocktail-party answer. The boardroom answer is harder: which categories of data, in which AI tools, under which contractual basis, with which logging, with which retention behaviour, can your firm defensibly use? Most firms have not had that conversation explicitly. Their staff are using AI tools daily. The conversation is overdue.
What the institutes are starting to ask for
The ICAEW, ACCA, and ICAS have not yet published prescriptive AI rules for member firms — but their guidance has moved fast through 2025 and into 2026. The signals are consistent: firms should be able to demonstrate that AI use sits inside a formal governance framework, that client confidentiality obligations are not breached by AI vendor data handling, and that decisions materially affecting clients are reviewed by a qualified human.
Practically: at your next member-firm review, expect questions about your AI policy, your approved tools list, your training records, and the audit logs behind them. The firms that have run a readiness assessment in advance answer those questions in five minutes. The firms that haven't, don't.
The five readiness dimensions applied to an accountancy practice
Governance. An accountancy practice needs an AI usage policy that explicitly addresses the three sensitive data categories above, with worked examples (Tier 0 public data: a checklist drawn from public sources. Tier 3 restricted: a P11D query). Generic policies fail audit because they do not survive contact with an accountant's actual workflow.
Data. Where does client data live? Most practice management systems were not designed with AI access in mind. Connecting AI tooling to them generally introduces a new integration layer with its own access controls. The first question is rarely "can it answer the question?". It is "can it access the data without breaking the security perimeter that already exists?".
Infrastructure. Most firms use Microsoft 365 or Google Workspace plus a specialist practice management vendor. Copilot and Workspace AI both bring AI capability into existing licences. The question is whether the underlying file-sharing permissions in those tenants, built up over years of "share with the team for this matter", are appropriate for an AI assistant that respects those permissions exactly, and will therefore surface everything they allow.
Security. MFA, SSO, role-based access, log retention, the ability to produce an audit trail. The same things the firm already does for cyber insurance — applied to the AI vendor layer.
Use case. Workpaper review. Anomaly flagging. Management letter drafting. Engagement letter generation. Practice update communications. Each is a real opportunity. Each requires the four prior dimensions to be in place before it can be operated safely.
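The tiering idea in the Governance dimension above can be made machine-checkable. A minimal sketch, assuming a hypothetical four-tier scheme: the tier numbers, category names, and tool names below are illustrative, not a standard, and a real policy engine would sit on the firm's own data classification.

```python
# Hypothetical sketch of a data-tier lookup that makes an AI usage
# policy checkable in code. All names and tier assignments here are
# assumptions for illustration, not institute guidance.

# Data categories mapped to sensitivity tiers (0 = public, 3 = restricted).
DATA_TIERS = {
    "public_checklist": 0,    # Tier 0: drawn from public sources
    "client_financials": 3,   # trial balances, draft accounts
    "payroll_personnel": 3,   # NI numbers, salary, benefits
    "tax_data": 3,            # held under HMRC data-handling obligations
}

# Maximum tier each tool is approved to process under the firm's policy.
TOOL_MAX_TIER = {
    "public_chatbot": 0,      # no contractual basis for client data
    "tenant_copilot": 2,      # in-tenant, but excluded from restricted data
    "onprem_model": 3,        # approved for restricted data
}

def is_permitted(tool: str, category: str) -> bool:
    """Return True if policy permits this tool to process this data category."""
    return DATA_TIERS[category] <= TOOL_MAX_TIER[tool]

print(is_permitted("public_chatbot", "public_checklist"))   # permitted
print(is_permitted("public_chatbot", "payroll_personnel"))  # blocked
```

The point of writing the policy this way is that "which data, in which tool" stops being a judgment made at the keyboard and becomes a single table the partnership has actually signed off.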
Practical AI use cases that work today
Three accountancy AI use cases are widely deployed at scale by mid-sized UK practices in 2026:
Workpaper review assistant. AI reads completed workpapers and flags unusual entries, missing tests, and reference-trail gaps before partner review. Saves partner time on routine review; never replaces the partner's sign-off.
Anomaly detection on transactions. AI examines client transaction data against an engagement-specific profile (size, frequency, counterparty patterns) and surfaces statistically unusual entries for closer examination. Works particularly well at year-end on companies with month-end close discipline.
Drafting management letters. AI drafts the first version of a recurring deliverable — management letter, planning memo, tax-position summary — using the firm's standard sections and the matter's specific facts. The partner edits rather than starts from scratch.
In every case, the AI is the second-best person in the room. The qualified accountant remains the decision-maker.
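The anomaly-flagging use case above is, at its simplest, a statistical outlier test against the engagement profile. A minimal sketch using a robust z-score (median and median absolute deviation); the threshold, field shape, and figures are assumptions for the example, and production tooling would profile size, frequency, and counterparty patterns rather than amounts alone.

```python
# Illustrative sketch only: flag statistically unusual transaction
# amounts with a robust z-score. Thresholds and data are invented
# for the example.
from statistics import median

def flag_unusual(amounts, threshold=3.5):
    """Return indices of amounts whose robust z-score exceeds the threshold."""
    med = median(amounts)
    # Median absolute deviation; guard against a zero MAD on flat data.
    mad = median(abs(a - med) for a in amounts) or 1e-9
    flagged = []
    for i, a in enumerate(amounts):
        z = 0.6745 * (a - med) / mad  # 0.6745: consistency constant vs. normal data
        if abs(z) > threshold:
            flagged.append(i)
    return flagged

ledger = [120.0, 98.5, 110.0, 105.2, 99.9, 10250.0, 101.3]
print(flag_unusual(ledger))  # [5] — the 10,250.00 entry stands out
```

Flagged entries go to the engagement team for closer examination; the statistic surfaces candidates, it does not make the call.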
Where firms typically fail their first AI audit
The most common failure point is not the AI tool itself. It is the absence of evidence that the AI tool's use is governed.
"Yes we use Copilot" — okay, show us the DPIA. The training records. The audit log of who used it on which matter last month. The vendor data processing agreement. The retention setting. The exception process for client data that should not be Copilot-accessible.
None of those are hard to produce. All of them are awkward to produce in a hurry. The readiness assessment is the cheapest way to find out what you already have and what you don't.
Test your firm's AI readiness in 4 minutes
Twelve plain-English questions across five dimensions, weighted for accountancy practices. A personalised report covers the governance and data gaps most likely to surface at your next member-firm review.
Get your AI readiness score →