Financial services · AI Readiness

AI readiness assessment for UK financial services firms

FCA expectations on AI governance and model risk are running ahead of most firms' frameworks. This is the four-minute readiness check that maps where you sit.

Of all the regulated sectors, financial services has the deepest existing supervisory framework into which AI can be plugged, and the highest expectations of how firms govern technology that materially affects customer outcomes. The FCA's published expectations through 2025 and into 2026 have made the direction unambiguous: AI is supervised technology, governed under existing senior-manager regimes, expected to fit cleanly inside firms' model risk and operational resilience frameworks.

The challenge most firms face is not the absence of governance. It is that the existing governance was written for a previous era of technology and has not been updated to take account of AI's specific characteristics.

The FCA and PRA expectation backdrop

Three regulatory threads shape AI use in UK financial services:

Model risk management. SS1/23 (PRA) on model risk principles for banks, the FCA's mirroring guidance for non-bank firms, and the broader expectation that AI/ML models sit inside the firm's model risk framework — including validation, monitoring, and challenge.

Third-party risk. SS2/21 and the wider third-party risk framework. Most firms' AI capability is delivered through third parties (Microsoft, Google, OpenAI, Anthropic, specialist vendors). Each is a third-party arrangement that needs the same diligence as any other critical outsourced service.

SMCR. Senior managers carry personal accountability for the technology decisions made inside their function. AI usage is increasingly read into existing SMF responsibilities — not as a new SMF, but as an extension of existing accountabilities.

Where AI is creating new SYSC issues right now

The Senior Management Arrangements, Systems and Controls (SYSC) sourcebook covers the firm's organisation, systems, controls, governance, record-keeping, and outsourcing arrangements. AI use creates new SYSC-relevant questions in each of these areas:

  • Organisation: who is accountable for AI inside the firm, and is that accountability formally documented?
  • Systems and controls: what controls operate around AI usage — approval, review, audit, exception, withdrawal?
  • Governance: what board reporting cadence covers AI usage, AI risk, and AI outcomes?
  • Record-keeping: for any AI involvement in a customer-affecting decision, what records exist and how long are they retained?
  • Outsourcing: are AI vendors mapped, classified, and overseen as third parties under your existing framework?

Most firms can answer some of these questions but not all five. The readiness assessment surfaces which ones.
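The five-question pattern above lends itself to a simple self-check. A minimal sketch in Python; the area names mirror the bullets, but the schema and function names are illustrative, not an FCA-defined structure:

```python
# Illustrative SYSC self-check: mark each area as evidenced or not,
# then list the gaps. The area keys mirror the five questions above;
# the schema itself is hypothetical.
SYSC_AREAS = [
    "organisation",      # accountability formally documented?
    "systems_controls",  # approval, review, audit, exception, withdrawal?
    "governance",        # board reporting cadence on AI usage and risk?
    "record_keeping",    # records of AI in customer-affecting decisions?
    "outsourcing",       # AI vendors mapped under the third-party framework?
]

def sysc_gaps(evidence: dict) -> list:
    """Return the SYSC areas with no documented evidence."""
    return [area for area in SYSC_AREAS if not evidence.get(area, False)]

# Example: a firm that can answer three of the five questions.
firm = {
    "organisation": True,
    "systems_controls": True,
    "governance": False,
    "record_keeping": False,
    "outsourcing": True,
}
print(sysc_gaps(firm))  # ['governance', 'record_keeping']
```

The point of the exercise is not the code but the forcing function: each area either has documented evidence behind it or it appears on the gap list.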

Five readiness dimensions weighted for financial services

The Arx Certa scorecard weights its five dimensions for sector context. For financial services, the weighting is heaviest on Governance, Security, and Use case.

Governance. A formal AI policy, board-approved, with named senior-manager ownership; a documented approval process for new AI use cases; integration with existing model risk and third-party risk frameworks; an audit-ready evidence pack producible in seven days.

Data. Customer data classification and AI access boundaries; data residency controls; lineage and retention; the specific question of whether AI vendor agreements are written under English law and whether customer data crosses the Atlantic during processing.

Infrastructure. Production-grade hosting (no consumer-tier services in scope), SSO and conditional access, network segmentation; the operational-resilience question of "if this AI tool stops working for a week, what breaks?".

Security. The financial services security baseline applied to the AI vendor layer — MFA, RBAC, logging, monitoring, vendor SOC 2/ISO 27001 evidence, contractual right-to-audit clauses.

Use case. Each AI use case mapped to the customer outcome it affects, the conduct rules it touches, and the senior manager whose responsibilities cover it. Use cases that touch financial promotion, advice, customer service or vulnerable-customer interactions carry their own additional governance layers.

The AI vendor due diligence pattern most firms haven't caught up to

Five years ago, firms had a third-party risk framework that did the work for them. Onboarding a SaaS vendor went through procurement, legal, IT, infosec, and operational resilience. The framework worked because the universe of in-scope vendors was bounded and slow-moving.

AI broke that bound. A firm can now have AI capability delivered through a dozen or more layers: the underlying model provider, the orchestration vendor, the embedded AI features inside its existing CRM, ERP, and email platforms, the helpdesk tooling, the documentation system, the analytics platform. Each layer is independently a third party with its own data handling.

The firms that have caught up have rebuilt their third-party register with an AI lens. The firms that haven't are running an outdated map of where their data actually goes.
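Rebuilding the register with an AI lens mostly means recording, per tool, who actually processes the data and whether the residency and criticality questions have been answered. A minimal sketch; the field names and example entries are hypothetical, not drawn from SS2/21 or any vendor questionnaire:

```python
from dataclasses import dataclass, field

# Illustrative AI-lens third-party register entry (hypothetical schema).
@dataclass
class AIVendorEntry:
    tool: str                  # internal name for the capability
    layer: str                 # model provider / orchestration / embedded feature
    model_provider: str        # who actually processes the data
    data_categories: list = field(default_factory=list)
    processing_region: str = "unknown"    # the residency question
    criticality: str = "unclassified"     # maps to existing outsourcing tiers

register = [
    AIVendorEntry("crm_assist", "embedded feature", "OpenAI",
                  ["customer contact data"], "US", "material"),
    AIVendorEntry("internal_chat", "model provider", "Anthropic",
                  ["internal documents"]),
]

# Surface entries whose residency or criticality is still unmapped.
unmapped = [e.tool for e in register
            if e.processing_region == "unknown"
            or e.criticality == "unclassified"]
print(unmapped)  # ['internal_chat']
```

An entry with "unknown" residency or "unclassified" criticality is exactly the outdated-map problem: the tool is in use, but the firm cannot yet say where its data goes.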

What "ready" looks like

For an FCA-supervised firm, AI readiness means reaching a state where the firm can answer four questions in a supervisory visit without scrambling:

  1. Who is accountable for AI inside your firm? (Named senior manager.)
  2. What AI tools are you using and what governs their use? (Living register plus AI policy.)
  3. What records have you retained of AI involvement in customer outcomes over the last 12 months? (Audit pack.)
  4. How does your AI risk fit inside your existing risk taxonomy? (Risk register entries with controls and residual risk.)

The Arx Certa scorecard is the four-minute version of those four questions, plus eight others.

Test your firm's AI readiness against FCA-fit expectations

Twelve questions across the five dimensions, weighted for financial services context. Personalised report identifies the gaps most likely to surface in supervisory visits or third-party risk reviews.

Get your AI readiness score → 4 minutes · 12 questions · Personalised report