Inside the Thatcher Room at Portcullis House — where the Treasury Select Committee has a habit of convening hearings that generate considerably more friction than the architecture suggests — the message delivered in January 2026 was unusually direct. The Bank of England, the Financial Conduct Authority, and HM Treasury were told, in the language of a published parliamentary report, that their current approach to artificial intelligence in financial services was exposing consumers and the wider system to “potentially serious harm.” Dame Meg Hillier, the committee’s chair, said she was not confident the financial system was prepared for a major AI-related incident. That was not a hypothetical concern dressed up as political theater. It was a finding backed by evidence, and the evidence concerned what is already happening inside the premium-pricing models of British insurance firms.
The numbers are striking. More than 75 percent of UK financial services firms now use AI in some form, with the highest adoption rates among insurers and international banks. AI is already deciding, without human review in many cases, who gets a loan and on what terms, how much a health or car policy costs, and how quickly a claim will be processed. These are not experimental deployments. They are core operational systems, running at scale, touching millions of consumers, and governed by regulatory frameworks that were designed before the technology existed. The Treasury Committee’s central complaint is not that AI is being used — it’s that the rules shaping its use were written for a different kind of risk entirely.
| Category | Details |
|---|---|
| Parliamentary Body | Treasury Select Committee — report published January 22, 2026; chaired by Dame Meg Hillier |
| Core Criticism | Bank of England, FCA, and HM Treasury criticised for “wait-and-see” approach to AI in financial services |
| Scale of AI Adoption | Over 75% of UK financial services firms now use AI — insurers represent the highest uptake alongside international banks |
| Key Consumer Risk | “Proxy discrimination” — AI using data strongly correlated with protected characteristics (race, disability, income) to set premiums; risks entrenching a “poverty premium” |
| MPs’ Demands | AI-specific stress testing; FCA to publish practical Consumer Duty guidance on AI accountability by end of 2026; designation of major AI providers as critical third parties |
| Current Regulatory Framework | No UK AI-specific financial legislation; FCA relies on Consumer Duty, Senior Managers Regime — frameworks MPs say were never designed for AI |
| FCA Response (2026) | Launched “Mills Review” in January 2026; running AI Live Testing and Supercharged Sandbox — described by MPs as voluntary and limited in scope |
| Broader Context | Citizens Advice and consumer groups warning of algorithmic financial exclusion; parallel opposition to AI use in NHS private finance initiatives |
The specific concern around health premiums in private finance schemes is the part of this that carries the most immediate political weight. Consumer groups, including Citizens Advice, have warned MPs that AI-driven pricing models risk entrenching what they call a “poverty premium” — the troubling pattern in which the people least able to afford financial products end up paying the most for them. The mechanism, as experts have explained to the committee, is not straightforward discrimination of the kind existing laws can easily catch. It operates through proxy data.
An algorithm doesn’t ask whether a customer is disabled, from a particular ethnic background, or living in a deprived postcode. It simply processes dozens of correlated variables — spending patterns, browser behavior, postcode, device type — and produces a price. The correlation between those variables and protected characteristics under the Equality Act is the problem, and it is a problem the models themselves are not required to explain or disclose.
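The mechanism is easy to demonstrate in miniature. The sketch below uses entirely hypothetical, simulated data: a pricing rule that never sees a protected characteristic, only a proxy variable (a deprivation score standing in for postcode-linked data) that happens to be statistically correlated with it. The coefficients and distributions are invented for illustration, not drawn from any real insurer's model.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: a protected characteristic the model never sees,
# and a proxy variable (deprivation score) that is correlated with it.
N = 10_000
population = []
for _ in range(N):
    protected = random.random() < 0.5
    # The proxy is, on average, higher for one group (illustrative numbers).
    deprivation = random.gauss(0.7 if protected else 0.3, 0.1)
    population.append((protected, deprivation))

def premium(deprivation_score):
    """Pricing rule that uses ONLY the proxy, never the protected attribute."""
    return 300 + 200 * deprivation_score  # arbitrary illustrative coefficients

prices_group_a = [premium(d) for p, d in population if p]
prices_group_b = [premium(d) for p, d in population if not p]

gap = statistics.mean(prices_group_a) - statistics.mean(prices_group_b)
print(f"Average premium gap between groups: £{gap:.0f}")
```

Nothing in `premium()` references the protected attribute, yet the two groups end up paying systematically different prices, and nothing in the model's inputs or outputs flags why. That is the pattern the committee heard evidence about: the discrimination lives in the correlation structure of the data, not in any variable a compliance audit would obviously catch.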
There’s a sense, sitting with this issue, that the regulatory gap has been widening for long enough that it no longer looks like an oversight. The UK government has consistently favored a light-touch approach to AI governance, preferring to let existing frameworks — the Consumer Duty, the Senior Managers and Certification Regime — stretch to cover new technology rather than legislating specifically. The FCA has countered criticism by pointing to initiatives like its AI Live Testing service and a “Supercharged Sandbox” designed to let firms experiment in controlled environments. MPs acknowledged those efforts but described them as inadequate. Participation is voluntary. Coverage is limited. And the firms most likely to push the boundaries of what the existing rules permit are not the ones signing up for experimental oversight programs.

The January report called for several specific changes: mandatory AI stress testing to assess how systems would perform under crisis conditions; practical guidance from the FCA on how Consumer Duty rules apply to AI-driven decisions, to be published before the end of 2026; and the formal designation of major AI providers as critical third parties, which would bring companies like the large cloud and model providers inside the Bank of England’s operational resilience framework. That last proposal has been sitting dormant since the Critical Third Parties Regime was established more than a year ago, without a single organization yet designated under it. The gap between the regime’s existence and its application is itself a version of the committee’s broader complaint.
The parallel debate over AI in NHS private finance schemes adds a different layer of political complexity. A group of MPs has been pushing back against the use of private finance initiatives to fund new NHS facilities — a longstanding controversy that predates AI entirely — and their objections now extend to the use of algorithmic tools in structuring and pricing those arrangements. The concern is that AI-optimized private finance deals, like AI-optimized insurance premiums, will systematically favor outcomes that look efficient on a spreadsheet while distributing costs in ways that are opaque, unequal, and difficult to challenge through conventional accountability mechanisms.
It is still unclear whether the government will move toward AI-specific legislation for financial services this year, or whether the pressure from the Treasury Committee will produce the more targeted guidance the FCA has been asked to deliver. What is clear is that the political calculus has shifted. The “wait-and-see” defense, which worked well enough when AI adoption was a story about experimental pilots and early movers, is considerably harder to sustain when three-quarters of the sector is already running on the technology and the people bearing the most risk from its decisions are the ones who can least afford the consequences.