Firms think UK FCA’s AI Live Testing could help overcome “PoC paralysis”

Earlier this year, the UK’s Financial Conduct Authority (FCA) published a proposal for AI Live Testing that aimed to help firms use artificial intelligence (AI) safely and responsibly with positive outcomes for UK consumers and markets.

Among the 67 responses were 15 regulated firms that included high street and challenger banks as well as insurance firms, wealth managers, investment platforms, payment processing firms and credit reporting firms.

Respondents said that AI Live Testing was “a constructive and timely step toward increasing transparency, trust and accountability in the use of AI systems” and highlighted several key benefits and opportunities:

  • Real-world insights: Live production testing was seen as a valuable mechanism for understanding how AI models perform under real-world conditions, including system integration, data variability, output quality and user experience.
  • Overcoming Proof of Concept (PoC) paralysis: Many firms reported that AI PoCs often demonstrate technical merit but fail to progress due to concerns such as regulatory uncertainty and skills shortages. AI Live Testing was seen as a potential solution to this ‘last mile’ challenge.
  • From principles to practice: Respondents noted a lack of guidance on how to operationalize and measure key AI principles such as fairness, robustness, safety and security. AI Live Testing could help bridge this gap by providing a structured, repeatable process for assessing the performance of AI systems under real-world conditions.
  • Creating trust: Respondents emphasized that traditional assurance methods are insufficient in the face of rapidly evolving AI capabilities and that trust in AI must be intentionally designed and transparently demonstrated. AI Live Testing offers an opportunity to redefine AI assurance, not as static documentation, but as observable system behavior under stress.
  • Addressing first-mover reluctance: Some firms hesitate to use AI in sensitive areas without greater regulatory clarity. Respondents see AI Live Testing as a key enabler to overcome first-mover concerns and unlock responsible innovation. Other respondents said it could be a way for the market to distinguish between responsible AI use and higher-risk applications, encouraging a culture of safe experimentation.
  • Regulatory comfort: Respondents said that receiving ‘regulatory comfort’, potentially through individual guidance or other tools, could substantially de-risk innovative AI use and encourage firms to bring beneficial products and services to market more quickly.
  • Collaboration: Respondents welcomed AI Live Testing as a step toward the regulator and industry jointly navigating the challenges of AI adoption. At the same time, they highlighted the importance of a shared, clear roadmap setting out the overall objectives for AI Live Testing, and said that engagement with the FCA on AI Live Testing should be carried out separately from FCA supervisory engagement.
  • Model metrics: AI Live Testing can foster collaboration and help develop a shared technical understanding between firms and the FCA on complex AI issues such as model validation, bias detection and mitigation and ensuring robustness. Respondents felt this dialogue is crucial for navigating uncharted territory.

The FCA will start working with participating firms in the first cohort this October.

Read the full feedback summary
