Industry commentary on BoE/UK FCA survey on AI adoption

The Bank of England (BoE) and the UK Financial Conduct Authority (FCA) have released a survey on how financial services and capital markets firms are using artificial intelligence (AI).

Findings from the report include:

Use and adoption: 75% of firms are already using artificial intelligence (AI), with a further 10% planning to use AI over the next three years. This is higher than the figures in the 2022 joint BoE and FCA survey, Machine learning in UK financial services, which found 58% and 14% respectively. Foundation models account for 17% of all AI use cases, supporting anecdotal evidence of the rapid adoption of this complex type of machine learning.

Third-party exposure: A third of all AI use cases are third-party implementations. This is greater than the 17% found in the 2022 survey and supports the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease. The top three third-party providers account for 73%, 44%, and 33% of all reported cloud, model, and data providers respectively.

Automated decision-making: Respondents report that 55% of all AI use cases have some degree of automated decision-making, with 24% of those being semi-autonomous, i.e., while they can make a range of decisions on their own, they are designed to involve human oversight for critical or ambiguous decisions. Only 2% of use cases have fully autonomous decision-making.

Materiality: 62% of all AI use cases are rated low materiality by the firms that use them, with 16% rated high materiality.

In addition:

  • 46% of respondent firms reported having only ‘partial understanding’ of the AI technologies they use, versus 34% of firms that said they have ‘complete understanding’. This is largely due to the use of third-party models, where respondent firms noted a lack of complete understanding compared with models developed internally.
  • The highest perceived current benefits are in data and analytical insights, anti-money laundering (AML) and combating fraud, and cybersecurity. The areas with the largest expected increase in benefits over the next three years are operational efficiency, productivity, and cost base. These findings are broadly in line with the findings from the 2022 survey.
  • Of the top five perceived current risks, four are related to data: data privacy and protection, data quality, data security, and data bias and representativeness.
  • The risks that are expected to increase the most over the next three years are third-party dependencies, model complexity, and embedded or ‘hidden’ models.
  • The increase in the average perceived benefit over the next three years (21%) is greater than the increase in the average perceived risk (9%).
  • Cybersecurity is rated as the highest perceived systemic risk both currently and in three years. The largest increase in systemic risk over that period is expected to be from critical third-party dependencies.
  • The largest perceived regulatory constraint on the use of AI is data protection and privacy, followed by resilience, cybersecurity and third-party rules, and the FCA’s Consumer Duty.
  • The largest perceived non-regulatory constraint is safety, security and robustness of AI models, followed by insufficient talent and access to skills.
  • 84% of firms reported having an accountable person for their AI framework. Firms use a combination of different governance frameworks, controls and/or processes specific to AI use cases – over half of firms reported having nine or more such governance components.
  • While 72% of firms said that their executive leadership were accountable for AI use cases, accountability is often split, with most firms reporting three or more accountable persons or bodies.

One aspect that stood out is that firms appear to be rushing to adopt AI without a solid data management strategy in place: as noted above, four of the top five perceived current risks are data-related.

Marion Leslie, head of Financial Information at SIX, said in emailed commentary: “In the latest Future of Finance survey report from SIX, there was agreement among asset managers, wealth managers, and investment banks that compliance and regulatory reporting represent the areas where AI can deliver the most client-value. This echoes the findings of the BoE report, which identifies operational efficiency as the area with the largest expected increase in benefits. With the global drive for increased transparency over the past decade resulting in more regulatory reporting requirements for all financial institutions, it is possible they are recognizing the potential that AI holds to support this. When fed with high-quality data to process, AI can provide an opportunity to automate large portions of the reporting process for both regulatory and client reports.”

Colin Clunie, head of EMEA operations at Clearwater Analytics, said in emailed commentary: “It is clear that the industry is heavily leaning towards AI adoption, with successful use cases demonstrating the power of AI in transforming operations and creating efficiencies by simplifying entire investment life cycles. For example, it can be used to rapidly produce risk exposures, to help investment managers understand how their portfolio is impacted by breaking events such as elections, or geopolitical disturbances. As clients seek quicker data delivery and greater accuracy, particularly during periods of market volatility, AI can be a panacea for meeting these needs. This isn’t to say there aren’t risks attached to any new technology, which is why human oversight is such a crucial factor in successfully applying AI, as it can control and verify results, detect anomalies, and provide a clear view of decision-making processes.”

Cédric Cajet, product director at NeoXam, said in emailed commentary: “As the findings suggest, firms are fast realizing the tremendous benefits of AI, which can be harnessed to expedite coding and software development tasks, review investment data, and even draft investment reports. And yet, that firms perceive data quality as one of the top five risks of this new technology is hardly surprising, as many financial institutions lack a central view of their underlying data. Firms must first ensure they can normalize, validate and consolidate the full breadth of data they interact with before rushing to integrate AI. There is little point in arming yourself with the very best AI weaponry only to see it back-fire.”
