BoE official reveals early results of financial AI adoption survey

Excerpts from a speech by James Proudman, Executive Director of UK Deposit Takers Supervision at the Bank of England, delivered at the FCA Conference on Governance in Banking, June 4, 2019

The art of managing technology is an increasingly important strategic issue facing boards, including those of financial services companies. And since it is a mantra amongst banking regulators that governance failings are the root cause of almost all prudential failures, this is also a topic of increased concern to prudential regulators.

We need to understand how the application of AI and ML within financial services is evolving, and how that affects the risks to firms’ safety and soundness. And in turn, we need to understand how those risks can best be mitigated through banks’ internal governance, and through systems and controls.

To gather more evidence, the Bank of England and the FCA sent a survey in March to more than 200 firms, including the most significant banks, building societies, insurance companies and financial market infrastructure firms in the UK. This is the first systematic survey of AI/ML adoption in financial services. The full results of the survey will be published by the Bank and FCA in Q3 2019, and are likely to prove insightful.

Early indicative results

Overall, the mood around AI implementation amongst firms regulated by the Bank of England is strategic but cautious. Four-fifths of the firms surveyed returned a response; many reported that they are currently building the infrastructure necessary for larger-scale AI deployment, and 80% reported using ML applications in some form.

The median firm reported six distinct ML applications currently deployed, and expected three more to go live over the next year, with a further ten over the following three years. Consistent with industry surveys, barriers to AI deployment currently appear to be mostly internal to firms rather than stemming from regulation. The main reasons cited include: (i) legacy systems and unsuitable IT infrastructure; (ii) lack of access to sufficient data; and (iii) challenges integrating ML into existing business processes.

Large established firms appear to be the most advanced in deployment. There is some reliance on external providers at various levels, ranging from infrastructure and programming environments up to specific solutions. Approaches to testing and explaining AI are being developed and, perhaps unsurprisingly, there is some heterogeneity in techniques and tools.

Firms said that ML applications are embedded in their existing risk frameworks, but many added that new approaches to model validation, including AI explainability techniques, will be needed in the future. Of the firms regulated by the Bank of England that responded to the survey, 57% reported using AI applications in risk management and compliance, including anti-fraud and anti-money laundering applications.

In customer engagement, 39% of firms are using AI applications; 25% are doing so in sales and trading, 23% in investment banking, and 20% in non-life insurance. By and large, firms reported that, properly used, AI and ML would lower risks, most notably in anti-money laundering, KYC and retail credit risk assessment. But some firms acknowledged that, incorrectly used, AI and ML techniques could give rise to new, complex risk types, implying new challenges for boards and management.

Three principles for governance

First, the observation that the introduction of AI/ML poses significant challenges around the proper use of data suggests that boards should attach priority to the governance of data: what data should be used; how it should be modeled and tested; and whether the outcomes derived from the data are correct.

Second, the observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.

And third, the acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition, and these will need to be overseen. Boards should reflect on the range of skill sets and controls required to mitigate these risks, both at senior level and throughout the organization.

Read the full speech
