The Bank of England (BoE) and the Financial Conduct Authority (FCA) published industry responses to a discussion paper on artificial intelligence (AI) and machine learning (ML).
The key points made by respondents were:
- A regulatory definition of AI would not be useful. Many respondents instead favoured alternative, principles-based or risk-based approaches that focus on specific characteristics of AI, or on the risks it poses or amplifies.
- As with other evolving technologies, AI capabilities change rapidly. Regulators could respond by designing and maintaining ‘live’ regulatory guidance, i.e. periodically updated guidance and examples of best practice.
- Ongoing industry engagement is important. Initiatives such as the AI Public-Private Forum (AIPPF) have been useful and could serve as templates for ongoing public-private engagement.
- Respondents considered the regulatory landscape to be complex and fragmented with respect to AI. Greater coordination and alignment between regulators, both domestic and international, would therefore be helpful.
- Most respondents said that data regulation, in particular, is fragmented, and that more regulatory alignment would be useful in addressing data risks, especially those related to fairness, bias, and management of protected characteristics.
- A key focus of regulation and supervision should be on consumer outcomes, especially with respect to ensuring fairness and other ethical dimensions.
- The increasing use of third-party models and data is a concern and an area where more regulatory guidance would be helpful. Respondents also noted the relevance of DP3/22 – Operational resilience: Critical third parties to the UK financial sector.
- AI systems can be complex and involve many areas across the firm. A joined-up approach across business units and functions could therefore help mitigate AI risks; in particular, closer collaboration between data management and model risk management teams would be beneficial.
- While respondents considered the principles proposed in CP6/22 – Model risk management principles for banks (since published by the PRA as SS1/23 – Model risk management principles for banks) sufficient to cover AI model risk, some areas could be strengthened or clarified to address issues particularly relevant to models with AI characteristics.
- Respondents said that existing firm governance structures (and regulatory frameworks such as the Senior Managers and Certification Regime (SM&CR)) are sufficient to address AI risks.