BoE’s AI forum considers governance, transparency and standards

The Bank of England and Financial Conduct Authority launched the Artificial Intelligence Public-Private Forum to better understand the impact of artificial intelligence (AI) and machine learning (ML) on financial services. Here are some highlights from a recent meeting:

AI governance should be automated

Members agreed it is beneficial to automate the documentation-generation and information/evidence-collection aspects of the governance process. This embeds the information into the development lifecycle (and thereby the governance processes) rather than leaving it as additional activities to undertake at a later stage. That, in turn, supports innovation and rapid development of AI systems because there is no need for a long, slow, waterfall-style approach.
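As a rough illustration of what embedding evidence collection into the development lifecycle could look like, the sketch below wraps a training step so that a governance record is produced automatically alongside the model. All names here (the wrapper, the model name, the parameters) are hypothetical and are not drawn from the forum's discussion.

```python
import json
from datetime import datetime, timezone

def train_with_audit(train_fn, model_name, params, data_version):
    """Run a training function and capture a governance evidence record
    as a side effect, so documentation is generated inside the
    development lifecycle rather than after the fact."""
    record = {
        "model": model_name,
        "params": params,
        "data_version": data_version,
        "started": datetime.now(timezone.utc).isoformat(),
    }
    model = train_fn(**params)
    record["finished"] = datetime.now(timezone.utc).isoformat()
    # In a real pipeline this record would be written to a central
    # governance store; here it is simply returned as JSON.
    return model, json.dumps(record)

# Usage with a dummy stand-in for a real training routine
model, evidence = train_with_audit(
    lambda lr: {"weights": [0.1]},      # placeholder "trainer"
    model_name="credit_scoring_v2",     # hypothetical model name
    params={"lr": 0.01},
    data_version="2021-10-01",
)
print(json.loads(evidence)["model"])    # prints "credit_scoring_v2"
```

Because the evidence is emitted at the same time as the model, governance reviews can consume it continuously rather than requesting it retrospectively.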

AI governance should have a central authority

Members agreed there should be a centralized body within a firm that sets standards for AI governance on an ongoing basis. However, there was disagreement over whether this should be the responsibility of one senior manager, such as a chief AI officer, or shared between several senior managers, such as the chief data officer and the head of model risk management. Even among those favoring a single senior manager, members did not agree on the need for a dedicated chief AI officer, since governance standards may be broader and technology-agnostic; for example, customer transparency standards apply to all automated decisions, irrespective of whether AI is used.

Transparency not so clear cut

Members acknowledged the importance of explainability but noted that there is little research, and few viable products, that address this issue. Members also noted that AI explainability is not just about the AI model itself but about communicating with consumers in an accessible, meaningful and comprehensive way. This was identified as an area where industry stands to gain a lot from shared best practice.

With respect to internal explainability, no particular technical standard has proved superior to another and solutions are very context-dependent.

Model risk management provides framework for AI standards

Members observed that, when it comes to regulation, it is how AI impacts decision-making that is of interest, not the technology per se. There are also areas where regulatory expectations are already clear, such as data protection, where the ICO (UK Information Commissioner's Office) has published guidance. Similarly, model risk management provides a framework through which AI issues can be approached and mitigated. What is lacking, however, is an overarching framework that brings these disparate elements together and permits a more holistic understanding of the risks and benefits of the technology.
