BoE research questions role of AI systems in risk management and supervision

The increasing use of AI is reshaping the financial system. The implications of the manifold AI applications for financial stability depend on how they are implemented and regulated. An AI application is not just an algorithmic model that can be considered detached from its designers and the parties it interacts with. Rather, it should be analyzed holistically.

In this study, researchers from the Bank of England discuss how partnerships based on the complementary strengths of human and artificial intelligence can reduce financial stability risks across several applications of AI in financial markets, regulation, and policy making.

They provide evidence that AI agents are able to take over relevant tasks in the market, often performing better than humans. However, ability requires a high degree of dependability, and there are examples where AI fails to deliver it: unpredictable behaviour when the environment changes, and the fact that some AI applications can easily be gamed and exploited by adversaries.

Benevolence (the degree to which an agent is believed to act for the good of the person affected by its decisions) and integrity (the degree to which an agent’s actions are aligned with that person’s values) are a major challenge for AI systems: they do not know what is good and what is not, and they have no inherent values. This makes them prone to unethical behaviour.

AI agents that collude or discriminate are examples of this. It is the role of the AI’s designer to ensure that it consistently acts for the good of the people affected and that the required values are incorporated into its decision making.

Using unbiased data to calibrate the models and defining an appropriate payoff function are necessary, but often not sufficient, to achieve this goal. Further, human values are not easily translated into a quantitative measure that an AI model can optimize. This, and more generally the fact that today’s AI cannot pursue abstract goals, limits its applications if benevolence and integrity are not to be undermined.
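The difficulty of encoding values into a payoff function can be made concrete with a minimal sketch. The function, names, and numbers below are hypothetical illustrations, not taken from the paper: a lending model’s objective is augmented with a penalty for unequal approval rates between two groups, and the penalty weight itself embodies an ethical judgement that the model cannot make.

```python
# Hypothetical sketch: encoding a "fairness" value into a payoff function
# that an AI lending model might optimize. All names and numbers are
# illustrative assumptions, not from the paper.

def payoff(profit, approval_rate_a, approval_rate_b, fairness_weight):
    """Profit minus a penalty for unequal approval rates between two groups.

    The fairness_weight is a design choice: there is no objective way to
    convert the value "do not discriminate" into a single number, which is
    exactly the translation problem the paper points to.
    """
    disparity = abs(approval_rate_a - approval_rate_b)
    return profit - fairness_weight * disparity

# A pure profit objective (weight 0) ignores discrimination entirely...
unconstrained = payoff(profit=100.0, approval_rate_a=0.75,
                       approval_rate_b=0.25, fairness_weight=0.0)

# ...while a nonzero weight trades profit for parity, but the exchange
# rate between the two remains an arbitrary human choice.
constrained = payoff(profit=100.0, approval_rate_a=0.75,
                     approval_rate_b=0.25, fairness_weight=40.0)

print(unconstrained)  # 100.0
print(constrained)    # 80.0
```

Whatever weight is chosen, the model will happily optimize it; the ethical content lives entirely in the designer’s choice, not in the algorithm.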

For example, an AI system should not be responsible for risk management in a financial firm, design economic policies, or autonomously supervise a financial firm. Transparency plays a role in trust as well: it helps to assess the benevolence and integrity of an AI. Neither the trustworthiness of the developer nor that of the algorithm is sufficient for an AI system to be trustworthy. Rather, trustworthiness has to be reviewed holistically, and the ethics of an algorithm can only be evaluated in the assemblage of computer code, human practices, and norms.

Read the full paper
