- The FSB report notes that the rapid adoption of artificial intelligence (AI) offers several benefits but may also amplify certain financial sector vulnerabilities, such as third-party dependencies, market correlations, cyber risk and model risk, potentially increasing systemic risk.
- While existing financial policy frameworks address many of the vulnerabilities associated with the use of AI by financial institutions, more work may be needed to ensure that these frameworks are sufficiently comprehensive.
- The report calls on financial authorities to enhance their monitoring of AI developments, assess whether financial policy frameworks are adequate, and strengthen their regulatory and supervisory capabilities, including by using AI-powered tools.
The Financial Stability Board (FSB) published a report on the financial stability implications of AI, which outlines recent developments in the adoption of AI in finance and their potential implications for financial stability.
Widespread adoption and more diverse use cases of AI have prompted the FSB to revisit previous assessments. Financial firms currently use AI mainly to enhance internal operations and improve regulatory compliance, but generative AI (genAI) and large language models have given rise to new use cases, such as document summarization, information retrieval, and code generation. While many financial institutions appear to be taking a cautious approach to using genAI, interest remains high and the technology’s accessibility could facilitate more rapid integration in financial services.
Financial authorities are also using AI to make supervision more efficient. However, the fast pace of innovation and AI integration in financial services, along with limited data on AI usage, poses challenges for monitoring vulnerabilities and their potential financial stability implications.
The report notes that AI offers benefits through improved operational efficiency, regulatory compliance, personalized financial products and advanced data analytics. However, AI may also amplify certain financial sector vulnerabilities and thereby pose risks to financial stability.
Several AI-related vulnerabilities stand out for their potential to increase systemic risk. These include:
(i) third-party dependencies and service provider concentration;
(ii) market correlations;
(iii) cyber risks; and
(iv) model risk, data quality and governance.
In addition, genAI could increase financial fraud and disinformation in financial markets. Misaligned AI systems that are not calibrated to operate within legal, regulatory, and ethical boundaries can also engage in behaviour that harms financial stability. From a longer-term perspective, AI uptake could drive changes in market structure, macroeconomic conditions and energy use that may have implications for financial markets and institutions.
The report notes that existing regulatory and supervisory frameworks address many of the vulnerabilities associated with AI adoption. However, more work may be needed to ensure that these frameworks are sufficiently comprehensive. To this end, the report calls on the FSB, standard-setting bodies and national authorities to:
(i) consider how to address data and information gaps to better monitor AI adoption and assess the related financial stability implications;
(ii) assess whether current financial policy frameworks are sufficient to address AI-related vulnerabilities at both the domestic and international levels; and
(iii) enhance regulatory and supervisory capabilities, for example by sharing information and good practices across borders and sectors, as well as by leveraging AI-powered tools.