The US Office of the Comptroller of the Currency, the Federal Reserve, the Federal Deposit Insurance Corporation, the Bureau of Consumer Financial Protection, and the National Credit Union Administration issued a request for information and comment (RFI) on financial institutions’ use of artificial intelligence (AI), including machine learning.
The questions center on how financial institutions are exploring AI-based applications in a variety of fields. Some of the uses of AI highlighted are:
- Flagging unusual transactions. This involves employing AI to identify potentially suspicious, anomalous, or outlier transactions (e.g., fraud detection and financial crime monitoring). It draws on different forms of data (e.g., email text, audio data), both structured and unstructured, with the aim of identifying fraud or anomalous transactions with greater accuracy and timeliness. It also includes identifying transactions for Bank Secrecy Act/anti-money laundering investigations, monitoring employees for improper practices, and detecting data anomalies. (A minimal sketch of this kind of anomaly flagging appears after this list.)
- Risk management. AI may be used to augment risk management and control practices. For example, an AI approach might be used to complement and provide a check on another, more traditional credit model. Financial institutions may also use AI to enhance credit monitoring (including through early warning alerts), payment collections, loan restructuring and recovery, and loss forecasting. AI can assist internal audit and independent risk management to increase sample size (such as for testing), evaluate risk, and refer higher-risk issues to human analysts. AI may also be used in liquidity risk management, for example, to enhance monitoring of market conditions or collateral management.
- Textual analysis. Textual analysis refers to the use of natural language processing (NLP) to handle unstructured data (generally text), obtain insights from that data, or improve the efficiency of existing processes. Applications include analysis of regulations, news flow, earnings reports, consumer complaints, analyst ratings changes, and legal documents. (A sketch of this kind of text classification also appears after this list.)
- Cybersecurity. AI may be used to detect threats and malicious activity, reveal attackers, identify compromised systems, and support threat mitigation. Examples include real-time investigation of potential attacks, the use of behavior-based detection to collect network metadata, flagging and blocking of new ransomware and other malicious attacks, identifying compromised accounts and files involved in exfiltration, and deep forensic analysis of malicious files.
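As a concrete illustration of the transaction-flagging use case above, the following is a minimal sketch, assuming a scikit-learn workflow, of an unsupervised outlier detector applied to simulated structured transaction features. The feature names, data, and thresholds are invented for the example and are not drawn from the RFI.

```python
# Hypothetical illustration of transaction anomaly flagging with an
# unsupervised outlier detector (scikit-learn's IsolationForest).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated structured transaction features: amount, hour of day, merchant risk score.
normal = rng.normal(loc=[50.0, 13.0, 0.2], scale=[20.0, 4.0, 0.1], size=(1000, 3))
suspicious = rng.normal(loc=[5000.0, 3.0, 0.9], scale=[500.0, 1.0, 0.05], size=(5, 3))
transactions = np.vstack([normal, suspicious])

# Fit on historical transactions; contamination is the assumed outlier share.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

scores = detector.score_samples(transactions)   # lower score = more anomalous
flags = detector.predict(transactions)          # -1 = flagged as outlier, 1 = normal

for idx in np.where(flags == -1)[0]:
    print(f"transaction {idx} flagged for review, anomaly score {scores[idx]:.3f}")
```

In practice the flagged transactions would be routed to human analysts or case-management systems rather than acted on automatically.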
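Similarly, a hedged sketch of the textual-analysis use case: routing consumer complaint narratives to product categories with a simple bag-of-words pipeline (TF-IDF features plus a linear classifier). The complaint texts, labels, and categories below are illustrative assumptions, not examples from the RFI.

```python
# Hypothetical sketch of NLP-based textual analysis: classifying invented
# consumer complaint narratives into product categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = [
    "Unauthorized charge appeared on my credit card statement",
    "My mortgage servicer misapplied my escrow payment",
    "Credit card interest rate was raised without notice",
    "Loan modification paperwork for my mortgage was lost twice",
]
labels = ["credit_card", "mortgage", "credit_card", "mortgage"]

# TF-IDF turns unstructured text into numeric features a classifier can use.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(complaints, labels)

print(model.predict(["The bank charged my card for a purchase I never made"]))
```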
The RFI also noted that AI may present particular risk management challenges to financial institutions in the areas of explainability, data usage, and dynamic updating:
- Explainability refers to how an AI approach uses inputs to produce outputs. Some AI approaches can exhibit a “lack of explainability” in their overall functioning (sometimes known as global explainability) or in how they arrive at an individual outcome in a given situation (sometimes referred to as local explainability). Lack of explainability poses different challenges in different contexts. It can inhibit financial institution management’s understanding of the conceptual soundness of an AI approach, increase uncertainty around the approach’s reliability, and increase risk when the approach is used in new contexts. It can also inhibit independent review and audit and make compliance with laws and regulations, including consumer protection requirements, more challenging. (A sketch contrasting global and local explanations appears after this list.)
- Data plays a particularly important role in AI. In many cases, AI algorithms identify patterns and correlations in training data without human context or intervention, and then use that information to generate predictions or categorizations. Because the AI algorithm is dependent upon the training data, an AI system generally reflects any limitations of that dataset. As a result, as with other systems, AI may perpetuate or even amplify bias or inaccuracies inherent in the training data, or make incorrect predictions if that data set is incomplete or non-representative.
- Dynamic updating. Some AI approaches have the capacity to update on their own, sometimes without human interaction, a property often known as dynamic updating. Monitoring and tracking an AI approach that evolves on its own can present challenges in review and validation, particularly when a change in external circumstances (e.g., economic downturns and financial crises) causes inputs to vary materially from the original training data. Dynamic updating techniques can produce changes that range from minor adjustments to existing elements of a model to the introduction of entirely new elements. (A sketch of input-drift monitoring, one way to surface such divergence, appears after this list.)
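To make the global/local distinction concrete, here is a minimal sketch, assuming scikit-learn and an invented credit-scoring setup: permutation importance gives a global view of which inputs matter to the model overall, while per-feature contributions for a single applicant give a local explanation. A linear model is used because its contributions can be read off directly; more complex models typically require dedicated explanation techniques.

```python
# Hypothetical sketch contrasting global and local explainability for a
# simple credit-scoring model. Feature names and data are invented.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                     # [income, utilization, delinquencies]
y = (X[:, 0] - 2 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
features = ["income", "utilization", "delinquencies"]

model = LogisticRegression().fit(X, y)

# Global explainability: which inputs matter for the model as a whole.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, imp in zip(features, global_imp.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Local explainability: per-feature contributions to one applicant's score
# (coefficient * feature value, which is valid for a linear model).
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in zip(features, contributions):
    print(f"{name} contributes {c:+.3f} to this applicant's log-odds")
```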
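And a sketch of why dynamically updating models warrant ongoing monitoring: incoming data can be compared against the original training data to detect material divergence before it degrades the model. The example below applies a two-sample Kolmogorov-Smirnov test to a single invented feature; real monitoring would cover many features and use thresholds set by the institution.

```python
# Hypothetical sketch of input-drift monitoring: compare incoming data to the
# original training data and flag material divergence for model review.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

training_incomes = rng.lognormal(mean=10.8, sigma=0.4, size=5000)  # original training data
# Simulated downturn: incoming applicants have materially lower incomes.
incoming_incomes = rng.lognormal(mean=10.5, sigma=0.6, size=1000)

result = ks_2samp(training_incomes, incoming_incomes)
if result.pvalue < 0.01:
    print(f"input drift detected (KS statistic {result.statistic:.3f}); trigger model review")
else:
    print("incoming data consistent with training distribution")
```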
In addition, the RFI highlights risks from broader or more intensive data processing and usage. Like other systems, AI is designed to interact directly with training data to identify correlations and patterns and to use that information for prediction or categorization. This means that data quality is important for AI: if the training data are biased or incomplete, AI may incorporate those shortcomings into its predictions or categorizations.
Overfitting can occur when an algorithm “learns” from idiosyncratic patterns in the training data that are not representative of the population as a whole. Overfitting is not unique to AI, but it can be more pronounced in AI than in traditional models. Undetected overfitting could result in incorrect predictions or categorizations.
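A minimal sketch of overfitting on synthetic data: an unconstrained decision tree memorizes idiosyncratic patterns in its training sample and scores markedly worse on held-out data, while a constrained tree generalizes better. The dataset and model choices are illustrative only.

```python
# Hypothetical demonstration of overfitting: compare training vs. holdout
# accuracy for an unconstrained and a depth-limited decision tree.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=4,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)            # no depth limit
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print(f"unconstrained tree: train {deep.score(X_train, y_train):.2f}, "
      f"holdout {deep.score(X_test, y_test):.2f}")
print(f"regularized tree:   train {shallow.score(X_train, y_train):.2f}, "
      f"holdout {shallow.score(X_test, y_test):.2f}")
```

The large gap between training and holdout accuracy for the unconstrained tree is the signature of overfitting that validation and monitoring are meant to catch.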
AI may use alternative datasets in certain applications (such as credit underwriting, fraud detection, and trading) in ways that can help identify trends or make predictions that are difficult to surface with traditional methods. The importance of practices such as data quality assessments, which determine the relevance and suitability of the data used in a model, may be heightened in the use of AI. Finally, in many cases, AI developers process or optimize raw data so that the data can be better used for training. Various data processing techniques exist, some of which may affect model performance.
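As a final sketch, here is one common form of the raw-data processing the RFI refers to, under the assumption of a scikit-learn workflow with invented fields: numeric features are imputed and scaled, categorical features are one-hot encoded, and the chosen steps become part of the model pipeline (and can therefore affect performance).

```python
# Hypothetical sketch of preparing raw data for training: imputation and
# scaling for numeric fields, one-hot encoding for categorical fields.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

raw = pd.DataFrame({
    "income": [52000, 87000, None, 31000],          # missing value to impute
    "loan_amount": [12000, 30000, 8000, 15000],
    "channel": ["branch", "online", "online", "broker"],
    "defaulted": [0, 0, 1, 1],
})

numeric = ["income", "loan_amount"]
categorical = ["channel"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(raw[numeric + categorical], raw["defaulted"])
print(model.predict(raw[numeric + categorical]))
```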