Artificial intelligence (AI) and machine learning (ML) can improve operational efficiency and analytical outcomes, but carry the risk of “black box” decision-making, as well as data and programming deficiencies and biases. Regulation and internal governance can help reduce these risks.
Financial institutions use AI-based systems and ML to improve predictive models in operational risk management, including fraud detection, stress testing and provisioning, as well as credit assessment applications, such as credit scoring for loan underwriting and monitoring the performance of existing assets.
Regulatory reforms are being undertaken to address AI reliability and transparency issues. Underwriting criteria produced by AI models may be opaque, making it difficult to understand which factors drive the decision-making process. This also makes it difficult to compare AI model results with historical data in Fitch’s analysis of structured finance transactions.
The use of AI/ML can make data analysis and credit risk assessment more efficient, as it allows large quantities of data to be analyzed quickly, and may lead to the discovery of new risk segments or patterns by filtering through variables for significant predictors. AI/ML can also expand credit availability for those whose creditworthiness can be measured using nontraditional metrics. Smaller lenders are more likely to use AI to make credit decisions, perhaps to gain an edge over competitors, and are helped by access to cloud-based lending management systems.
The quality and volume of data used to train AI/ML systems directly affect the predictive accuracy of most AI models. Faulty or limited data and programmer biases can lead to erroneous AI/ML outcomes, resulting in poor origination quality, loosening underwriting practices or discriminatory credit decisions, with potential reputational and financial repercussions.
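The effect of limited training data can be sketched with a deliberately naive example: a learner fit on a sample with almost no observed defaults collapses to always predicting repayment, and then misses every default in a riskier segment. The labels, proportions and the majority-class rule here are illustrative assumptions, not a description of any real model.

```python
# Hypothetical sketch: skewed training data produces an erroneous model.
# Labels: 1 = default, 0 = repaid. All numbers are illustrative.
from collections import Counter

train_labels = [0] * 98 + [1] * 2          # only 2% defaults ever observed
majority = Counter(train_labels).most_common(1)[0][0]  # learned rule: predict 0

# Applied to a segment where defaults are common, the rule misses all of them.
test_labels = [1, 1, 0, 1, 0]
predictions = [majority] * len(test_labels)
missed_defaults = sum(
    1 for actual, pred in zip(test_labels, predictions)
    if actual == 1 and pred == 0
)
print(missed_defaults)  # prints 3: every default goes undetected
```

Real credit models are far more sophisticated, but the failure mode is the same in kind: a model can only learn the risk patterns its training data contains, which is why data quality and coverage bear directly on origination quality.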