Steered by demand for automation, efficiency, and personalization, and by the availability of rich and diversified data, financial institutions have started leveraging AI across the value chain. An IDC spending guide projects that global spending on artificial intelligence (AI) will double over the next four years, reaching more than $110 billion in 2024. Beyond financial decision making, AI models are present across different touchpoints: customer acquisition, detection of fraudulent transactions, emotional analytics, and the use of alternative datasets associated with financial transactions.
As organizations rapidly adopt AI in various ways, demand is growing for nimble, accurate machine-driven decisions in complex settings, and so is the use of black-box AI systems. This, in turn, creates the need for a fair decision engine that helps organizations protect their reputation and customer base from the potential vulnerabilities of opaque decision models.
These models, for lack of interpretability, can carry severe financial implications as well as damage to goodwill. To avoid this, financial and risk models need to be reoriented to be more: 1) transparent, 2) auditable, 3) analogous, and 4) explainable.
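The text does not prescribe a specific explainability technique. One widely used, model-agnostic way to make an otherwise opaque scoring model more transparent is permutation feature importance: shuffle one input feature at a time and measure how much predictive accuracy drops. The sketch below assumes only a fitted `predict` function and labelled validation data; the toy "black-box" approval model and feature values are hypothetical.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Estimate each feature's importance by measuring how much
    accuracy drops when that feature's values are shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances.append(np.mean(drops))
    return np.array(importances)

# Hypothetical black-box credit decision: approve (1) when income exceeds debt.
X = np.array([[50, 10], [20, 30], [40, 5], [10, 25]], dtype=float)
y = np.array([1, 0, 1, 0])
model = lambda X: (X[:, 0] - X[:, 1] > 0).astype(int)
print(permutation_importance(model, X, y))
```

A larger accuracy drop indicates a feature the model leans on more heavily, which gives auditors a first, coarse view into the decision logic without opening the model itself.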
To attain these attributes, institutions need to reimagine the traditional model risk management framework. Governance plans to ensure fair decision systems need to span model data and algorithms, model responses, and business impacts. These new governance components of model risk management will include: 1) model oversight and control, 2) model outcome validation, and 3) model data validation.
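As a concrete illustration of model outcome validation, one simple fairness check is the demographic parity gap: the difference in approval rates between two groups of applicants. The function below is a minimal sketch, not the framework's prescribed method; the decision vector and group labels are hypothetical audit data.

```python
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in approval rate between two groups,
    a simple outcome-validation check for a binary decision model."""
    rates = [decisions[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# Hypothetical audit sample: approvals (1) for applicants in groups "A" and "B".
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
gap = demographic_parity_gap(decisions, group)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.50
```

In an outcome-validation workflow, a gap above an agreed tolerance would flag the model for review; similar checks can be run on model data (input drift) and on downstream business impacts.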