BoE: machine learning explainability for default risk analysis

In a Bank of England staff working paper, researchers propose a framework for addressing the “black box” problem present in some machine learning (ML) applications. They implement the approach using the Quantitative Input Influence (QII) method in a real-world example: an ML model that predicts mortgage defaults.

The method investigates the inputs and outputs of the model, not its inner workings. It measures feature influence by intervening on inputs and estimating their Shapley values, which represent each feature’s average marginal contribution over all possible feature combinations. Applied to the mortgage model, it identifies key drivers of default such as the loan-to-value ratio and the current interest rate, in line with findings in the economics and finance literature. However, given the nonlinearity of the ML model, explanations vary significantly across different groups of loans.
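The intervene-and-average logic can be sketched as follows. This is a minimal Monte Carlo illustration, not the paper’s implementation: the linear toy model, feature names, and baseline values are all illustrative assumptions. Absent features are set to a baseline input, and each feature’s marginal contribution is averaged over random orderings, which approximates its Shapley value.

```python
import random

# Toy stand-in for the default-risk model (illustrative weights, not the paper's).
WEIGHTS = {"loan_to_value": 2.0, "interest_rate": 1.5, "income": -1.0}

def model(x):
    return sum(WEIGHTS[k] * v for k, v in x.items())

def shapley_sample(x, baseline, n_samples=200, seed=0):
    """Estimate Shapley values by sampling feature orderings: start from the
    baseline input, add the features of x one at a time, and record each
    feature's marginal change in the model output."""
    rng = random.Random(seed)
    features = list(x)
    phi = {f: 0.0 for f in features}
    for _ in range(n_samples):
        order = features[:]
        rng.shuffle(order)
        current = dict(baseline)          # "intervention": absent features at baseline
        prev = model(current)
        for f in order:
            current[f] = x[f]             # switch feature f on
            val = model(current)
            phi[f] += val - prev          # marginal contribution of f in this ordering
            prev = val
    return {f: s / n_samples for f, s in phi.items()}

# Hypothetical loan versus a baseline loan.
x = {"loan_to_value": 0.9, "interest_rate": 0.05, "income": 0.4}
baseline = {"loan_to_value": 0.6, "interest_rate": 0.03, "income": 0.5}
phi = shapley_sample(x, baseline)
```

For a linear model every ordering gives the same contribution, so the estimate equals the exact Shapley value `w_i * (x_i - baseline_i)`; the sampling only matters once the model is nonlinear, which is exactly the case the paper is concerned with.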

They use clustering methods to arrive at groups of explanations for different areas of the input space, and conduct simulations on data the model has neither been trained nor tested on. The main contribution of the research is a systematic analytical framework that could be used to approach explainability questions in real-world financial applications. The researchers conclude, however, that notable model uncertainties remain, of which stakeholders should be aware.
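The clustering step above can be illustrated with a toy sketch: loans whose attribution vectors look alike fall into the same “explanation cluster”. The per-loan Shapley vectors and the minimal k-means below are illustrative assumptions, not the paper’s data or algorithm choice.

```python
# Hypothetical per-loan explanation vectors:
# [loan-to-value influence, interest-rate influence].
explanations = [
    [0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # defaults driven mostly by LTV
    [0.10, 0.90], [0.20, 0.80], [0.15, 0.85],   # defaults driven mostly by rate
]

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans2(points, n_iter=10):
    """Minimal two-cluster k-means: seed with the first point and the point
    farthest from it, then alternate assignment and centroid updates."""
    centers = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    assign = []
    for _ in range(n_iter):
        assign = [min(range(len(centers)), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(len(centers)):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

clusters = kmeans2(explanations)
```

Each cluster then admits a single summary explanation (“in this region, loan-to-value dominates”), which is the kind of grouped explanation the paper derives for different areas of the input space.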

Read the full working paper
