RMA: why algos may be trained with biases

In an ideal world, everyone would want their models to be fair and no one would want them to be biased. But what does that mean, exactly? Unfortunately, there seems to be no universal, agreed-upon definition of bias or fairness.

In the following interview, Fran Garritt of RMA and Kevin Oden of Kevin D. Oden & Associates discuss the challenges of model risk management, with a focus on consumer credit and machine learning.

Garritt: First, can you define bias in machine learning?

Oden: The best way to define bias in machine learning is to think about how unfairness occurs in any modelling framework, machine learning being just one of them; bias is really unfairness as an outcome of the model. In the context of decision making, fair means not showing favoritism towards an individual or group based on their intrinsic or acquired traits, and unfair is anything else. When we think about the models we employ, they may have unfair outcomes.

Detecting that and correcting for it is one of the major tasks today as we use models more and more. Formulating fairness quantitatively in a model or machine learning setting is difficult, but typically one should start with the laws that are already out there, which every bank has to adhere to.
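
To make "formulating fairness quantitatively" concrete, here is a minimal sketch of one common check, the demographic parity gap between approval rates for two groups. The groups, decisions, and threshold mentioned in the comment are hypothetical illustrations, not anything specified in the interview; real programmes would use the metrics their legal and compliance teams define.

```python
# Minimal sketch of a demographic parity check: compare approval rates
# across two (hypothetical) applicant groups and report the gap.

def approval_rate(decisions):
    """Share of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model decisions (1 = approve, 0 = decline) for each group.
group_a_decisions = [1, 1, 0, 1, 1, 0, 1, 1]
group_b_decisions = [1, 0, 0, 0, 1, 0, 0, 1]

gap = demographic_parity_gap(group_a_decisions, group_b_decisions)
print(f"Demographic parity gap: {gap:.2f}")  # flag for review if it exceeds a policy threshold
```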

Garritt: What do you consider to be the contributing factors for bias in these models?

Oden: Typically, these models are trained on data that has been used to make these decisions, in some cases for years and in others for decades. Unfortunately, people are consciously or subconsciously prone to bias in their decision-making process, so it shows up in the data.

As an example, if a credit decision is made by an individual who tends to give credit to people they know, like, or feel familiar with, then those decisions become the training data for the credit decisioning models going forward. When that process is automated, you have essentially automated the prejudice, or bias, in the model.
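
The point about automating prejudice can be illustrated with a small, entirely synthetic sketch: historical approvals that partly depend on a "familiarity" factor unrelated to creditworthiness become the training labels, and a model fitted to them gives that factor its own weight. The data, feature names, and use of scikit-learn's LogisticRegression here are assumptions for illustration only, not the models discussed in the interview.

```python
# Synthetic sketch: biased historical decisions become biased training data.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

X, y = [], []
for _ in range(2000):
    credit_score = random.gauss(650, 50)   # genuine risk signal
    familiar = random.randint(0, 1)        # proxy for "people the lender knows or likes"
    # Historical decision: the score matters, but familiarity tips the scale.
    approved = int(credit_score > 640 or (familiar == 1 and credit_score > 600))
    X.append([credit_score, familiar])
    y.append(approved)

model = LogisticRegression().fit(X, y)

# The automated model now assigns weight to familiarity, even though it says
# nothing about ability to repay: the historical bias has been encoded.
print("Coefficient on familiarity:", round(model.coef_[0][1], 3))
```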

Read the full interview
