AI needs human-in-the-loop ethics. Is this a job for regulators?

Last year, the death of a former Soviet officer who averted nuclear disaster served as a reminder of why “human-in-the-loop” ethics for artificial intelligence and machine learning isn’t merely jargon.

In 1983, Stanislav Petrov was on duty at a Soviet nuclear early-warning center when the system reported incoming US missiles and he chose not to relay the launch warning up the chain of command. A later investigation found the alerts had been triggered by sunlight reflecting off clouds.

Admittedly, financial services aren’t quite as dramatic as nuclear warfare, but the consequences when machines get it wrong can still be dire, such as having your home automatically foreclosed on.

A couple of months ago, Wells Fargo self-disclosed in a filing that a previously undetected, five-year-old computer glitch had resulted in more than 600 customers being wrongly denied easier terms on their mortgages; of those, about 400 went on to lose their homes.

It’s exactly the kind of situation that regulators are not paying enough attention to, said Tom Doris, chief data scientist at Liquidnet.

He explained that the individuals who were short-changed did attempt to rectify the situation: “They were on the right side of the terms of agreement, and they couldn’t reach a human. It is high time that regulation stepped in and said, this is a standard duty of care, it is implied, and it’s simply not being delivered in most cases.”

News mentions of AI and ethics increased roughly 5,000% from 2014 to 2018, peaking at more than 250 mentions in Q3 2018, according to CB Insights data.
