Last year, the death of a former Soviet officer who averted nuclear disaster served as a reminder that “human-in-the-loop” ethics for artificial intelligence and machine learning isn’t merely jargon.
In 1983, Stanislav Petrov was on duty at a Soviet nuclear early-warning center when the system reported a “launch” of incoming US missiles. He judged the warning a false alarm and declined to pass it up the chain of command; an investigation later traced the alert to sunlight reflecting off high-altitude clouds.
Admittedly, financial services aren’t quite as dramatic as nuclear warfare, but the consequences when machines get it wrong can still be dire, such as having your home automatically foreclosed on.
A couple of months ago, Wells Fargo disclosed in a filing that a computer glitch, which went undetected for some five years, had wrongly denied more than 600 customers easier terms on their mortgages; of those, about 400 went on to lose their homes.
It’s exactly the kind of situation that regulators are not paying enough attention to, said Tom Doris, chief data scientist at Liquidnet.
He explained that the customers who were short-changed did try to rectify the situation: “They were on the right side of the terms of agreement, and they couldn’t reach a human. It is high time that regulation stepped in and said, this is a standard duty of care, it is implied, and it’s simply not being delivered in most cases.”