At first, humility may seem like an unexpected concept to encounter in the world of AI. Most of the conversation around innovation in AI has focused on the exceptional things AI can do: making processes better, smarter, and faster than ever before, automating aspects of our day-to-day lives, and even shaping life-altering decisions. Developments over the past several decades have enabled the complex algorithms underlying AI to learn patterns from data and make increasingly accurate predictions.
In recent years, those technologies have become even more accessible. Automated machine learning has been democratizing the creation of AI systems. It’s no longer just data scientists with advanced backgrounds in programming, mathematics, and statistics who build AI systems, but also empowered business analysts with the raw data, the right questions, and the newest tools at their disposal.
As AI systems proliferate, it’s not enough just to be accurate anymore. The cutting edge must evolve beyond the hunt for pure accuracy to encompass the qualities and technologies that result in a more responsible, ethical, and trustworthy system. In the high-stakes and high-value settings where AI is being applied, we need systems whose behavior we can fully explain and whose outcomes, in specific instances, we can exercise control over.
In critical situations, the ability of an AI to demonstrate humility becomes paramount. Under what circumstances might an AI system be less confident in a prediction? What should be done when it is unsure? Instead of pretending all predictions are delivered to us with the same level of confidence, how do we quantify a prediction’s uncertainty and take that into consideration in our decision-making?
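One minimal sketch of what this might look like in practice: a prediction function that reports its confidence and abstains, deferring to a human reviewer, when that confidence falls below a chosen threshold. The function name, labels, and threshold below are illustrative assumptions, not a reference to any particular system.

```python
def predict_with_humility(probabilities, threshold=0.8):
    """Return (label, confidence) for the most likely class, or
    (None, confidence) to abstain when the model is too unsure,
    escalating the decision to a human reviewer."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None, confidence  # too uncertain: flag for human review
    return label, confidence

# A confident prediction is acted on directly...
print(predict_with_humility({"approve": 0.93, "deny": 0.07}))  # → ('approve', 0.93)
# ...while an uncertain one is surfaced for review instead.
print(predict_with_humility({"approve": 0.55, "deny": 0.45}))  # → (None, 0.55)
```

The key design choice is that uncertainty is a first-class output: rather than pretending every prediction carries the same weight, the system makes its confidence visible and routes low-confidence cases differently.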