In an effort to counter the often pernicious effect of biases in artificial intelligence (AI) that can damage people’s lives and public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases.
NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. That bias can be purposeful or inadvertent.
The approach the authors propose for managing bias involves a conscientious effort to identify and manage bias at different points in an AI system's lifecycle, from initial conception to design to release. The goal is to involve stakeholders from many groups both within and outside the technology sector, bringing in perspectives that traditionally have not been heard.
Historical, training data, and measurement biases are "baked into" the data used in the algorithmic models underlying automated decisions. Such biases may produce unjust outcomes for racial and ethnic minorities in multiple areas, including financial decisions.
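To make this concrete, here is a minimal, hypothetical sketch (not taken from the NIST report) of how historical bias baked into training data can surface as unequal positive-label rates across groups before any model is even trained. The group names, labels, and lending scenario are all illustrative assumptions.

```python
# Illustrative only: measure the positive-label rate per group in a
# training dataset. A skew here is a "baked-in" historical bias that a
# model trained on this data is likely to reproduce.

from collections import defaultdict

def positive_label_rates(records):
    """Return the fraction of positive labels per group.

    records: iterable of (group, label) pairs, with label in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical lending data: past approvals (label=1) skew
# toward group "A", so a model fit to it would inherit that skew.
data = ([("A", 1)] * 80 + [("A", 0)] * 20 +
        [("B", 1)] * 40 + [("B", 0)] * 60)
rates = positive_label_rates(data)
# rates == {"A": 0.8, "B": 0.4}
```

A check like this belongs in data auditing: it does not identify the cause of the skew (historical, sampling, or measurement), but it flags that the data itself, not just the model, carries the disparity.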
The approach for identifying and managing AI bias proposed in this report is adapted from current versions of the AI lifecycle and consists of three distinct stages, each with a presumed accompanying stakeholder group:
1. Pre-design: where the technology is devised, defined, and elaborated;
2. Design and development: where the technology is constructed;
3. Deployment: where the technology is used by, or applied to, various individuals or groups.
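One way to operationalize the lifecycle stages above is to attach a measurable bias check to each gate. As a hedged sketch (the report does not prescribe a specific metric), the example below uses the "four-fifths rule" heuristic from US employment-selection guidance as a design-and-development-stage check; the function names, groups, and threshold are illustrative assumptions.

```python
# Illustrative only: a selection-rate ratio check that could run before a
# candidate model advances from design and development to deployment.

def disparate_impact_ratio(selection_rates):
    """Ratio of the lowest to the highest group selection rate."""
    rates = list(selection_rates.values())
    return min(rates) / max(rates)

def passes_four_fifths(selection_rates, threshold=0.8):
    """Four-fifths heuristic: the lowest group's selection rate should be
    at least `threshold` times the highest group's rate."""
    return disparate_impact_ratio(selection_rates) >= threshold

# Hypothetical model output: 50% of group A selected but only 30% of B.
print(passes_four_fifths({"A": 0.50, "B": 0.30}))  # 0.6 < 0.8 -> False
```

Passing such a check is not evidence that a system is free of bias; it is one quantitative signal among the stakeholder-driven reviews the report calls for at each stage.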