As artificial intelligence (AI) is used in ways that will have increasingly consequential impacts on people’s lives, there is an urgent need for policymakers and industry leaders to align around best practices for mitigating the potential risks of AI bias.
BSA, a software industry membership and advocacy alliance, is calling on governments to pass legislation requiring private sector companies to perform impact assessments on high-risk uses of AI technologies. To aid governments in this effort, BSA today released Confronting Bias: BSA's Framework to Build Trust in AI.
The framework details how organizations can perform impact assessments to identify and then mitigate risks of bias that may emerge throughout an AI system's lifecycle. It includes more than 50 diagnostic statements specifying actions companies can take across the design, development, and deployment of an AI service.
Victoria Espinel, president and CEO of BSA, said in a statement: “Policymakers in the European Union have already begun work on legislation to regulate AI, and we will continue to work with the EU, with leaders in the US, and with policymakers around the globe to build the right approach and pass it into law.”
“AI has the potential to reshape industries and improve quality of life around the globe. But, in the absence of key safeguards, AI can also create feedback loops that may entrench and exacerbate historical inequities,” said Christian Troncoso, senior director of policy at BSA, in a statement. “Like cybersecurity and privacy, managing the risks of AI bias requires an organizational commitment that extends throughout a system’s lifecycle.”