NIST launches AI Safety Institute with looming US enforcement mandate

The US Department of Commerce’s National Institute of Standards and Technology (NIST) is calling for participants in a new consortium that will support the development of innovative methods for evaluating artificial intelligence (AI) systems, with the aim of improving the rapidly growing technology’s safety and trustworthiness. The consortium is a core element of the new NIST-led US AI Safety Institute.

The institute and its consortium are part of NIST’s response to the recently released US White House Executive Order (EO) on Safe, Secure, and Trustworthy Development and Use of AI. The EO tasks NIST with a number of responsibilities, including development of a companion resource to the AI Risk Management Framework (AI RMF) focused on generative AI, guidance on authenticating content created by humans and watermarking AI-generated content, a new initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, and creation of test environments for AI systems.

NIST will rely heavily on engagement with industry and relevant stakeholders in carrying out these assignments. The new institute and consortium are central to those efforts.

“The US AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio. “Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

“The institute’s collaborative research will strengthen the scientific underpinnings of AI measurement so that extraordinary innovations in artificial intelligence can benefit all people in a safe and equitable way,” said NIST’s Elham Tabassi, federal AI standards coordinator and a member of the National AI Research Resource Task Force, in a statement.

The expertise being called for falls in one or more of several specific areas, including AI metrology, responsible AI, AI system design and development, human-AI teaming and interaction, socio-technical methodologies, AI explainability and interpretability, and economic analysis. NIST is also seeking models, data and/or products that support and demonstrate pathways to safe and trustworthy AI systems through the AI RMF.

In early November, the Federal Artificial Intelligence Risk Management Act was introduced in the US Congress. The bill would require federal agencies to incorporate the NIST framework into their AI management efforts, helping to limit the risks associated with AI technology.
