NIST publishes draft AI risk management framework

To promote the development and use of AI technologies and systems that are trustworthy and responsible, the National Institute of Standards and Technology (NIST) is seeking public comment on an initial draft of the AI Risk Management Framework (AI RMF). The draft addresses risks in the design, development, use, and evaluation of AI systems.

The voluntary framework is intended to improve understanding and to help manage enterprise and societal risks related to AI systems throughout the AI lifecycle, offering guidance for the development and use of trustworthy and responsible AI. NIST is also developing a companion guide to the AI RMF with additional practical guidance.

NIST also released “Towards a Standard for Identifying and Managing Bias within Artificial Intelligence”, which offers background and guidance for addressing bias, one of the major sources of risk to the trustworthiness of AI. The publication explains that bias stems not only from the machine learning processes and data used to train AI software, but also from broader societal factors, both human and systemic institutional in nature, that influence how AI technology is developed and deployed.

Access the AI RMF draft
