NIST releases privacy framework, defines “AI”

The National Institute of Standards and Technology (NIST) has released Version 1.0 of the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management, developed in collaboration with a range of stakeholders. It provides a useful set of privacy protection strategies for organizations that wish to improve their approach to using and protecting personal data. The publication also provides clarification about privacy risk management concepts and the relationship between the Privacy Framework and NIST’s Cybersecurity Framework.

“Privacy is more important than ever in today’s digital age,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan. “The strong support the Privacy Framework’s development has already received demonstrates the critical need for tools to help organizations build products and services providing real value, while protecting people’s privacy.”

In a separate announcement, NIST said it was hammering out a narrow working definition of AI specifically to meet its obligations under the Executive Order on Maintaining American Leadership in Artificial Intelligence, which makes the agency responsible for providing guidance to the federal government on how it should engage in the standards arena for AI.

“We acknowledge that there are multiple definitions out there, but from our perspective, an AI system is one that exhibits reasoning and performs some sort of automated decision-making without the interference of a human,” said NIST Information Technology Lab director Chuck Romine in a NIST blog posting.

He added: “I think one of the things that is going to be necessary for us is pulling out the desirable characteristics like usability, interoperability, resilience, security, privacy and all the things that will require a certain amount of care to build into the systems, and get innovators to start incorporating them. Guidance and standards can help to do that. I think there’s general agreement that guidance will be needed for interoperability, security, reliability, robustness, these characteristics that we want AI systems to exhibit if they’re going to be trusted.”
