NIST Releases AI Risk Management Framework

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (AI RMF 1.0) today, a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI technologies. The press release provides the background:

The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors. It is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from its potential harms.

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” said Deputy Commerce Secretary Don Graves. “It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.” 

Interesting. Christopher Prewitt, CTO of Inversion6, had this comment:

There is a significant amount of motivation to get ahead of Artificial Intelligence. As we know, governments are often slow to develop guidance, laws, and executive orders around technology. The focus of this technology, and frankly of all new technologies, is on the value they create, and the risks are often not identified or focused on. The NIST AI Risk Management Framework is attempting to provide a structure around the risk identification and management processes, so organizations can more safely develop new AI-based solutions.

I’ll be interested to see where this goes, as AI is very much a top-of-mind topic at present.
