NIST’s AI risk management framework should address key societal-scale risks


As the National Institute of Standards and Technology (NIST) develops its Artificial Intelligence Risk Management Framework, CITRIS Policy Lab Director Brandie Nonnecke and her UC Berkeley colleagues Anthony Barrett, Thomas Krendl Gilbert, Jessica Newman and Ifejesu Ogunleye have responded to NIST's open request for information. Their recommendations aim to minimize risks to democracy and security, risks to human rights and well-being, and the risk of global catastrophes.