Nonnecke co-authors policy brief on AI risk management standards


Brandie Nonnecke, director of the CITRIS Policy Lab, and UC Berkeley Center for Long-Term Cybersecurity researchers Anthony Barrett and Jessica Newman recently released three strategy recommendations to help AI developers set policy, safety, security and ethics standards in conjunction with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF). Their policy brief suggests that U.S. and E.U. policymakers:

  1. Require developers of advanced AI systems to adhere to appropriate AI risk management standards and guidance
  2. Ensure that general-purpose AI systems, foundation models and generative AI undergo sufficient prerelease evaluations to identify and mitigate risks of severe harm
  3. Ensure that AI regulators and enforcement agencies provide sufficient oversight and impose penalties for non-compliance

The researchers also provided recommendations in 2021, when NIST called for comments during its drafting of the AI RMF.