Updated AI risk standards help identify, analyze potential harms


The first annual update to the AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models (Version 1.1) has been published by a team of researchers affiliated with the UC Berkeley Center for Long-Term Cybersecurity (CLTC) and the CITRIS Policy Lab.

Brandie Nonnecke, director of the CITRIS Policy Lab, co-authored the report alongside CLTC colleagues Anthony Barrett, Dan Hendrycks, Krystal Jackson, Nada Madkour, Evan R. Murphy, Jessica Newman and Deepika Raman.

GPAIS and foundation models have the potential to behave unpredictably and introduce novel societal risks if their early development is unregulated. Aimed at upstream developers of large-scale AI systems, the report provides guidelines to identify, analyze and mitigate risks associated with this emerging technology without compromising its benefits.

The updated report expands the profile's scope, covering a broad range of potential harms, including racial bias, environmental damage, destruction of critical infrastructure and degradation of democratic institutions.

Read more from the UC Berkeley Center for Long-Term Cybersecurity.