Nonnecke weighs in on AI companies’ self-regulation 


On July 21, 2023, seven leading companies in artificial intelligence (AI) agreed to eight voluntary commitments for the safe development of AI. One year later, Brandie Nonnecke, director of the CITRIS Policy Lab, provides expert insight on their progress for MIT Technology Review.

Until recently, AI has been largely self-regulated by its developers, and these voluntary commitments were a first response to growing concerns about ensuring AI becomes an ethical and responsible addition to society. A White House executive order has since expanded the scope of the commitments, and one year in, both meaningful progress and areas for improvement have been identified. In the absence of more comprehensive federal legislation, Nonnecke says the U.S. should demand that companies follow through on their commitments and proceed with vigilance.

“These are still companies that are essentially writing the exam by which they are evaluated,” she cautions. “So we have to think carefully about whether or not they’re … verifying themselves in a way that is truly rigorous.” 

Read more from MIT Technology Review.