Farid, Nonnecke suggest alternative to Section 230 for regulating harmful content


Under Section 230 of the Communications Decency Act, social media platforms are largely immune from liability for third-party or recommended content. In recent years, however, and as evidenced by Gonzalez v. Google, courts have struggled to distinguish between harmful content and the harmful design choices that promote it.

As Brandie Nonnecke, founding director of the CITRIS Policy Lab, and Hany Farid, CITRIS principal investigator and UC Berkeley professor in the Department of Electrical Engineering and Computer Sciences and the School of Information, argue in an essay in Wired, recommendation algorithms are negligently designed: they fail to differentiate between neutral and harmful content, and often favor dangerous content.

According to Nonnecke and Farid, cases such as A.M. v. Omegle and Lemmon v. Snap offer a more promising precedent: they hold platforms accountable for negligent design choices under a product liability framework. By centering litigation on product design rather than on the content itself, they argue, courts can hold platforms accountable without running up against Section 230 and can address the root of the problem.