The skyrocketing popularity of social media and the advancement of consumer technology in recent years have come with a major pitfall: an unprecedented spread of misinformation. Digital fabrication is only getting easier, while user awareness and formal regulation remain in their preliminary stages. A network of UC Berkeley scholars, including CITRIS researchers Brandie Nonnecke and Hany Farid, is strategizing how to rein in disinformation across the digital landscape. While they believe misinformation is unavoidable, their efforts are guided by the conviction that much of its harm can still be mitigated.
Regulation is one route to a safer internet.
Farid argues that much of the disinformation problem stems from recommender systems programmed to optimize for engagement, which, intentionally or not, surfaces extreme content to keep users hooked. Bombardment with attention-grabbing news keeps us “clicking like a bunch of monkeys,” says Farid.
The challenge of finding a solution lies in balancing competing interests. Nonnecke, founding director of the CITRIS Policy Lab, stresses the importance of establishing regulation without discouraging innovation. “It needs to be … a partnership with the platforms,” Nonnecke says. “I think we go down a precarious path if we start to have the government telling platforms what content they can and cannot carry.”
Farid, Nonnecke and their colleagues discuss a variety of solutions, including rewriting recommender formulae to encourage constructive discourse and following in the footsteps of other countries that have already taken preventive action.
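To make the recommender critique concrete, the toy sketch below contrasts a feed ranked purely by predicted engagement with one whose score is discounted for content flagged as likely extreme or misleading. This is an illustration only, not the researchers' proposal or any platform's actual formula; the Post fields, the borderline_score signal and the penalty weight are all hypothetical.

```python
# Illustrative sketch only: contrasts pure engagement ranking with a
# rebalanced score. The "borderline_score" field and penalty weight are
# hypothetical assumptions, not drawn from any real platform.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g., estimated click/like probability
    borderline_score: float      # 0.0-1.0, estimated risk of extreme/misleading content


def rank_by_engagement(posts: list[Post]) -> list[Post]:
    """Pure engagement optimization: the pattern Farid critiques."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def rank_rebalanced(posts: list[Post], penalty: float = 2.0) -> list[Post]:
    """Engagement score discounted in proportion to the borderline risk."""
    def score(p: Post) -> float:
        return p.predicted_engagement * (1.0 - min(1.0, penalty * p.borderline_score))
    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("measured-analysis", predicted_engagement=0.40, borderline_score=0.05),
        Post("outrage-bait", predicted_engagement=0.70, borderline_score=0.45),
    ]
    print([p.post_id for p in rank_by_engagement(feed)])  # outrage-bait ranks first
    print([p.post_id for p in rank_rebalanced(feed)])     # measured-analysis ranks first
```

Under the assumed numbers, the same two posts swap places depending on which score is used, which is the tension the researchers describe: the ranking objective, not the content itself, decides what users see first.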