Scrolling through content selected “just for you” on social media feeds seems like a harmless pastime. But new research shows that some algorithms go from recommending content that matches our preferences to recommending content that shapes our preferences in potentially harmful ways.
A study by a UC Berkeley-led team revealed that certain recommender systems try to manipulate users’ preferences, beliefs, moods and psychological states. In response, the researchers proposed a way for companies to choose algorithms that more closely follow a user’s natural preference evolution.
The work was conducted at the laboratory of CITRIS PI Anca Dragan, with Micah Carroll, a Ph.D. student at the Berkeley Artificial Intelligence Research (BAIR) Lab; CITRIS PI Stuart Russell; and Dylan Hadfield-Menell, now an assistant professor at MIT.