CITRIS affiliate talks AI responsibility in The Guardian


What happens when AI falls into the wrong hands?

That’s the question posed by David Evan Harris, an affiliated scholar with the CITRIS Policy Lab, in a recent op-ed in The Guardian about the rise of LLaMA (Large Language Model Meta AI), Meta’s large language model (LLM), which may become one of the most commonly used LLMs for building AI platforms.

Harris notes that Meta’s semi-open-source LLaMA and related LLMs can be run by anyone with sufficient computer hardware, meaning a sizeable population can operate the AI without any safety systems in place. He warns that such models could be used to promote misinformation by making fake content more convincing, or by writing scripts for deepfakes that synthesize videos of political candidates saying things they never said.

“When AI is in the hands of people who are deliberately and maliciously abusing it, the risks of misalignment increase exponentially, compounded even further as the capabilities of AI increase,” he writes.