Farid addresses rise of deepfakes in Nature 

An overlay of synthetic faces (left) shows more regularity than one composed of real images (right), which blurs into no distinct facial features. Image courtesy of Hany Farid
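
The comparison in the figure can, in principle, be reproduced with a simple pixel-wise average: synthetic faces tend to place eyes and other landmarks in highly regular positions, so their mean image stays sharp, while a mean of real faces blurs out. Below is a minimal Python sketch of that idea; the directory names, the `mean_overlay` helper, and the 256×256 size are illustrative assumptions, not taken from Farid's work.

```python
# Sketch of the overlay comparison: average a stack of face images that have
# been resized to a common shape. This assumes directories of pre-cropped,
# roughly aligned PNG face images already exist locally.

from pathlib import Path

import numpy as np
from PIL import Image


def mean_overlay(image_dir: str, size: tuple[int, int] = (256, 256)) -> Image.Image:
    """Return the pixel-wise mean of all PNG images in image_dir as a grayscale image."""
    paths = sorted(Path(image_dir).glob("*.png"))
    # Convert each image to grayscale, resize to a common shape, stack into one array.
    stack = np.stack(
        [np.asarray(Image.open(p).convert("L").resize(size), dtype=np.float64) for p in paths]
    )
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))


if __name__ == "__main__":
    # Hypothetical directories of synthetic and real face crops.
    mean_overlay("synthetic_faces").save("overlay_synthetic.png")  # stays relatively sharp
    mean_overlay("real_faces").save("overlay_real.png")  # blurs toward featurelessness
```
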

In a recent feature story in the scientific journal Nature, CITRIS PI and UC Berkeley professor Hany Farid commented on the proliferation — and danger — of deepfakes: falsified images and videos generated by “deep learning” AI that mimics the learning process of the human brain to create realistic media.

Farid and a number of other researchers are focusing on a combination of AI-based tools and human insight to expose fraudulent media and keep pace with deepfake developers, who have managed to make their falsified images harder to detect within a matter of weeks or months.

On TecHype, the CITRIS Policy Lab podcast and video series about emerging technology, CITRIS’s Brandie Nonnecke spoke with Farid about strategies that can help keep people safer from deepfakes. Watch the episode now.

According to Farid, although it will be impossible to catch every deepfake, thorough detection methods have the potential to largely mitigate the damage they cause.