Abbeel, Goldberg discuss rise of generative AI with TechCrunch

Illustration of a humanoid robot dissolving into small particles.

CITRIS researchers and UC Berkeley professors Pieter Abbeel and Ken Goldberg recently discussed the rise of generative artificial intelligence (AI) in an interview with TechCrunch.

Regarding how generative AI fits into the broader world of robotics, Abbeel noted that “it’s all about chasing the long tail of edge cases.” Large neural networks, he explained, can keep absorbing information, which helps robots make sense of those edge cases.

Goldberg echoed these sentiments, citing the transformer network as the core concept behind AI in robotics. Because a transformer operates on sequences, it becomes very good at predicting the next item as it tries things out and learns over time.
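The mechanism Goldberg describes can be sketched in a few lines: a transformer attends over a sequence and produces a probability distribution for the next item. The sketch below uses random placeholder weights rather than a trained model, and all names (`predict_next`, the tiny vocabulary and dimensions) are illustrative assumptions, not code from either researcher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 8 possible tokens, 16-dimensional embeddings (arbitrary choices).
vocab_size, d_model = 8, 16
embed = rng.normal(size=(vocab_size, d_model))   # token embedding table
W_q = rng.normal(size=(d_model, d_model))        # query projection
W_k = rng.normal(size=(d_model, d_model))        # key projection
W_v = rng.normal(size=(d_model, d_model))        # value projection
W_out = rng.normal(size=(d_model, vocab_size))   # projection to vocabulary logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def predict_next(tokens):
    """One self-attention pass over the sequence, then next-token probabilities."""
    x = embed[tokens]                            # (seq_len, d_model)
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)          # scaled dot-product attention
    # Causal mask: each position may attend only to itself and earlier items.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf
    attn = softmax(scores) @ v                   # (seq_len, d_model)
    logits = attn[-1] @ W_out                    # last position predicts what comes next
    return softmax(logits)                       # distribution over the vocabulary

probs = predict_next([1, 4, 2, 7])
print(probs.shape, probs.sum())
```

Training would adjust the weight matrices so that the predicted distribution matches the sequences actually observed; the “learning over time” Goldberg mentions is repeated updates of exactly these weights.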

“The whole thing in robotics has traditionally been logic or task planning, and people who have to program it in somehow have to describe the world in terms of logical statements that somehow come after each other, and so forth,” Abbeel said. “The language models kind of seem to take care of it in a beautiful way. That’s unexpected to many people.”