Meet the Most Nimble-Fingered Robot Yet

Ken Goldberg’s AUTOLAB Research on Robot Grasping featured in MIT Tech Review

Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects. What’s stunning, though, is that the robot got so good at grasping by working with virtual objects.

The robot learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers trained a large deep neural network on these examples, then connected it to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

The robot is a significant step up from anything developed previously, the researchers say. In tests, when it was more than 50 percent confident it could grasp an object, it succeeded in lifting the item and shaking it without dropping it 98 percent of the time. When the robot was unsure, it would poke the object to figure out a better grasp, after which it lifted the object successfully 99 percent of the time.
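The decision rule described above — lift when confident, probe when not — can be sketched as a few lines of Python. Everything here (function names, the scoring callback, the exact threshold) is illustrative, not taken from the researchers' code:

```python
# Hypothetical sketch of the confidence-threshold grasp policy described
# in the article. `predict_success` stands in for the trained network's
# estimated probability that a candidate grasp will succeed.

def choose_action(grasp_candidates, predict_success, threshold=0.5):
    """Pick the best-scoring grasp; if confidence is low, probe first."""
    best = max(grasp_candidates, key=predict_success)
    if predict_success(best) > threshold:
        return ("lift", best)
    # Below the confidence threshold: nudge the object to expose a
    # better grasp, then re-plan on the next sensor reading.
    return ("poke", best)
```

In use, a real system would re-run the sensor pipeline after a "poke" and call the policy again with fresh candidates.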

The work shows how new approaches to robot learning, combined with the ability for robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes (see “10 Breakthrough Technologies 2017: Robots That Teach Each Other”). It is described in a paper to be published at a major robotics conference held this July.

Many researchers are working on ways for robots to learn to grasp and manipulate things by practicing over and over, but the process is very time-consuming. The new robot learns without needing to practice. “We’re producing better results but without that kind of experimentation,” says Ken Goldberg, a professor at UC Berkeley who led the work. “We’re very excited about this.”

Instead of practicing in the real world, the robot learned from a data set of more than a thousand objects that includes each object’s 3-D shape, visual appearance, and the physics of grasping it. This data set was used to train the robot’s deep-learning system. “We can generate sufficient training data for deep neural networks in a day or so instead of running months of physical trials on a real robot,” says Jeff Mahler, a postdoctoral researcher who worked on the project.
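The idea of generating training data without physical trials can be illustrated with a toy sketch: sample candidate grasps on each 3-D model and label them with an analytic quality score rather than a real-world attempt. All names, parameters, and the quality threshold below are hypothetical, standing in for whatever models and metrics the researchers actually used:

```python
# Toy illustration of building a synthetic grasp dataset: sample grasp
# poses on object models and label each one with a computed quality
# score instead of running a physical trial. Purely illustrative.
import random

def synthetic_dataset(objects, grasps_per_object, quality):
    """Return (object, grasp, label) training triples."""
    data = []
    for obj in objects:
        for _ in range(grasps_per_object):
            # A "grasp" here is just a random (position, angle) pair.
            grasp = (random.uniform(0.0, 1.0), random.uniform(0.0, 6.28))
            # Label positive if the analytic metric clears a threshold.
            label = 1 if quality(obj, grasp) > 0.002 else 0
            data.append((obj, grasp, label))
    return data
```

The payoff is throughput: a loop like this can label millions of grasps in hours on ordinary hardware, whereas each physical trial takes seconds of robot time.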

Goldberg and colleagues plan to release the data set they created. Public data sets have been important for advancing the state of the art in computer vision, and now new 3-D data sets promise to help robots advance.

Stefanie Tellex, an assistant professor at Brown University who specializes in robot learning, describes the research as “a big deal,” noting that it could accelerate laborious machine-learning approaches.

“It’s hard to collect large data sets of robotic data,” Tellex says. “This paper is exciting because it shows that a simulated data set can be used to train a model for grasping. And this model translates to real successes on a physical robot.”

Advances in control algorithms and machine-learning approaches, together with new hardware, are steadily building a foundation on which a new generation of robots will operate. These systems will be able to perform a much wider range of everyday tasks. More nimble-fingered machines are, in fact, already taking on manual labor that has long remained out of reach (see “A Robot with Its Head in the Cloud Tackles Warehouse Picking”).

Russ Tedrake, an MIT professor who works on robots, says a number of research groups are making progress on much more capable dexterous robots. He adds that the UC Berkeley work is impressive because it combines newer machine-learning methods with more traditional approaches that involve reasoning over the shape of an object.

The emergence of more dexterous robots could have significant economic implications, too. The robots found in factories today are remarkably precise and tireless, but incredibly clumsy when faced with an unfamiliar object. A number of companies, including Amazon, are using robots in warehouses, but so far only for moving products around, not for picking objects for orders.

The UC Berkeley researchers collaborated with Juan Aparicio, a research group head at Siemens. The German company is interested in commercializing cloud robotics, among other connected manufacturing technologies.

Aparicio says the research is exciting because the reliability of the arm offers a clear path toward commercialization.

Developments in machine dexterity may also be significant for the advancement of artificial intelligence. Manual dexterity played a critical role in the evolution of human intelligence, forming a virtuous feedback loop with sharper vision and increasing brain power. The ability to manipulate real objects more effectively seems certain to play a role in the evolution of artificial intelligence, too.

By Will Knight, Senior Editor, AI
MIT Technology Review

Originally published May 25, 2017 in MIT Technology Review

UC Berkeley’s AUTOLAB, directed by Professor Ken Goldberg, is a world-renowned center for research in robotics and automation sciences, with 30+ postdocs, PhD students, and undergraduates pursuing projects in Cloud Robotics, Deep Reinforcement Learning, Learning from Demonstrations, Computer Assisted Surgery, Automated Manufacturing, and New Media Artforms. Sponsors include: NSF, USDA, DARPA, Google, Siemens, Intuitive Surgical, Autodesk, Samsung, Cisco, IBM, and CloudMinds. AUTOLAB Research Papers: http://goldberg.berkeley.edu/pubs/