Since Rosie the Robot debuted on television’s “The Jetsons” in 1962, the futuristic image of a personal robot autonomously operating in a human home has captivated the public imagination. Yet, while robots have become an integral part of modern industrial production, their adoption in less structured and less controlled environments has been slow. The Personal Robotics Project, led by UC Berkeley Professor Pieter Abbeel, focuses on building the artificial intelligence needed for reliable robotic perception and manipulation in such unstructured environments, with the ultimate goal of enabling robots to serve in our homes.
In highly structured settings, modern robots can be scripted to perform a wide variety of tasks with precision and repeatability. Outside of carefully controlled settings, however, robotic capabilities are far more limited. Indeed, for a robot, "simply" grasping a modest variety of previously unseen objects in real-world cluttered environments is a non-trivial task.
In the Personal Robotics Project, Professor Abbeel and his students have started to tackle the chore of doing laundry: starting with a basket of dirty laundry and ending with laundry articles nicely folded, ironed, and hung or stacked away. Their work fits into a larger effort by eleven institutions, each advancing the state of the art in personal robotics while working with the same platform, the Willow Garage PR2 robot (shown in the videos below).
Robotic laundry requires dealing with non-rigid objects, which poses a number of perceptual and manipulation challenges. Perhaps the biggest challenge facing robotic laundry manipulation is bringing a clothing article into a known configuration from an arbitrary initial state. To develop the necessary perceptual capabilities, e.g., determining whether a sock is inside out, or recognizing a t-shirt and how it is laid out (or bunched up), they use a combination of physics-based models of clothing and machine learning. Their machine learning algorithms enable "training" the robot by presenting examples. For example, to teach the robot whether a sock is inside out, they present it with 100 pictures of right-side-out socks and 100 pictures of inside-out socks. Their machine learning algorithm then infers what distinguishes a digital image of the inside of a sock from a digital image of the outside of a sock.

The physics-based models provide the robot with an internal simulation of how a particular clothing article might behave. For example, to verify whether it might be holding a towel by two diagonally opposite corners, the robot can simulate how the towel would hang if that were the case. It can then check how consistent this simulation is with the digital image it has taken and update the probability of that hypothesis accordingly.
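To make the sock example concrete, here is a minimal sketch of training a binary classifier from labeled examples, in the spirit described above. It is not the project's actual pipeline: the two "texture" features are invented stand-ins for whatever image features the real system extracts, and scikit-learn's logistic regression is just one reasonable choice of learning algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def toy_sock_features(mean_fuzziness, n):
    # Invented stand-in for features extracted from sock images: each sock
    # is summarized by two numbers loosely encoding surface texture.
    return rng.normal(loc=[mean_fuzziness, 1.0 - mean_fuzziness],
                      scale=0.15, size=(n, 2))

# 100 labeled examples of each class, as in the article.
X = np.vstack([toy_sock_features(0.3, 100),     # right-side-out socks
               toy_sock_features(0.7, 100)])    # inside-out socks
y = np.array([0] * 100 + [1] * 100)             # 0 = right-side-out, 1 = inside-out

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(toy_sock_features(0.68, 1)))  # -> [1], i.e. "inside out"
```

The towel-hanging check is, at heart, a Bayesian update over grasp hypotheses. The sketch below assumes, purely for illustration, that each hypothesis is summarized by a simulated silhouette (the towel's width at a few sampled heights) compared against the observed silhouette under Gaussian pixel noise; the numbers are made up.

```python
import numpy as np

def grasp_posterior(observed, simulated, priors, sigma=5.0):
    """Bayes' rule over grasp hypotheses: P(H | image) is proportional
    to P(image | H) * P(H), normalized across the hypotheses."""
    log_lik = np.array([-np.sum((observed - s) ** 2) / (2 * sigma ** 2)
                        for s in simulated])
    unnorm = np.exp(log_lik - log_lik.max()) * np.asarray(priors)
    return unnorm / unnorm.sum()

observed = np.array([40.0, 42.0, 45.0, 50.0, 58.0])        # measured widths (px)
sims = [np.array([41.0, 43.0, 44.0, 51.0, 57.0]),          # "diagonally opposite corners"
        np.array([60.0, 55.0, 50.0, 45.0, 40.0])]          # alternative grasp
print(grasp_posterior(observed, sims, priors=[0.5, 0.5]))  # close to [1.0, 0.0]
```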
The videos, posted further below on this page, showcase some of their results. In each video the robot operates autonomously. The first video shows the robot faced with a heap of towels it has never "seen" before. The towels are of different sizes, colors, and materials. The robot picks one up and turns it slowly, first with one arm and then with the other. It uses a pair of high-resolution cameras to scan the towel and estimate its shape. Once it finds two adjacent corners, it can start folding. On a flat surface, it completes the folds, smoothing the towel after each fold and making a neat stack. The second video shows the robot faced with a set of socks, each of which may be inside out. The robot starts by inspecting the socks one by one. When it finds an inside-out sock, it flips it right-side out. Once all socks are right-side out, it looks for matching pairs and bunches them together.
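The sock routine in the second video can be read as a simple high-level loop: inspect, flip if needed, then pair by appearance. The sketch below is a hypothetical illustration of that control flow, not the project's actual code; the sock representation, the similarity function, and the confidence threshold are all invented for the example.

```python
from itertools import combinations

def sort_and_pair_socks(socks, is_inside_out, flip, similarity, threshold=0.8):
    # Step 1: flip any sock detected as inside out.
    socks = [flip(s) if is_inside_out(s) else s for s in socks]
    # Step 2: greedily pair the two most similar remaining socks.
    pairs, remaining = [], list(socks)
    while len(remaining) >= 2:
        a, b = max(combinations(remaining, 2), key=lambda p: similarity(*p))
        if similarity(a, b) < threshold:
            break  # no confident match left; leave the rest unpaired
        pairs.append((a, b))
        remaining.remove(a)
        remaining.remove(b)
    return pairs, remaining

# Toy usage, with dictionaries standing in for perceived socks.
socks = [
    {"color": "red",  "inside_out": True},
    {"color": "red",  "inside_out": False},
    {"color": "blue", "inside_out": False},
]
pairs, leftovers = sort_and_pair_socks(
    socks,
    is_inside_out=lambda s: s["inside_out"],
    flip=lambda s: {**s, "inside_out": False},
    similarity=lambda a, b: 1.0 if a["color"] == b["color"] else 0.0,
)
print(pairs)      # the two red socks, both now right-side out
print(leftovers)  # the unmatched blue sock
```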
In ongoing work, Abbeel and his students are extending their results from towels and socks to a wider variety of clothing articles, as well as enabling the robot to operate a washer and dryer.
** video of folding 5 towels **
** video of sorting socks **