Robots Rush In: In Search-and-Rescue Operations Teamwork is Everything

by Gordy Slack

Two P3AT robots equipped with SICK laser rangefinders and sonar. Robots like these could be sent into emergency situations to help police assess the situation.

At the scene of a disaster—whether natural or man-made—knowledge is power. And not knowing what the situation is can leave emergency responders powerless, as the SWAT teams and police who waited outside luxury hotels in Mumbai in November can attest. Moving forward, robots will provide vital knowledge about what is going on inside earthquake-damaged buildings, structures on fire, and buildings under siege, and CITRIS researchers are working to further this field.

The first serious forays into search-and-rescue robotics began shortly after the 1995 earthquake in Kobe, Japan. More than 200,000 buildings collapsed in that tragedy and thousands of people died instantly. But even more died in the aftermath, trapped undiscovered for hours or days inside quake-damaged buildings.

“After a disaster like that, getting to people quickly is the first responder’s biggest challenge,” says Stefano Carpin, an assistant professor of computer science at UC Merced and director of its robotics lab. “The number of people you can rescue from destroyed buildings drops dramatically after 72 hours; you need to do whatever you can to locate survivors before then.”

But rushing into damaged buildings is risky, endangering not only rescue workers but also the victims they are working to save. Sending in robots equipped with various kinds of sensors to do reconnaissance is much safer; the robots can search for signs of life and report back to waiting operators.

“The idea is not to replace first responders with robots, but to collect as much information as possible so that first responders can do their jobs better without being exposed to unnecessary risks," says Carpin.

The current generation of robots is mainly tele-operated, which means each operator is dedicated to a single robot; if there are thousands of buildings to examine, progress is very slow. Robots controlled by a human outside the site also need to stay in constant communication with their operators, and in disaster environments radio signals are easily lost. After an earthquake-induced collapse, there may be no recognizable landmarks inside the damaged structure to guide a robot and keep it within signal range. Tethers for power or data, on the other hand, are cumbersome and easily snagged.

Professor Stefano Carpin develops teams of intelligent robots (overseen by a single human operator) that can work together and keep track of both their own locations and each other’s.

Carpin has a solution. “Almost all the research we do at the Merced lab deals with having multiple robots cooperating for a shared goal,” he says. He and his colleagues develop teams of intelligent robots (overseen by a single human operator) that can work together and keep track of both their own locations and each other’s.

First responders can cover a big area much faster if the robots can coordinate their efforts and merge the information they collect. The robots can use each other as navigational points when they do not have contact with their human commander. This will allow a single, well-trained operator to turn the team of robots loose and pay close attention only to the ones that find signs of life or death. Meanwhile, the robots will share information with each other, compiling a model of the disaster environment. It turns out that putting together a single geographical model from multiple moving sources is no small trick and requires complex mathematical algorithms, subtle programming, and advanced engineering.
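The map-merging step described above can be illustrated with a toy sketch. This is not Carpin's actual system; it assumes the robots' grids are already aligned to a common frame (the hard part his team works on) and simply combines per-cell evidence. Each cell holds the log-odds that it is occupied, so summing values from independent robots is a standard Bayesian update:

```python
# Minimal sketch: merging evidence from multiple robots into one
# occupancy grid. Cell values are log-odds of "occupied"; summing
# log-odds from independent observations is the standard Bayesian
# update. Relative robot poses and grid alignment are assumed known
# here -- estimating that alignment is the genuinely hard problem.

def merge_grids(grids):
    """Sum per-cell log-odds from several robots' aligned grids."""
    rows, cols = len(grids[0]), len(grids[0][0])
    merged = [[0.0] * cols for _ in range(rows)]
    for grid in grids:
        for r in range(rows):
            for c in range(cols):
                merged[r][c] += grid[r][c]
    return merged

# Two robots observed the same 2x2 area; positive = evidence of a wall,
# negative = evidence of free space.
robot_a = [[0.9, -0.4], [0.0, 0.9]]
robot_b = [[0.8,  0.1], [-0.4, 0.7]]
combined = merge_grids([robot_a, robot_b])
```

Where the two robots agree (both saw a wall), the merged cell's confidence is higher than either robot's alone, which is exactly the payoff of pooling their reports.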

Carpin is planning to use new types of sensors, still under development elsewhere, that can be put in place quickly in a disaster environment. Like Hansel and Gretel, the robots will distribute the sensors as they move into a new area, establishing a grid that they can refer to for the rest of their mission.

However, even with the grid in place, coordinating the robots and building a model of the scene from their various reports is a big challenge. “It is a multidimensional position problem,” says Carpin. “You have many very different constraints that you must satisfy at the same time. That is where the tricky part comes…and also the excitement.”

When robots are put into a big new environment and asked to explore it, it makes no sense for two robots to go in the same direction; they should distribute themselves throughout the place efficiently. But if the site is a damaged building, say, with fallen walls and dust everywhere, the robots can only learn what their environment is like as they go. By working as a team pursuing a shared goal rather than as selfish individuals, the robots will build a model of the place and discover who and what is in it orders of magnitude faster than current methods allow, says Carpin.
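One common way to keep robots from heading the same direction is frontier-based assignment: each robot claims a distinct "frontier" (a boundary between explored and unknown space). The sketch below is an illustrative greedy version, not Carpin's algorithm; robot names, coordinates, and the nearest-first rule are all assumptions for the example:

```python
# Toy sketch of coordinated exploration: each robot greedily claims
# the nearest frontier that no teammate has taken, so the team spreads
# out instead of duplicating effort. Illustrative only.

def assign_frontiers(robots, frontiers):
    """Match each robot to its nearest unclaimed frontier (greedy)."""
    remaining = list(frontiers)
    assignment = {}
    for name, (rx, ry) in robots.items():
        if not remaining:
            break
        # Squared Euclidean distance is enough for picking a minimum.
        best = min(remaining, key=lambda f: (f[0] - rx) ** 2 + (f[1] - ry) ** 2)
        assignment[name] = best
        remaining.remove(best)  # claimed: teammates must look elsewhere
    return assignment

robots = {"r1": (0, 0), "r2": (10, 0)}      # current robot positions
frontiers = [(1, 1), (9, 2), (5, 5)]        # edges of the known map
plan = assign_frontiers(robots, frontiers)
```

Here `r1` takes the frontier near it and `r2` is pushed to a different one, even though a selfish robot might have chosen the same target as its teammate.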

It is difficult to compile and make sense of sometimes contradictory reports from different robots. On the other hand, if scientists can overcome the obstacles to integrating multiple points of view, the combined representation has much more value than the individual pieces would alone.

“It is a case where one plus one equals much more than two,” says Carpin.  “What made no sense in one robot's view can be clearly interpreted from a second robot's point of view or informed by a different kind of sensor. But all this needs to be done quickly, in real time, and the mathematical models to accommodate all this information are tricky.”

Some robots will be equipped with cameras to search for motion and detectable figures. Some of those cameras will be infrared, because disaster scenes are often dark, dust-covered, and colorless, and infrared cameras can identify human figures by their warmth. Other robots will have listening devices or instruments that “smell” CO2. All of this information will be gathered and then “glued together,” says Carpin, into a single model of the environment that the emergency response team can use to figure out whether to enter a building and how to do so with the least risk. And it is in the “gluing together” that Carpin’s Merced team excels.
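The "gluing together" of cues from different sensors can be sketched with a simple fusion rule. This is an illustration, not the team's method: each report is a (location, sensor, confidence) triple, and independent cues at the same spot are combined with the noisy-OR rule, so an infrared hit and a CO2 reading at the same place reinforce each other. The sensor names and confidence values are invented for the example:

```python
# Hedged sketch of multi-sensor fusion: combine independent detection
# confidences at each location with the noisy-OR rule,
#   P(person) = 1 - product(1 - p_i),
# so agreement between sensors raises belief, and a single weak cue
# stays weak. Sensor names and numbers are illustrative assumptions.

def _prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def fuse_reports(reports):
    """Noisy-OR fusion of (cell, sensor, confidence) reports per cell."""
    by_cell = {}
    for cell, _sensor, p in reports:
        by_cell.setdefault(cell, []).append(p)
    return {cell: 1 - _prod(1 - p for p in ps) for cell, ps in by_cell.items()}

reports = [
    ((3, 4), "infrared", 0.6),  # warm human-shaped figure
    ((3, 4), "co2",      0.5),  # elevated CO2, as from breathing
    ((8, 1), "audio",    0.3),  # faint tapping
]
beliefs = fuse_reports(reports)
```

The location seen by two sensors ends up with a much higher belief than either cue alone, which is the "one plus one equals much more than two" effect Carpin describes.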

Last year, in Suzhou, China, the team came in second place in the search-and-rescue portion of the prestigious international RoboCup competition, which pits different search-and-rescue systems against each other.

Patching all these pieces of the data puzzle together can make software pretty unstable, but the Merced team’s program, developed with Microsoft’s Robotics Studio software, was solid as a rock, says Carpin. The interface for the competition was developed by Carpin’s graduate student Ben Balaguer, whose research was also supported by Microsoft.

In addition to operating more autonomously and more cooperatively, says Carpin, the next generation of search-and-rescue robots will also be more user-friendly, a direct byproduct of the RoboCup competitions, where engineers and end users, such as firefighters, discuss real-life scenarios and needs. “The user interface has to be simple enough for firefighters to use without too much training,” says Carpin. “We have begun closing the link between the first responders and the scientists.”

UC Merced Robotics Lab: