This project will develop computer vision techniques to assist people whose movement is hampered by disability. For example, blind people may have difficulty moving through and finding their way in a previously unexplored environment due to a lack of visual cues. The project addresses this by creating technology that promotes semantic spatial awareness in man-made environments through direct access to text. Most research in autonomous mapping uses a homogeneous approach with only one type of spatial representation (metric, topological, or appearance-based). However, humans routinely use multiple spatial representations when doing things like giving directions; for example, “Go straight for half a mile, then turn left at the statue.” Therefore, a system that uses multiple ways to represent space may be useful for guiding humans and robots alike. The work is a collaboration between UC Merced and UC Santa Cruz, encompassing computer vision, robotics, and user studies.
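To make the idea of combining spatial representations concrete, the sketch below shows one possible data structure for a hybrid map whose place nodes carry a metric pose, an appearance descriptor, and text read from nearby signs, with edges forming the topological layer. This is an illustrative sketch only; the names (PlaceNode, HybridMap, places_with_text) are hypothetical and do not describe the project's actual system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one "place" node of a hybrid map. Each node carries
# a metric pose, an appearance descriptor, and any text read from nearby
# signs; edges between places form the topological layer.
@dataclass
class PlaceNode:
    node_id: int
    pose_xy_theta: tuple                               # metric layer: (x, y, heading) in a local frame
    appearance: list = field(default_factory=list)     # appearance layer: e.g. an image descriptor
    sign_text: list = field(default_factory=list)      # semantic layer: strings read via OCR

@dataclass
class HybridMap:
    nodes: dict = field(default_factory=dict)          # node_id -> PlaceNode
    edges: dict = field(default_factory=dict)          # node_id -> set of neighboring node_ids

    def add_place(self, node: PlaceNode) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())

    def connect(self, a: int, b: int) -> None:
        # Topological layer: an undirected traversability link between places.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def places_with_text(self, query: str):
        # Semantic lookup: find places whose signs mention the query string,
        # supporting direction-style instructions such as
        # "turn left at the 'Room 210' sign".
        q = query.lower()
        return [n for n in self.nodes.values()
                if any(q in t.lower() for t in n.sign_text)]


if __name__ == "__main__":
    m = HybridMap()
    m.add_place(PlaceNode(0, (0.0, 0.0, 0.0), sign_text=["Main Entrance"]))
    m.add_place(PlaceNode(1, (12.5, 0.0, 1.57), sign_text=["Room 210", "Exit"]))
    m.connect(0, 1)
    print([n.node_id for n in m.places_with_text("room 210")])  # -> [1]
```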
Related Projects
ACTIVATE
Research partners from Health Tequity, CITRIS Health, MITRE Corp., UC Davis and UC Merced worked with health care teams at Livingston Community Health, as well as patients and community members, to identify digital health barriers and co-create new ways to address them.
Beyond the Brink
Drought, climate change, an aging infrastructure and growing population threaten the water California’s San Joaquin Valley uses to supply most of the nation’s produce and […]
UC WATER Security and Sustainability Research Initiative: Innovation for a Resilient Water Future
The UC WATER Security and Sustainability Research Initiative is focused on strategic research to build the knowledge base for better water resources management by applying: […]