Towards Semantic Spatial Awareness: Robust Text Spotting for Assistive Technology Applications

This project will develop computer vision techniques to assist people whose movement is hampered by disability. For example, blind people may have difficulty moving through and finding their way in a previously unexplored environment due to a lack of visual cues. The project addresses this by creating technology that promotes semantic spatial awareness in man-made environments through direct access to text. Most research in autonomous mapping uses a homogeneous approach with only one type of spatial representation (metric, topological, or appearance-based). Humans, however, routinely combine multiple spatial representations when giving directions; for example, "Go straight for half a mile, then turn left at the statue." A system that combines multiple ways of representing space may therefore be useful for guiding humans and robots alike. The work is a collaboration between UC Merced and UC Santa Cruz, encompassing computer vision, robotics, and user studies.
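To make the idea of combining spatial representations concrete, the sketch below shows one possible way a map could carry metric positions, topological connectivity, and text-based landmarks at once, and turn a route into human-style directions. This is purely illustrative: the project's actual data structures and algorithms are not described in the text, and all names here (`HybridMap`, `Place`, `directions`) are hypothetical.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str        # semantic label, e.g. text read from a sign
    pose: tuple      # metric position (x, y) in meters
    # topological edges: neighbor name -> metric distance
    neighbors: dict = field(default_factory=dict)

class HybridMap:
    """Toy map combining metric, topological, and text-based representations."""

    def __init__(self):
        self.places = {}

    def add_place(self, name, pose):
        self.places[name] = Place(name, pose)

    def connect(self, a, b):
        # Topological edge weighted by the metric distance between places.
        pa, pb = self.places[a].pose, self.places[b].pose
        d = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
        self.places[a].neighbors[b] = d
        self.places[b].neighbors[a] = d

    def directions(self, start, goal):
        # Breadth-first search over the topological graph, then phrase each
        # hop as an instruction using metric distance and the text landmark.
        prev = {start: None}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            if cur == goal:
                break
            for nxt in self.places[cur].neighbors:
                if nxt not in prev:
                    prev[nxt] = cur
                    queue.append(nxt)
        if goal not in prev:
            return []
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = prev[node]
        path.reverse()
        return [
            f"Go {self.places[a].neighbors[b]:.0f} m, then stop at '{b}'"
            for a, b in zip(path, path[1:])
        ]
```

For instance, a map with an entrance connected to a statue 800 m away would yield the single instruction `Go 800 m, then stop at 'statue'`, mirroring how people mix distances with named landmarks when giving directions.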