The complete schedule for the spring semester is available online. All talks may be viewed at the following webviewing sites:
Webviewing at UC Davis: 1003 Kemper Hall
Webviewing at UC Merced: SE1 100
Webviewing at UC Santa Cruz: SOE E2 Building, Room 506
Abstract:
Automated 3D modeling of building interiors is useful in applications such as virtual reality and entertainment. In this talk, we develop an architecture and associated algorithms for the fast, automatic generation of photo-realistic 3D models of building interiors. The central challenge of such a problem is to localize the acquisition device while it is in motion, rather than collecting the data in a stop-and-go fashion. In the past, such acquisition devices have been placed on wheeled robots or human-operated pushcarts, which limits their use to planar environments. Our goal is to address the more difficult problem of localization and 3D modeling in more complex, non-planar environments such as staircases or caves. Thus, we propose a human-operated backpack system equipped with a suite of sensors, such as laser scanners, cameras, and inertial measurement units (IMUs), which are used both to localize the backpack and to build the 3D geometry and texture of the scene. The two main challenges in localizing a human-operated backpack system in indoor environments are (a) the lack of GPS, and (b) having to recover six-degree-of-freedom (DoF) pose information rather than the three DoF, namely x, y, and yaw, typically used in wheeled systems on planar floors. As it turns out, the small pitch, roll, and z variations of typical human gait cannot be ignored when recovering the full six-dimensional pose consisting of x, y, z, pitch, roll, and yaw. We propose a number of scan-matching and visual-odometry based localization algorithms and compare their performance against a high-end IMU sensor that serves as the ground truth. We also propose a number of 3D model generation approaches and show examples of resulting models for multiple floors of the electrical engineering building at U.C. Berkeley.
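To make two recurring ideas in the abstract concrete, the sketch below illustrates (a) how a six-DoF pose (x, y, z, pitch, roll, yaw) maps to a rigid-body transform, and (b) the closed-form least-squares alignment step that an ICP-style scan matcher repeats after re-matching points. This is a minimal illustration, not the speaker's actual pipeline; the function names (pose_to_matrix, align_scans), the rotation convention, and the use of numpy are all assumptions made here for clarity.

```python
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 rigid-body transform from a 6-DoF pose.

    Rotation convention (an assumption for this sketch):
    R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def align_scans(source, target):
    """One least-squares rigid alignment of two scans with known
    point correspondences (the SVD-based 'Kabsch' step inside an
    ICP-style scan matcher).

    source, target: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3) and t (3,) with target ~= source @ R.T + t.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.standard_normal((100, 3))  # synthetic "laser scan"
    # Small pitch/roll/z perturbations of the kind human gait induces.
    T = pose_to_matrix(0.5, 0.2, 0.05, 0.01, 0.02, 0.3)
    moved = scan @ T[:3, :3].T + T[:3, 3]
    R, t = align_scans(scan, moved)
    print(np.allclose(R, T[:3, :3]), np.allclose(t, T[:3, 3]))  # True True
```

A real scan matcher would alternate this alignment step with re-estimating point correspondences (e.g. nearest neighbors) until convergence; the abstract's point is that on non-planar terrain all six pose components, not just x, y, and yaw, must be estimated this way.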