
Visual Odometry For GPS-Denied Flight And Mapping Using A Kinect

Project members

MIT: Albert Huang, Abraham Bachrach, Garrett Hemann and Nicholas Roy
Univ. of Washington: Peter Henry, Mike Krainin, and Dieter Fox
Intel Labs Seattle: Xiaofeng Ren

At MIT, we have developed a real-time visual odometry system that uses a Kinect to provide fast and accurate estimates of a vehicle's 3D trajectory. The system builds on recent advances in visual odometry research and combines ideas from several state-of-the-art algorithms. It aligns successive camera frames by matching features across images, and uses the Kinect-derived depth estimates to recover the camera's motion.
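To illustrate the flavor of this alignment step, here is a minimal sketch (not our actual implementation): it assumes matched feature locations have already been back-projected to 3D points using the depth image, and uses the standard closed-form SVD solution with RANSAC for outlier rejection. Function names and thresholds are illustrative.

    import numpy as np

    def estimate_rigid_transform(src_pts, dst_pts):
        """Least-squares rigid transform (R, t) mapping src_pts onto dst_pts.

        Both inputs are Nx3 arrays of matched feature locations that have
        been back-projected to 3D using the Kinect depth image. Uses the
        closed-form SVD solution (Horn/Kabsch).
        """
        src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
        H = (src_pts - src_c).T @ (dst_pts - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def ransac_motion(src, dst, iters=200, thresh=0.03, seed=0):
        """RANSAC over 3-point minimal samples; keeps the hypothesis with
        the most matches within `thresh` meters, then refits on inliers."""
        rng = np.random.default_rng(seed)
        best = np.zeros(len(src), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(src), size=3, replace=False)
            R, t = estimate_rigid_transform(src[idx], dst[idx])
            err = np.linalg.norm(src @ R.T + t - dst, axis=1)
            inliers = err < thresh
            if inliers.sum() > best.sum():
                best = inliers
        return estimate_rigid_transform(src[best], dst[best])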

We have integrated the visual odometry into our quadrotor system, which we previously developed for laser scan-matching-based control. The visual odometry runs in real time onboard the vehicle, and its estimates have low enough latency that we can control the quadrotor using only the Kinect and the onboard IMU, enabling fully autonomous 3D flight in unknown, GPS-denied environments. Notably, the system requires no motion capture setup or other external sensors -- all sensing and computation needed for local position control happens onboard the vehicle.
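As a rough sketch of how such estimates feed control (again illustrative, not the vehicle's actual control law or gains): frame-to-frame motions are chained into a global pose, on which an outer position loop can then act.

    import numpy as np

    class OdometryIntegrator:
        """Chains frame-to-frame motions into a global pose (4x4 transform).

        Pure dead reckoning: small per-frame errors accumulate, which is
        why the offboard SLAM corrections described below are needed for
        global consistency.
        """
        def __init__(self):
            self.pose = np.eye(4)    # world-from-camera

        def update(self, R, t):
            """(R, t) is the current frame expressed in the previous one."""
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, t
            self.pose = self.pose @ T
            return self.pose

    def hover_command(pose, vel, setpoint, kp=1.2, kd=0.7):
        """Illustrative PD position loop on the odometry estimate; returns
        a desired acceleration for an inner attitude controller to track."""
        return kp * (setpoint - pose[:3, 3]) - kd * vel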

We have collaborated with Peter Henry and Mike Krainin from the Robotics and State Estimation Lab at the University of Washington, using their RGBD-SLAM algorithms to perform simultaneous localization and mapping (SLAM) and to build metrically accurate and visually pleasing models of the environment through which the MAV has flown.

The RGBD-SLAM algorithms process the data offboard, but send position corrections back to the vehicle, enabling globally consistent position estimates.
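One simple way to realize this split, sketched below under the assumption that each correction arrives as an optimized pose for a past keyframe (the class and method names are hypothetical, not the project's actual API): the vehicle keeps flying on its fast odometry estimate and applies the latest SLAM correction on top.

    import numpy as np

    class CorrectedPoseEstimator:
        """Combines fast onboard odometry with slow offboard SLAM updates.

        The vehicle always flies on its latest odometry pose; whenever the
        offboard optimizer finishes, it sends back the optimized pose of a
        past keyframe, from which we derive a map-from-odometry correction.
        """
        def __init__(self):
            self.correction = np.eye(4)

        def apply_slam_update(self, slam_pose, odom_pose_at_keyframe):
            # Choose the correction so that, at the keyframe the SLAM
            # solution refers to, correction @ odom_pose == slam_pose.
            self.correction = slam_pose @ np.linalg.inv(odom_pose_at_keyframe)

        def global_pose(self, odom_pose):
            return self.correction @ odom_pose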


This project was sponsored by the Office of Naval Research and the Army Research Office.