Motion-Invariant Photography

To be presented at SIGGRAPH 2008

Anat Levin   Peter Sand   Taeg Sang Cho   Frédo Durand   William T. Freeman

Computer Science and Artificial Intelligence Lab (CSAIL)

Massachusetts Institute of Technology

[Figure: conventional camera | our camera | our camera after deconvolution]

Abstract

Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. For motion along a 1D direction (e.g., horizontal), we show that these challenges can be addressed using a camera that moves during the exposure. By analyzing motion blur as an integration in space-time, we show that a parabolic integration path (corresponding to constant sensor acceleration) leads to motion blur that is invariant to object velocity. A single deconvolution kernel can therefore remove the blur and produce sharp images of scenes containing objects moving at different speeds, without requiring any segmentation and without knowledge of the object speeds. Beyond motion invariance, we prove that the derived parabolic motion preserves image frequency content nearly optimally: while static objects are degraded relative to their image from a static camera, all objects moving within a given velocity range can be reliably reconstructed. We have built a prototype camera and present successful deblurring results over a wide variety of human motions.
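The velocity-invariance claim can be illustrated numerically. The sketch below (a simplified model with hypothetical units for the acceleration a, exposure length T, and velocity v, not the paper's calibration) builds the 1D blur kernel as the time histogram of the object-sensor displacement d(t) = v·t − a·t². Completing the square shows that d(t) is the static-object parabola shifted by v²/(4a), so after subtracting that shift the kernels for different velocities should coincide in their high-energy core:

```python
import numpy as np

def parabolic_psf(v, a=100.0, T=1.0, n=400001):
    """Numerical blur kernel for an object moving at constant velocity v,
    imaged by a sensor translating along the parabola x_c(t) = a*t^2
    (constant acceleration) over an exposure of length T.

    The kernel is the time histogram of the object-sensor displacement
    d(t) = v*t - a*t^2. Completing the square, d(t) equals the v = 0
    parabola shifted by v^2/(4a); that shift is subtracted here so that
    kernels for different velocities can be compared directly.
    """
    t = np.linspace(-T / 2, T / 2, n)
    d = v * t - a * t**2 - v**2 / (4 * a)   # vertex-aligned displacement
    bins = np.linspace(-40.0, 1.0, 165)     # 0.25-unit-wide bins
    h, _ = np.histogram(d, bins=bins)
    return h / h.sum(), bins

h_static, bins = parabolic_psf(v=0.0)    # static object
h_moving, _ = parabolic_psf(v=15.0)      # fast-moving object
centers = 0.5 * (bins[:-1] + bins[1:])
core = centers > -15.0                   # swept by both parabola branches
print(np.abs(h_static[core] - h_moving[core]).sum())  # close to zero
```

Outside that core the two kernels differ only in their low-density tails, which is the sense in which the invariance is approximate: within the designed velocity range, a single deconvolution kernel suffices for all object speeds.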

Acknowledgments

This work was supported by NSF CAREER award 0447561, a grant from the Royal Dutch/Shell Group, NGA grant NEGI-1582-04-0004, and Office of Naval Research MURI grant N00014-06-1-0734. Frédo Durand acknowledges a Microsoft Research New Faculty Fellowship and a Sloan Fellowship.

Last update: July 25, 2008