William Freeman; Massachusetts Institute of Technology; PI, NSF award ID 1111415
Wojciech Matusik; Massachusetts Institute of Technology; PI, NSF award ID 1111415
Hanspeter Pfister; Harvard University; co-PI, NSF award ID 1110955
Robert Pless; Washington University in St. Louis; co-PI, NSF award ID 1111398
Noah Snavely; Cornell University; co-PI, NSF award ID 1111534
Understanding time-varying processes and phenomena is fundamental to science and engineering. Due to tremendous progress in digital photography, images and videos (including images from webcams, time-lapse photography captured by scientists, surveillance videos, and Internet photo collections) are becoming an important source of information about our dynamic world. However, techniques for the automated understanding and visualization of time-varying processes from images or videos are scarce and underdeveloped, requiring fundamentally new models and algorithms for representing changes over time.
The goal of this project is to create complete end-to-end systems that enable modeling, analysis, and visualization of time-varying processes based on image data. These will include both general systems applicable to arbitrary time-varying image sequences and more constrained systems targeted for specific types of data, such as outdoor multi-view time-lapse photography. These models and algorithms will form the basis for a new set of tools that can help answer important questions about how our environment is changing, how our cities are evolving, and what significant events are happening around the world.
Intellectual Merit: Analyzing images over time poses fundamental new technical challenges. This project focuses on developing and demonstrating end-to-end systems consisting of (1) novel representations necessary to model time-varying image datasets; (2) algorithms for estimating long-range temporal correspondence in image datasets; (3) algorithms for decomposing image datasets into intuitive primitives such as shading, illumination, reflectance, and motion; (4) analysis tools for deriving higher level information from the decomposed representations (e.g., trends, repeated patterns, and unusual events); and (5) tools for visualization of the high-level information and methods for re-synthesis of image data.
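As a concrete illustration of component (3), a classic "intuitive primitive" decomposition is the intrinsic-image model, which factors an image I into reflectance R and shading S with I = R * S. The sketch below (not from the proposal; the function name, box-filter smoothing heuristic, and `kernel` parameter are illustrative assumptions) shows the idea in the log domain, where the product becomes a sum and a smoothness prior on shading can be applied:

```python
import numpy as np

def intrinsic_decompose(image, kernel=5):
    """Split a grayscale image into (reflectance, shading) factors.

    Toy heuristic: shading is assumed to vary smoothly, so a box filter
    of log(image) approximates log(shading) and the residual approximates
    log(reflectance). `kernel` is the (odd) box-filter size.
    """
    log_i = np.log(np.clip(image, 1e-6, None))
    pad = kernel // 2
    padded = np.pad(log_i, pad, mode="edge")
    # Box filter implemented with shifted sums (dependency-free smoothing).
    log_s = np.zeros_like(log_i)
    for dy in range(kernel):
        for dx in range(kernel):
            log_s += padded[dy:dy + log_i.shape[0], dx:dx + log_i.shape[1]]
    log_s /= kernel * kernel
    log_r = log_i - log_s  # residual = reflectance in the log domain
    return np.exp(log_r), np.exp(log_s)

# Sanity check: for a uniform image the two factors multiply back exactly.
img = np.full((16, 16), 0.5)
R, S = intrinsic_decompose(img)
assert np.allclose(R * S, img)
```

Real systems replace the box-filter prior with learned or optimization-based priors, but the multiplicative image-formation model and log-domain trick are the common starting point.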
The research goals of this project are broad and multi-disciplinary. The proposal therefore brings together PIs in the fields of computer vision, machine learning, computer graphics, and visualization. The PIs have worked together on multiple occasions over the last ten years, and their research has been recognized at IEEE CVPR, NIPS, ACM SIGGRAPH, and IEEE Visualization.
Broader Impacts: This research will create models and representations for time-varying image data, an important step towards the computer vision systems of the future. A key goal is to develop an enabling technology for nearly automated analysis of this data type, building systems that will provide a basic toolbox for many scientists to analyze time-varying images. Hence, this work has the potential for impact in ecology, astronomy, urban planning, health, and many other areas where images are generated over time. Further, this work can enable new commercial applications in consumer photography and mapping.
The results of this research will be broadly disseminated by making source code and datasets publicly available, offering tutorials and organizing workshops at significant conferences, and producing publications in refereed conferences and journals. The proposed program will also impact education at both the undergraduate and graduate levels. In addition to supporting the traditional research roles of graduate students, undergraduates will be given the opportunity to gain hands-on research experience in the participating laboratories. Further, the results of this research will be incorporated into course curricula to enhance the education of both senior undergraduate and graduate students.
This material is based upon work supported by the National Science Foundation under Grant Nos. 1111415, 1110955, 1111398, and 1111534. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.