
Activity Perception Project


Projects

Vision Research Group
CSAIL
MIT



Current Research



Multi-camera Correspondence

Multi-camera surveillance, with a focus on learning the topology (i.e., connectivity) of the camera network and performing cross-camera correspondence.




Object Classification from Silhouettes

Learning to classify objects into semantic categories (such as vehicles and pedestrians) from foreground silhouettes. Supervised and unsupervised learning methods are being explored.



Learning Models of Activities

Learning semantic scene models from long-term observations of object activities in the scene.


Vehicle Classification

Studying current techniques for generic object class recognition.



Unsupervised Activity Perception in Crowded and Busy Scenes by Hierarchical Bayesian Model

Modeling activities and interactions in crowded and busy scenes in an unsupervised way, without tracking.



Event Detection

Detecting loitering events, and several events involving interactions between actors and their luggage.



Discovering Objects from Image Collections and Video Sequences

Discovering objects without supervision using the Spatial Dirichlet Allocation model.



Background Subtraction

A generalization of the mixture-of-Gaussians model that can handle dynamic textures, such as trees waving in the wind and rippling water.
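For context, the classical per-pixel mixture-of-Gaussians background model that this project generalizes can be sketched as follows. This is a minimal, illustrative Stauffer-Grimson-style implementation for grayscale frames, not the group's dynamic-texture generalization; the class name and all parameter values are assumptions chosen for the demo.

```python
import numpy as np

class MOGBackground:
    """Per-pixel mixture-of-Gaussians background model for grayscale frames
    (a simplified Stauffer-Grimson scheme; names and parameters are illustrative)."""

    def __init__(self, shape, k=3, lr=0.05, var0=225.0, var_min=4.0,
                 t_bg=0.7, match_sigma=2.5):
        h, w = shape
        self.k, self.lr, self.var0, self.var_min = k, lr, var0, var_min
        self.t_bg, self.match_sigma = t_bg, match_sigma
        self.means = np.zeros((h, w, k))
        self.vars = np.full((h, w, k), var0)
        self.weights = np.full((h, w, k), 1.0 / k)

    def apply(self, frame):
        """Update the model with one frame; return a boolean foreground mask."""
        frame = frame.astype(np.float64)
        d = frame[..., None] - self.means                       # (h, w, k)
        matched = d ** 2 <= (self.match_sigma ** 2) * self.vars
        # Pick the closest matching Gaussian per pixel (inf where unmatched).
        dist = np.where(matched, np.abs(d) / np.sqrt(self.vars), np.inf)
        best = np.argmin(dist, axis=-1)
        has_match = matched.any(axis=-1)
        onehot = np.eye(self.k, dtype=bool)[best] & has_match[..., None]

        # Online updates: weights for all Gaussians, mean/variance for the match.
        self.weights += self.lr * (onehot - self.weights)
        rho = self.lr / np.maximum(self.weights, 1e-6)
        self.means += np.where(onehot, rho * d, 0.0)
        self.vars += np.where(onehot, rho * (d ** 2 - self.vars), 0.0)
        self.vars = np.maximum(self.vars, self.var_min)

        # No match at a pixel: replace its weakest Gaussian with the new value.
        weakest = np.argmin(self.weights, axis=-1)
        replace = np.eye(self.k, dtype=bool)[weakest] & ~has_match[..., None]
        self.means = np.where(replace, frame[..., None], self.means)
        self.vars = np.where(replace, self.var0, self.vars)
        self.weights = np.where(replace, self.lr, self.weights)
        self.weights /= self.weights.sum(axis=-1, keepdims=True)

        # Background Gaussians = the top-ranked ones (by weight/sigma) covering
        # t_bg of the total weight; a pixel is foreground if its matched
        # Gaussian is not among them.
        order = np.argsort(-(self.weights / np.sqrt(self.vars)), axis=-1)
        cum = np.cumsum(np.take_along_axis(self.weights, order, axis=-1), axis=-1)
        n_bg = 1 + (cum < self.t_bg).sum(axis=-1)
        rank_of_best = (order == best[..., None]).argmax(axis=-1)
        return ~(has_match & (rank_of_best < n_bg))             # True = foreground

# Demo: learn a static background, then flag a bright square as foreground.
rng = np.random.default_rng(0)
model = MOGBackground((20, 20))
background = 100.0 + rng.normal(0.0, 2.0, (20, 20))
for _ in range(50):
    model.apply(background + rng.normal(0.0, 2.0, (20, 20)))
frame = background.copy()
frame[5:10, 5:10] = 200.0                                       # an "object" enters
mask = model.apply(frame)
```

Because each pixel keeps several Gaussians, the model can absorb multimodal backgrounds (e.g., a pixel alternating between leaf and sky), which is exactly the behavior the dynamic-texture work above extends.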


Past Projects



Human ID


The HID project develops algorithms for recognizing people by their gait, and for integrating gait with other cues, such as the face, to identify individuals.

Image Retrieval

We are exploring methods for measuring visual similarity between images and developing efficient algorithms for clustering and querying of image databases.


Visual Surveillance and Activity Modeling


Video Surveillance and Monitoring (VSAM) is a project to automatically understand activity in the world from long, and potentially numerous, video streams.



Object Tracking


Detecting and tracking moving objects in real time, in a range of indoor and outdoor scenes. Our approach is based on adaptive background subtraction.



Attention-Based Video Analysis

Given a new dynamic scene and the opportunity to observe it for a period of time, the goal is to learn an attention model suitable for recognizing unusual activity in the scene.





Last updated February 5, 2008.  Questions/comments? Contact app at csail dot mit dot edu