
Keep up to date with our events by adding our calendars
iCal URL http://groups.csail.mit.edu/vision/welcome/ical/events.ics
Google Calendar Add a friend's calendar: csail.vision@gmail.com
Past Events
Student Talk - Discrete-Continuous Optimization for Large-Scale Structure from Motion
Date: Wednesday, May 4, 2011
Time: 11:30 - 12:30pm
Location: 32-D451
Speaker: Andrew Owens
Description:
Recent work in structure from motion has successfully built 3D models from large unstructured collections of images downloaded from the Internet. Most approaches use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the number of images grows, and can drift or fall into bad local minima. We present an alternative formulation for structure from motion based on finding a coarse initial solution using a hybrid discrete-continuous optimization, and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and the points, including noisy geotags and vanishing point estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it can produce models that are similar to or better than those produced with incremental bundle adjustment, but more robustly and in a fraction of the time.
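For readers unfamiliar with the continuous refinement step mentioned in the abstract, here is a minimal Levenberg-Marquardt sketch on a toy curve-fitting problem; it is not the talk's SfM objective, and the function names are illustrative:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: minimize ||residual(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)   # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        x_new = x - step
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5        # accept step, trust model more
        else:
            lam *= 2.0                       # reject step, damp more
    return x

# Toy problem: recover (a, b) in y = a*exp(b*t) from noise-free data
# generated with a = 2, b = -1.
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, x0=[1.0, 0.0])
```

The damping parameter interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the refinement robust far from the optimum.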
Student Talk - Efficient MCMC Sampling with Implicit Shape Representations
Date: Wednesday, Apr 20, 2011
Time: 3:30 - 4:30pm
Location: 32-G449 (Kiva)
Speaker: Jason Chang
Description:
We present a method for sampling from the posterior distribution of implicitly defined segmentations conditioned on the observed image. Segmentation is often formulated as an energy minimization or statistical inference problem in which either the optimal or most probable configuration is the goal. Exponentiating the negative energy functional provides a Bayesian interpretation in which the solutions are equivalent. Sampling methods enable evaluation of distribution properties that characterize the solution space via the computation of marginal event probabilities. We develop a Metropolis-Hastings sampling algorithm over level sets which improves upon previous methods by allowing for topological changes while simultaneously decreasing computational times by orders of magnitude. An M-ary extension to the method is provided.
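The core machinery referenced above is Metropolis-Hastings over an exponentiated negative energy. A minimal sketch on a scalar state (a stand-in for the level-set segmentation state, which the talk handles far more efficiently):

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings targeting exp(log_p), known only
    up to a normalizing constant."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_p(x_prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = x_prop, lp_prop
        samples.append(x)
    return samples

# Target: a standard normal via its unnormalized log-density -x^2/2
# (i.e., energy E(x) = x^2/2, sampled via exp(-E)).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                            n_samples=20000)
mean = sum(draws) / len(draws)
```

Because only log-density differences enter the acceptance test, the intractable normalizing constant of the posterior never needs to be computed, which is exactly why the exponentiated-energy view makes sampling practical.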
Student Talk - Learning matrix decomposition structures
Date: Wednesday, Apr 13, 2011
Time: 3:30 - 4:30pm
Location: 32-D463 (Star)
Speaker: Roger Grosse
Description:
Many widely used models in unsupervised learning can be viewed as matrix decompositions, where the input matrix is expressed as sums and products of matrices drawn from a few simple priors. We present a unifying framework for matrix decompositions in terms of a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 1000 structures using a small toolbox of reusable algorithms. Using best-first search over our grammar, we can automatically choose the decomposition structure from raw data by evaluating only a tiny fraction of all models. This gives a recipe for selecting model structure in unsupervised learning situations. The proposed method almost always finds the right structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns plausible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
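To make the "grammar over structures" idea concrete, here is a toy enumerator with a hypothetical rule set loosely in the spirit of the talk: a placeholder symbol `G` expands into compositions of factors plus residual. The symbols and rules below are made up for illustration, not the talk's actual grammar:

```python
# Hypothetical productions: a generic matrix symbol 'G' can be explained
# as a product of two factors plus residual, and so on recursively.
RULES = {"G": ["GG+G", "MG+G", "GC+G"]}

def expand(structure, depth):
    """Enumerate all structures reachable by at most `depth` rounds of
    single-symbol expansions (a brute-force stand-in for the talk's
    best-first search, which scores structures instead)."""
    seen = {structure}
    frontier = [structure]
    for _ in range(depth):
        nxt = []
        for s in frontier:
            for i, ch in enumerate(s):
                for rhs in RULES.get(ch, []):
                    t = s[:i] + "(" + rhs + ")" + s[i + 1:]
                    if t not in seen:
                        seen.add(t)
                        nxt.append(t)
        frontier = nxt
    return seen

structures = expand("G", depth=2)
```

The combinatorial growth visible even at depth 2 is why best-first search, guided by predictive likelihood, is needed to visit only a tiny fraction of the space.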
Student Talk - Latent Binary Activations for fMRI Detection-Estimation
Date: Wednesday, Apr 6, 2011
Time: 3:30 - 4:30pm
Location: 32-G882
Speaker: Ramesh Sridharan
Description:
Detection of brain activity and selectivity using functional magnetic resonance imaging (fMRI) provides unique insight into the underlying functional properties of the brain. We propose and demonstrate a generative model that jointly explains activation and temporal dynamics in fMRI experiments. In particular, our model assumes binary activations for each voxel-stimulus pair as well as a continuous, stimulus-independent activation magnitude at each voxel that is driven by factors unrelated to neural activity. We derive an algorithm for inferring activation patterns, activation magnitude, and the corresponding temporal dynamics from fMRI data. We present results on synthetic and actual fMRI data, demonstrating that our method provides accurate estimates of patterns of selectivity.
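A toy version of the binary-activation-times-magnitude idea can be simulated and inverted in a few lines. The dimensions, priors, and naive least-squares "inference" here are illustrative only; the talk's model and algorithm are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)
V, S, T = 5, 3, 100                 # voxels, stimuli, time points (toy sizes)
X = rng.standard_normal((T, S))     # stimulus design matrix (hypothetical)
a = rng.random((V, S)) < 0.5        # binary activation per voxel-stimulus pair
m = rng.uniform(0.5, 2.0, size=V)   # continuous per-voxel magnitude
noise = 0.1 * rng.standard_normal((T, V))
Y = X @ (a.T * m) + noise           # observed T x V signal: active
                                    # regressors scaled by magnitude + noise

# Naive detection: least-squares regression coefficients, thresholded at
# half the (here assumed known) magnitude, recover the binary pattern.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # S x V coefficient estimates
a_hat = W.T > 0.5 * m[:, None]
```

The point of the simulation is the structure of the model: decoupling the binary "is this voxel selective for this stimulus" question from the continuous, non-neural magnitude is what lets the inference separate detection from estimation.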
Student Talk - Construction of Dependent Dirichlet Processes based on Poisson Processes
Date: Wednesday, Mar 30, 2011
Time: 3:30 - 4:30pm
Location: 32-D463 (Star)
Speaker: Dahua Lin
Description:
We present a method for constructing dependent Dirichlet processes. The new approach exploits the intrinsic relationship between Dirichlet and Poisson processes in order to create a Markov chain of Dirichlet processes suitable for use as a prior over evolving mixture models. The method allows for the creation, removal, and location variation of component models over time while maintaining the property that the random measures are marginally DP distributed. Additionally, we derive a Gibbs sampling algorithm for model inference and test it on both synthetic and real data. Empirical results demonstrate that the approach is effective in estimating dynamically varying mixture models.
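As background for the marginal-DP property mentioned above, here is the standard truncated stick-breaking construction of a single Dirichlet process's weights. The talk's contribution, building *dependent* DPs from Poisson processes, is not shown here; this is just the basic object being preserved:

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Truncated stick-breaking weights of a DP(alpha): repeatedly break
    off a Beta(1, alpha) fraction of the remaining stick."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        # Beta(1, alpha) draw via inverse CDF: 1 - u**(1/alpha)
        b = 1.0 - rng.random() ** (1.0 / alpha)
        weights.append(remaining * b)
        remaining *= 1.0 - b
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=100)
```

With a modest concentration parameter, the truncated weights sum to essentially 1, which is why finite truncations are a practical surrogate for the infinite mixture in samplers like the Gibbs algorithm the abstract describes.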