Robot Locomotion Group




    The goal of our research is to build machines that exploit their natural dynamics to achieve extraordinary agility and efficiency. In an age where "big data" is all the rage, we still have relatively little data from robots operating in these regimes, so we rely mostly on existing models (e.g., from Lagrangian mechanics) and model-based optimization. We believe that deep connections are possible, enabling very efficient optimization by exploiting structure in the governing equations, and we are working hard on both optimization algorithms and control applications. Our previous projects have included dynamics and control for humanoid robots, dynamic walking over rough terrain, flight control for aggressive maneuvers in unmanned aerial vehicles, feedback control for fluid dynamics and soft robotics, and connections between perception and control. These days the lab is primarily focused on robot manipulation, with a continued emphasis on feedback control (which is so far largely absent in manipulation) and on the connections between perception and control.

    The Robot Locomotion Group is a part of Robotics @ MIT and CSAIL.

    Follow us on Facebook and/or Twitter.


Locomotion Group Paper and Multimedia News  

    Mixed-Integer Formulations for Optimal Control of Piecewise-Affine Systems
      by Tobia Marcucci and Russ Tedrake

      In this paper we study how to formulate the optimal control problem for a piecewise-affine dynamical system as a mixed-integer program. Problems of this form typically arise in hybrid Model Predictive Control (MPC), where at every time step an open-loop optimal control sequence is computed via numerical optimization and applied to the system in a moving-horizon fashion. Not surprisingly, the efficiency of the underlying mathematical program's formulation has a crucial influence on computation times, and hence on the applicability of hybrid MPC to high-dimensional systems. We leverage modern concepts and results from the fields of mixed-integer and disjunctive programming to conduct a comprehensive analysis of this formulation problem: among the outcomes enabled by this novel perspective is the derivation of multiple highly efficient formulations of the control problem, each representing a different tradeoff between the two most important features of a mixed-integer program: its size and its strength. First in theory, then through a numerical example, we show how all the proposed methods outperform the traditional approach employed in MPC, enabling the solution of larger-scale problems.

      Under review. Comments welcome.
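      The "traditional approach" the paper improves on is the big-M encoding of piecewise-affine dynamics. A minimal sketch of that baseline, using an invented 1-D two-mode system with numbers chosen purely for illustration:

```python
# Sketch of the classic big-M encoding of one step of PWA dynamics
# (the baseline that stronger mixed-integer formulations are measured
# against). The 1-D two-mode system below is invented for illustration:
#   mode 0: x+ = 0.5*x + 1.0   when x <= 0
#   mode 1: x+ = -0.5*x + 1.0  when x >  0
A = [0.5, -0.5]
B = [1.0, 1.0]

def bigM_constraints_hold(x, x_next, delta, M=100.0):
    """Big-M disjunction: binary delta[i] selects mode i; the selected
    mode's affine dynamics must hold exactly, while the slack
    M*(1 - delta[i]) relaxes the inactive mode's equation."""
    if sum(delta) != 1 or any(d not in (0, 1) for d in delta):
        return False
    return all(abs(x_next - (A[i] * x + B[i])) <= M * (1 - d)
               for i, d in enumerate(delta))

x = -2.0
assert bigM_constraints_hold(x, 0.5 * x + 1.0, delta=[1, 0])  # true successor passes
assert not bigM_constraints_hold(x, 123.0, delta=[1, 0])      # wrong successor fails
```

      A solver branching on the binaries recovers the mode sequence; the formulations derived in the paper trade off the program's size against the strength (tightness) of its convex relaxation relative to this baseline.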

    Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids

      by Yunzhu Li and Jiajun Wu and Russ Tedrake and Joshua B. Tenenbaum and Antonio Torralba

      Real-life control tasks involve matter of various substances (rigid or soft bodies, liquids, gases), each with distinct physical behaviors. This poses challenges for traditional rigid-body physics engines. Particle-based simulators have been developed to model the dynamics of these complex scenes; however, because they rely on approximation techniques, their simulations often deviate from real-world physics, especially over the long term. In this paper, we propose to learn a particle-based simulator for complex control tasks. Combining learning with particle-based systems brings two major benefits: first, the learned simulator, like other particle-based systems, applies broadly to objects of different materials; second, the particle-based representation imposes a strong inductive bias for learning: particles of the same type share the same dynamics. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. Using the learned simulator, robots have succeeded at complex manipulation tasks, such as manipulating fluids and deformable foam. The effectiveness of our method has also been demonstrated in the real world. Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations.

      Supplemental materials: http://dpi.csail.mit.edu/ , https://www.youtube.com/watch?v=nQipmVDuytQ , https://arxiv.org/abs/1810.01566

      Under review. Comments welcome.
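      The inductive bias the abstract describes (one shared rule applied to every particle pair) can be sketched as follows; the message function and numbers here are invented for illustration, not the paper's learned model:

```python
import numpy as np

# Toy particle step: every pair of particles is updated by the SAME
# relation function, so a single learned rule would generalize across
# all particles of a material. The attraction rule below is invented.
def relation_message(xi, xj):
    return 0.1 * (xj - xi)          # toy pairwise attraction

def step(pos):
    n = len(pos)
    msg = np.zeros_like(pos)
    for i in range(n):              # aggregate messages from all neighbors
        for j in range(n):
            if i != j:
                msg[i] += relation_message(pos[i], pos[j])
    return pos + msg

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
new = step(pos)
# Antisymmetric pairwise messages cancel in aggregate, so the center of
# mass is preserved while the particles draw together.
assert np.allclose(new.mean(axis=0), pos.mean(axis=0))
assert np.linalg.norm(new[0] - new[1]) < np.linalg.norm(pos[0] - pos[1])
```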

    LVIS: Learning from Value Function Intervals for Contact-Aware Robot Controllers

      by Robin Deits and Twan Koolen and Russ Tedrake

      Guided policy search is a popular approach for training controllers for high-dimensional systems, but it has a number of pitfalls. Non-convex trajectory optimization has local minima, and non-uniqueness in the optimal policy itself can mean that independently optimized samples do not describe a coherent policy from which to train. We introduce LVIS, which circumvents the issue of local minima through global mixed-integer optimization and the issue of non-uniqueness by learning the optimal value function (or cost-to-go) rather than the optimal policy. To avoid the expense of solving the mixed-integer programs to full global optimality, we instead solve them only partially, extracting intervals containing the true cost-to-go from early termination of the branch-and-bound algorithm. These interval samples are used to weakly supervise the training of a neural net that approximates the true cost-to-go. Online, we use the learned cost-to-go as the terminal cost of a one-step model-predictive controller, which we solve via a small mixed-integer optimization. We demonstrate the LVIS approach on a cart-pole system with walls and on a planar humanoid robot model, and show that it can be applied to a fundamentally hard problem in feedback control: control through contact.

      Supplemental materials: https://arxiv.org/abs/1809.05802

      Under review. Comments welcome.
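      The interval-based weak supervision can be sketched as a loss that only penalizes the value approximator for leaving each sample's bounds; the squared-hinge form below is an assumption for illustration, not necessarily the paper's exact loss:

```python
import numpy as np

# Sketch of interval supervision: early-terminated branch-and-bound
# yields bounds [lb, ub] containing the true cost-to-go at each sampled
# state, and a prediction is penalized only when it falls outside them.
def interval_loss(pred, lb, ub):
    below = np.maximum(lb - pred, 0.0)   # penalty for undershooting lb
    above = np.maximum(pred - ub, 0.0)   # penalty for overshooting ub
    return below**2 + above**2

pred = np.array([1.0, 5.0, 9.0])
lb   = np.array([2.0, 4.0, 4.0])
ub   = np.array([6.0, 6.0, 6.0])
# 1.0 is below [2,6]; 5.0 sits inside [4,6]; 9.0 is above [4,6]
assert np.allclose(interval_loss(pred, lb, ub), [1.0, 0.0, 9.0])
```

      Any prediction inside its interval incurs zero loss, which is what makes the supervision "weak": the net is free wherever the bounds are loose.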

    Sampling-based Polytopic Trees for Approximate Optimal Control of Piecewise Affine Systems

      by Sadra Sadraddini and Russ Tedrake

      Piecewise affine (PWA) systems are widely used to model highly nonlinear behaviors such as contact dynamics in robot locomotion and manipulation. Existing control techniques for PWA systems have computational drawbacks, both in offline design and in online implementation. In this paper, we introduce a method to obtain feedback control policies and a corresponding set of admissible initial conditions for discrete-time PWA systems such that all closed-loop trajectories reach a goal polytope while a cost function is optimized. The idea is conceptually similar to LQR-trees (Tedrake et al., 2010), which consist of three steps: (1) open-loop trajectory optimization, (2) feedback control to compute funnels of states around the trajectories, and (3) repeating (1) and (2) so that the funnels grow backward from the goal in a tree fashion and fill the state space as much as possible. We show that PWA dynamics can be exploited to combine steps (1) and (2) into a single step tackled using mixed-integer convex programming, which makes the method well suited to handling hard constraints. Illustrative examples on contact-based dynamics are presented.

      Supplemental materials: https://youtu.be/gGH0EuIzkgY , https://arxiv.org/abs/1809.09716

      Under review. Comments welcome.
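      The backward tree-growing idea can be sketched in miniature; here 1-D intervals stand in for polytopes, and the contractive dynamics and all numbers are invented for illustration:

```python
import random

# Toy backward tree growth: start from a goal set, sample states, and
# whenever a sampled state maps into the current tree in one step of the
# (invented) dynamics x+ = 0.5*x, add a small "funnel" around it as a
# new tree node.
def step(x):
    return 0.5 * x

def in_tree(x, tree):
    return any(lo <= x <= hi for lo, hi in tree)

random.seed(0)
tree = [(-1.0, 1.0)]                       # the goal set (here: an interval)
for _ in range(200):
    x = random.uniform(-4.0, 4.0)          # sample a candidate state
    if not in_tree(x, tree) and in_tree(step(x), tree):
        tree.append((x - 0.25, x + 0.25))  # funnel around the sample

assert len(tree) > 1                       # the tree grew backward from the goal
# every added node's center reaches the existing tree in one step
assert all(in_tree(step((lo + hi) / 2), tree) for lo, hi in tree[1:])
```

      In the paper the sampling, funnel computation, and cost optimization are done with mixed-integer convex programs over true polytopes rather than this interval toy.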

    Propagation Networks for Model-Based Control Under Partial Observation

      by Yunzhu Li and Jiajun Wu and Jun-Yan Zhu and Joshua B. Tenenbaum and Antonio Torralba and Russ Tedrake

      There has been increasing interest in learning dynamics simulators for model-based control. Compared with off-the-shelf physics engines, a learnable simulator can quickly adapt to unseen objects, scenes, and tasks. However, existing models like interaction networks only work for fully observable systems, and they only consider pairwise interactions within a single time step, both of which restrict their use in practical systems. We introduce Propagation Networks (PropNet), a differentiable, learnable dynamics model that handles partially observable scenarios and enables instantaneous propagation of signals beyond pairwise interactions. With these innovations, our propagation networks not only outperform current learnable physics engines in forward simulation, but also achieve superior performance on various control tasks. Compared with existing deep reinforcement learning algorithms, model-based control with propagation networks is more accurate, more efficient, and generalizes better to novel, partially observable scenes and tasks.

      Supplemental materials: http://propnet.csail.mit.edu/ , https://arxiv.org/abs/1809.11169 , https://www.youtube.com/watch?v=vB8fg-yQs-I

      Under review. Comments welcome.
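      The difference between single-step pairwise interaction and multi-round propagation can be sketched on a toy chain graph (a linear-update stand-in, not PropNet's learned networks):

```python
import numpy as np

# One round of message passing moves information a single hop, so
# effects stay local; running several rounds inside one dynamics step
# lets a signal travel multiple hops "instantaneously".
def propagate(h, adj, rounds):
    for _ in range(rounds):
        h = h + adj @ h        # each round: receive from direct neighbors
    return h

n = 5
adj = np.zeros((n, n))
for i in range(n - 1):         # chain graph: 0-1-2-3-4
    adj[i, i + 1] = adj[i + 1, i] = 1.0

h0 = np.zeros(n)
h0[0] = 1.0                    # signal present only at node 0
one = propagate(h0, adj, 1)
many = propagate(h0, adj, 4)
assert one[4] == 0.0           # one round: no influence at the far end
assert many[4] > 0.0           # four rounds: the signal crossed 4 hops
```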


Locomotion Group News  

    October 15, 2018. PhD Defense. Congratulations to Robin Deits for successfully defending his thesis!

    October 3, 2018. Award. Congratulations to Pete Florence and Lucas Manuelli whose paper Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation won the Conference Best Paper Award at CoRL 2018!

    September 19, 2018. Award. Congratulations to Pete Florence and Lucas Manuelli whose paper Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation won the first ever Amazon Robotics Best Technical Paper Award (2018).

    June 18, 2018. Award. Congratulations to Ani Majumdar whose paper Funnel libraries for real-time robust feedback motion planning won the first ever International Journal of Robotics Research Paper of the Year (2017).

    April 26, 2018. Award. Congratulations to Katy Muhlrad for winning the "Audience Choice Award" at the SuperUROP Showcase for her work on "Using GelSight to Identify Objects by Touch".

    July 26, 2017. Defense. Frank Permenter successfully defended his thesis, titled "Reduction methods in semidefinite and conic optimization". Congratulations Frank!

    May 19, 2017. Award. Pete Florence was awarded the EECS Masterworks award. Congratulations Pete!

    May 19, 2017. Award. Sarah Hensley was awarded the 2017 Best SuperUROP Presentation award. Congratulations Sarah!

    May 16, 2017. PhD Defense. Michael Posa successfully defended his thesis, titled "Optimization for Control and Planning of Multi-Contact Dynamic Motion". Congratulations Michael!

    May 15, 2017. Award. Our paper describing the planning and control that we implemented on Atlas for the DARPA Robotics Challenge was recognized with the IEEE-RAS Technical Committee on Whole-Body Control 2016 Best Paper of the Year award.

    January 28, 2017. Video. Amara Mesnik put together a great mini-documentary on MIT's entry in the DARPA Robotics Challenge.

    May 13, 2016. PhD Defense. Ani Majumdar has successfully defended his PhD thesis. Congratulations Ani! Click on the link to watch his talk, and check the publications page to read his thesis.

    February 24, 2016. Media. NOVA's documentary on the DARPA Robotics Challenge, titled "Rise of the Robots" is online now.

    December 7, 2015. PhD Defense. Andy Barry has successfully defended his PhD thesis. Congratulations Andy! Click on the link to watch his talk.

    November 18, 2015. In the news. NASA's R5 humanoid robot is coming to MIT. We're very excited to have the opportunity to do research on this amazing platform.

    November 5, 2015. Award. Our DRC Team's continuous walking with stereo fusion paper just won the Best Paper Award (Oral) at Humanoids 2015. Congratulations all!

    November 5, 2015. In the news. Andy's video of high-speed UAV obstacle avoidance (using only onboard processing) got some great coverage this week. This article by the IEEE Spectrum was particularly nice and insightful.

    October 26, 2015. PhD Defense. Andres Valenzuela just successfully defended his PhD thesis. Congratulations Andres!

    May 29, 2015. News. We're heading off to the DARPA Robotics Challenge. We've been posting some fun videos to our YouTube site (linked here). Wish us luck!

    May 26, 2015. News. Benoit Landry has submitted his Masters Thesis on Aggressive Quadrotor Flight in Dense Clutter. Be sure to check out his cool video.
