
 

Positions Available

 

Robot Locomotion Group

 

 

 

    The goal of our research is to build machines that exploit their natural dynamics to achieve extraordinary agility, efficiency, and robustness, using rigorous tools from dynamical systems, control theory, and machine learning. Our current focus is on robotic manipulation, because recent advances in machine learning have opened a pathway in these applications to merging control theory and perception at a level that has never been possible before; ideas like "intuitive physics" and "common-sense reasoning" will meet rigorous ideas like "model-order reduction" and "robust/adaptive control". It's going to be a great few years!

    Our previous projects have included dynamics and control for humanoid robots, dynamic walking over rough terrain, flight control for aggressive maneuvers in unmanned aerial vehicles, feedback control for fluid dynamics and soft robotics, and connections between perception and control.

    The Robot Locomotion Group is a part of Robotics @ MIT and CSAIL.

    Follow us on Facebook and/or Twitter.

 

Locomotion Group Paper and Multimedia News  

    Motion Planning around Obstacles with Convex Optimization
      by Tobia Marcucci, Mark Petersen, David von Wrangel, and Russ Tedrake

      Trajectory optimization offers mature tools for motion planning in high-dimensional spaces under dynamic constraints. However, when facing complex configuration spaces, cluttered with obstacles, roboticists typically fall back to sampling-based planners that struggle in very high dimensions and with continuous differential constraints. Indeed, obstacles are the source of many textbook examples of problematic nonconvexities in the trajectory-optimization problem. Here we show that convex optimization can, in fact, be used to reliably plan trajectories around obstacles. Specifically, we consider planning problems with collision-avoidance constraints, as well as cost penalties and hard constraints on the shape, the duration, and the velocity of the trajectory. Combining the properties of Bézier curves with a recently-proposed framework for finding shortest paths in Graphs of Convex Sets (GCS), we formulate the planning problem as a compact mixed-integer optimization. In stark contrast with existing mixed-integer planners, the convex relaxation of our programs is very tight, and a cheap rounding of its solution is typically sufficient to design globally-optimal trajectories. This reduces the mixed-integer program back to a simple convex optimization, and automatically provides optimality bounds for the planned trajectories. We name the proposed planner GCS, after its underlying optimization framework. We demonstrate GCS in simulation on a variety of robotic platforms, including a quadrotor flying through buildings and a dual-arm manipulator (with fourteen degrees of freedom) moving in a confined space. Using numerical experiments on a seven-degree-of-freedom manipulator, we show that GCS can outperform widely-used sampling-based planners by finding higher-quality trajectories in less time.

      Preprint. Comments welcome.
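
      A minimal, hypothetical sketch (not the GCS implementation) of the convex building block the abstract relies on: once a sequence of convex regions is fixed, a Bézier trajectory through them can be found with a single convex program over the control points, because keeping the control points inside a convex region keeps the whole curve inside it. The boxes, start, and goal below are made up, and cvxpy stands in for the actual solver stack.

import cvxpy as cp
import numpy as np

dim, n_ctrl = 2, 4   # 2-D workspace, one cubic Bezier segment per region

# Two overlapping axis-aligned boxes standing in for convex collision-free regions.
boxes = [(np.array([0.0, 0.0]), np.array([2.0, 1.0])),
         (np.array([1.5, 0.0]), np.array([3.5, 2.0]))]
start, goal = np.array([0.2, 0.5]), np.array([3.2, 1.8])

P = [cp.Variable((n_ctrl, dim)) for _ in boxes]  # control points per segment
cons = []
for Pk, (lo, hi) in zip(P, boxes):
    # Convex-hull property: control points inside the box keep the whole
    # Bezier segment inside the box.
    cons += [Pk >= np.tile(lo, (n_ctrl, 1)), Pk <= np.tile(hi, (n_ctrl, 1))]
cons += [P[0][0] == start, P[1][n_ctrl - 1] == goal]
# Position continuity, plus derivative continuity assuming equal time scaling.
cons += [P[0][n_ctrl - 1] == P[1][0],
         P[0][n_ctrl - 1] - P[0][n_ctrl - 2] == P[1][1] - P[1][0]]
# The control-polygon length is a convex upper bound on the trajectory length.
length = sum(cp.sum(cp.norm(Pk[1:] - Pk[:-1], axis=1)) for Pk in P)
prob = cp.Problem(cp.Minimize(length), cons)
prob.solve()
print([np.round(Pk.value, 2) for Pk in P])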

    Finding and Optimizing Certified, Collision-Free Regions in Configuration Space for Robot Manipulators

      by Alexandre Amice, Hongkai Dai, Peter Werner, Annan Zhang, and Russ Tedrake

      Configuration space (C-space) has played a central role in collision-free motion planning, particularly for robot manipulators. While it is possible to check for collisions at a point using standard algorithms, to date no practical method exists for computing collision-free C-space regions with rigorous certificates due to the complexities of mapping task-space obstacles through the kinematics. In this work, we present what is, to our knowledge, the first method for generating such regions and certificates through convex optimization. Our method, called C-Iris (C-space Iterative Regional Inflation by Semidefinite programming), generates large, convex polytopes in a rational parametrization of the configuration space which are guaranteed to be collision-free. Such regions have been shown to be useful for both optimization-based and randomized motion planning. Our regions are generated by alternating between two convex optimization problems: (1) a simultaneous search for a maximal-volume ellipse inscribed in a given polytope and a certificate that the polytope is collision-free and (2) a maximal expansion of the polytope away from the ellipse which does not violate the certificate. The volume of the ellipse and size of the polytope are allowed to grow over several iterations while being collision-free by construction. Our method works in arbitrary dimensions, only makes assumptions about the convexity of the obstacles in the task space, and scales to realistic problems in manipulation. We demonstrate our algorithm's ability to fill a non-trivial amount of collision-free C-space in a 3-DOF example where the C-space can be visualized, as well as the scalability of our algorithm on a 7-DOF KUKA iiwa and a 12-DOF bimanual manipulator.

      Preprint. Comments welcome.
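
      One of the two convex programs in the alternation described above is the classical maximum-volume ellipsoid inscribed in a polytope. Below is a minimal sketch of just that step, with an illustrative 2-D polytope; the collision-free certification over the rational kinematics, which is the heart of C-Iris, is omitted.

import cvxpy as cp
import numpy as np

# A small 2-D polytope {x : A x <= b} (a unit box with one corner cut off).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0, 1.5])

n = A.shape[1]
B = cp.Variable((n, n), PSD=True)  # ellipsoid E = {B u + d : ||u|| <= 1}
d = cp.Variable(n)                 # ellipsoid center

# Containment of E in each halfspace a_i^T x <= b_i reduces to the
# second-order-cone constraint ||B a_i|| + a_i^T d <= b_i.
cons = [cp.norm(B @ A[i]) + A[i] @ d <= b[i] for i in range(A.shape[0])]
prob = cp.Problem(cp.Maximize(cp.log_det(B)), cons)
prob.solve(solver=cp.SCS)  # SCS handles the log-det objective
print("center:", np.round(d.value, 3))
print("shape matrix:\n", np.round(B.value, 3))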

    Globally Convergent Policy Search over Dynamic Filters for Output Estimation

      by Jack Umenberger, Max Simchowitz, Juan C. Perdomo, Kaiqing Zhang, and Russ Tedrake

      We introduce the first direct policy search algorithm which provably converges to the globally optimal dynamic filter for the classical problem of predicting the outputs of a linear dynamical system, given noisy, partial observations. Despite the ubiquity of partial observability in practice, theoretical guarantees for direct policy search algorithms, one of the backbones of modern reinforcement learning, have proven difficult to achieve. This is primarily due to the degeneracies which arise when optimizing over filters that maintain internal state. In this paper, we provide a new perspective on this challenging problem based on the notion of informativity, which intuitively requires that all components of a filter's internal state are representative of the true state of the underlying dynamical system. We show that informativity overcomes the aforementioned degeneracy. Specifically, we propose a regularizer which explicitly enforces informativity, and establish that gradient descent on this regularized objective - combined with a "reconditioning step" - converges to the globally optimal cost at a rate of O(1/T). Our analysis relies on several new results which may be of independent interest, including a new framework for analyzing non-convex gradient descent via convex reformulation, and novel bounds on the solution to linear Lyapunov equations in terms of (our quantitative measure of) informativity.

      Under review. Comments welcome.
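
      As a toy illustration of the problem setup only (not the paper's algorithm, and without the informativity regularizer or reconditioning step that give the global guarantee), the sketch below does a direct search over the gain of a Luenberger-style filter to minimize one-step output-prediction error on simulated data; the system matrices and noise levels are made up.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])  # (made-up) stable dynamics
C = np.array([[1.0, 0.0]])              # partial observation
T, sig_w, sig_v = 500, 0.1, 0.1

# Simulate noisy partial observations of the linear system.
x, ys = np.zeros(2), []
for _ in range(T):
    x = A @ x + sig_w * rng.standard_normal(2)
    ys.append(C @ x + sig_v * rng.standard_normal(1))
ys = np.array(ys)

def prediction_cost(L_flat):
    """Mean one-step output-prediction error of the filter
    xhat_{t+1} = A xhat_t + L (y_t - C xhat_t)."""
    L = L_flat.reshape(2, 1)
    xhat, cost = np.zeros(2), 0.0
    for t in range(T - 1):
        xhat = A @ xhat + L @ (ys[t] - C @ xhat)   # filter update
        cost += float(np.sum((ys[t + 1] - C @ xhat) ** 2))
    return cost / (T - 1)

# Direct policy search: descend on the empirical cost over the filter gain L.
res = minimize(prediction_cost, x0=np.zeros(2), method="BFGS")
print("learned filter gain:", np.round(res.x, 3), " cost:", round(res.fun, 4))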

    Do Differentiable Simulators Give Better Policy Gradients?

      by H. J. Terry Suh, Max Simchowitz, Kaiqing Zhang, and Russ Tedrake

      Differentiable simulators promise faster computation time for reinforcement learning by replacing zeroth-order gradient estimates of a stochastic objective with an estimate based on first-order gradients. However, it remains unclear what factors decide the performance of the two estimators on complex landscapes that involve long-horizon planning and control on physical systems, despite the crucial relevance of this question for the utility of differentiable simulators. We show that characteristics of certain physical systems, such as stiffness or discontinuities, may compromise the efficacy of the first-order estimator, and analyze this phenomenon through the lens of bias and variance. We additionally propose an α-order gradient estimator, with α∈[0,1], which correctly utilizes exact gradients to combine the efficiency of first-order estimates with the robustness of zeroth-order methods. We demonstrate the pitfalls of traditional estimators and the advantages of the α-order estimator on some numerical examples.

      Under review. Comments welcome.
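
      A small numerical illustration of the bias phenomenon the abstract describes, on a made-up one-dimensional objective (not the paper's experiments): near a discontinuity the sample mean of exact simulator gradients misses the jump, while a zeroth-order randomized-smoothing estimate does not, and an α-blended estimator interpolates between the two.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Discontinuous "contact-like" objective: a step plus a smooth term.
    return np.where(x > 0.0, 1.0, 0.0) + 0.1 * x

def grad_f(x):
    # What differentiating through the simulator would return: it never
    # sees the jump, so the gradient is 0.1 everywhere.
    return 0.1 * np.ones_like(x)

theta, sigma, N, alpha = -0.05, 0.1, 1000, 0.5
w = sigma * rng.standard_normal(N)

first_order = grad_f(theta + w).mean()
# Zeroth-order randomized-smoothing (score-function) estimate, with the
# baseline f(theta) subtracted to reduce variance.
zeroth_order = ((f(theta + w) - f(theta)) * w).mean() / sigma**2
alpha_order = alpha * first_order + (1 - alpha) * zeroth_order

print(f"first-order : {first_order:.3f}")
print(f"zeroth-order: {zeroth_order:.3f}")
print(f"alpha-order : {alpha_order:.3f}")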

    Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning

      by Danny Driess, Jung-Su Ha, Marc Toussaint, and Russ Tedrake

      This work proposes an optimization-based manipulation planning framework where the objectives are learned functionals of signed-distance fields that represent objects in the scene. Most manipulation planning approaches rely on analytical models and carefully chosen abstractions/state-spaces to be effective. A central question is how models can be obtained from data that are not primarily accurate in their predictions, but, more importantly, enable efficient reasoning within a planning framework, while at the same time being closely coupled to perception spaces. We show that representing objects as signed-distance fields not only makes it possible to learn and represent a variety of models with higher accuracy compared to point-cloud and occupancy measure representations, but also that SDF-based models are suitable for optimization-based planning. To demonstrate the versatility of our approach, we learn both kinematic and dynamic models to solve tasks that involve hanging mugs on hooks and pushing objects on a table. We can unify these quite different tasks within one framework, since SDFs are the common object representation. Video: https://youtu.be/ga8Wlkss7co

      Supplemental materials: https://www.youtube.com/watch?v=ga8Wlkss7co

      Recently presented at CoRL 2021.
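
      A minimal sketch of the general idea of using a signed-distance field inside an optimization-based planner, with a hand-written analytic SDF for a disc standing in for the learned functionals and a made-up scene: the SDF supplies a smooth collision cost and gradient that push an initial straight-line path around the obstacle.

import numpy as np

center, radius, margin = np.array([0.5, 0.05]), 0.3, 0.1

def sdf(p):
    """Signed distance to a disc: negative inside, positive outside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def sdf_grad(p):
    d = p - center
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

# Initial straight-line path from start to goal, passing through the disc.
n = 20
path = np.linspace([0.0, 0.0], [1.0, 0.0], n)

for _ in range(300):
    # Collision gradient: push waypoints along the SDF gradient wherever
    # the signed distance falls below the safety margin.
    viol = np.maximum(margin - sdf(path), 0.0)[:, None]
    g_col = -viol * sdf_grad(path)
    # Smoothness gradient: quadratic penalty on consecutive differences.
    g_smooth = np.zeros_like(path)
    g_smooth[1:-1] = 2 * path[1:-1] - path[:-2] - path[2:]
    grad = 5.0 * g_col + g_smooth
    grad[0] = grad[-1] = 0.0  # endpoints stay fixed
    path -= 0.05 * grad

print("worst clearance violation:", round(float(np.max(margin - sdf(path))), 3))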

 

Locomotion Group News  

    January 13, 2022. PhD Defense. Congratulations to Greg Izatt for successfully defending his PhD thesis!

    August 15, 2020. Talks on Zoom. For better or worse, most research talks these days are now online. I've posted a handful of links to new talks, including Russ on Lex Fridman's AI Podcast, and at the IFRR Colloquium on the Roles of Physics-Based Models and Data-Driven Learning in Robotics.

    July 20, 2020. PhD Defense. Congratulations to Lucas Manuelli for successfully defending his PhD thesis!

    May 29, 2020. PhD Defense. Congratulations to Shen Shen for successfully defending her thesis!

    September 18, 2019. PhD Defense. Congratulations to Twan Koolen for successfully defending his thesis!

    August 19, 2019. PhD Defense. Congratulations to Pete Florence for successfully defending his thesis!

    October 15, 2018. PhD Defense. Congratulations to Robin Deits for successfully defending his thesis!

    October 3, 2018. Award. Congratulations to Pete Florence and Lucas Manuelli whose paper Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation won the Conference Best Paper Award at CoRL 2018!

    September 19, 2018. Award. Congratulations to Pete Florence and Lucas Manuelli whose paper Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation won the first ever Amazon Robotics Best Technical Paper Award (2018).

    June 18, 2018. Award. Congratulations to Ani Majumdar whose paper Funnel libraries for real-time robust feedback motion planning won the first ever International Journal of Robotics Research Paper of the Year (2017).

    April 26, 2018. Award. Congratulations to Katy Muhlrad for winning the "Audience Choice Award" at the SuperUROP Showcase for her work on "Using GelSight to Identify Objects by Touch".

    July 26, 2017. Defense. Frank Permenter successfully defended his thesis, titled "Reduction methods in semidefinite and conic optimization". Congratulations Frank!

    May 19, 2017. Award. Pete Florence was awarded the EECS Masterworks award. Congratulations Pete!

    May 19, 2017. Award. Sarah Hensley was awarded the 2017 Best SuperUROP Presentation award. Congratulations Sarah!

    May 16, 2017. PhD Defense. Michael Posa successfully defended his thesis, titled "Optimization for Control and Planning of Multi-Contact Dynamic Motion". Congratulations Michael!

    May 15, 2017. Award. Our paper describing the planning and control that we implemented on Atlas for the DARPA Robotics Challenge was recognized with the IEEE-RAS Technical Committee on Whole-Body Control 2016 Best Paper of the Year award.

    January 28, 2017. Video. Amara Mesnik put together a great mini-documentary on MIT's entry in the DARPA Robotics Challenge.

    May 13, 2016. PhD Defense. Ani Majumdar has successfully defended his PhD thesis. Congratulations Ani! Click on the link to watch his talk, and check the publications page to read his thesis.

    February 24, 2016. Media. NOVA's documentary on the DARPA Robotics Challenge, titled "Rise of the Robots" is online now.

    December 7, 2015. PhD Defense. Andy Barry has successfully defended his PhD thesis. Congratulations Andy! Click on the link to watch his talk.

For Group Members: