Model-based Embedded and Robotic Systems Group

MIT CSAIL

Projects

Personal Transportation System

The Personal Transportation System is a joint project between the MERS group, CSLI at Stanford, and the Boeing Company. It aims to demonstrate the concept of an autonomous Personal Air Vehicle (PAV), in which the passenger interacts with the vehicle in the same manner that they would interact with a taxi driver. To interact with a PAV, the passenger describes their goals and constraints in English. The autonomous system onboard the PAV checks the map and weather, generates a safe plan, and flies the vehicle to the destination. If weather conditions or airport availability change, the system automatically modifies the original plan to achieve the passenger’s goals.

Courtesy of the Boeing Corporation

To interact with PTS, the passenger describes her goals and constraints in spoken English; for example, “PTS, I would like to go to Hanscom Field now, and we need to arrive by 4:30. Oh, and we’d like to fly over Yarmouth, if that’s possible. The Constitution is sailing today.” PTS checks the weather and plans a safe route, along with alternative landing sites in case an emergency landing is required. If the passenger’s goals can no longer be achieved, PTS presents the passenger with a set of safe alternatives. For example, PTS might say, “A thunderstorm has appeared along the route to Hanscom. I would like to re-route in order to avoid the thunderstorm. This does not leave enough time to fly over Yarmouth and still arrive at Hanscom by 4:30. Would you like to arrive later, at 5pm, or skip flying over Yarmouth?” In the future, PTS will be able to reason about user preferences and ask the user probing questions that help her identify the best options.
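
To make this replanning behavior concrete, the following is a minimal sketch in Python of how a system might detect that a deadline is no longer achievable and enumerate relaxations to offer the passenger. The leg names and durations are hypothetical illustrations, not the actual PTS implementation.

    # Illustrative sketch only: detecting an unachievable deadline and
    # enumerating relaxations. Leg names and durations are invented.

    def find_relaxations(legs, deadline, soft_legs):
        """legs: leg name -> duration in minutes; deadline: total time budget.
        soft_legs: goals the passenger marked as optional (e.g., a flyover).
        Returns human-readable alternatives if the plan no longer fits."""
        total = sum(legs.values())
        if total <= deadline:
            return []  # still feasible; nothing to relax
        options = ["arrive %d minutes later" % (total - deadline)]
        for leg in soft_legs:  # dropping an optional goal may restore feasibility
            if total - legs[leg] <= deadline:
                options.append("skip " + leg)
        return options

    # A thunderstorm reroute stretches the Hanscom leg from 60 to 90 minutes:
    legs = {"to Hanscom (rerouted)": 90, "Yarmouth flyover": 30}
    print(find_relaxations(legs, deadline=90, soft_legs=["Yarmouth flyover"]))
    # -> ['arrive 30 minutes later', 'skip Yarmouth flyover']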

PTS in April 2009:

PTS in November 2011:

Personal Transportation System with Improved Plan Diagnosis

Semantic Relaxation that finds smart alternatives for your trip

Human-Robot Teamwork in Manufacturing

In current manufacturing environments, robots are often used only for highly repetitive tasks that have little to no variation in behavior. A common example is a welding robot in an automobile assembly line, illustrated above at left. Humans are not allowed near these large, dangerous robots, since the robots lack the sensing necessary to behave safely around people.

However, mass-production assembly line techniques are not always feasible (see the airplane environment, above at right). In these settings, we envision a future in which robots and humans work collaboratively as a team in factories. Robots working alongside people must:

  • Be able to communicate clearly and intelligently with humans

  • Be easy and intuitive to work with

  • Be able to respond to unexpected disturbances

We are researching various methods towards these goals, including continuous planning and execution, execution monitoring, and planning with sensing actions.

Thus far, we’ve developed (in simulation and in hardware) a collaborative, voice-operated manufacturing robot capable of building stacks of blocks and placing them on a cart. A human user can walk up to the robot and ask for certain stacks of blocks to be made. If the robot is disturbed while it is working, whether harmfully (e.g., knocking over a partially complete stack) or beneficially (e.g., moving the robot closer to its goal), the robot will immediately notice, explain what the change is and why it’s important, and then come up with a new plan to address the disturbance (e.g., picking up a fallen block).
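
As a rough illustration of this execute-monitor-replan behavior, here is a self-contained toy sketch in Python; the block-stacking world and the trivial planner are hypothetical stand-ins, not our actual system.

    # Toy sketch of the execute-monitor-replan loop; the block-stacking
    # world and the trivial planner below are invented for illustration.

    def plan(state, goal):
        """If the current stack is a prefix of the goal, add the missing
        blocks; otherwise clear the stack and rebuild from scratch."""
        if goal[:len(state)] == state:
            return [("stack", b) for b in goal[len(state):]]
        return [("unstack", None)] * len(state) + [("stack", b) for b in goal]

    def run(goal, disturbances):
        state = []                   # current stack, bottom to top
        steps = plan(state, goal)
        while state != goal:
            if disturbances:         # monitor: the world changed unexpectedly
                state = disturbances.pop(0)
                print("disturbance noticed; stack is now %s, replanning" % state)
                steps = plan(state, goal)
                continue
            action, block = steps.pop(0)
            state = (state + [block]) if action == "stack" else state[:-1]
        print("goal stack %s complete" % goal)

    # Harmful disturbance: a person knocks the stack down to just ['A'].
    run(goal=["A", "B", "C"], disturbances=[["A"]])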

This work is generously funded by the Boeing Company grant MIT-BA-GTA-1.

 

Autonomous Underwater Vehicles

Courtesy of the Monterey Bay Aquarium Research Institute

The following demonstration involves operating autonomous vehicles to maximize utility in an uncertain environment, while operating within acceptable levels of risk. Autonomous underwater vehicles (AUVs) are enabling scientists to explore previously uncharted portions of the ocean by autonomously performing science missions of up to 20 hours in length, without the need for human intervention. Performing these extended missions can be a very risky endeavor. For example, while mapping a treacherous canyon, a sudden shift in the current can cause an AUV that moves too close to the sea floor to collide with a seamount. A seasoned submarine commander is skilled at identifying navigation paths that maximize scientific value while operating within acceptable levels of risk. The Model-based Embedded and Robotic Systems group has developed robust, chance-constrained planning algorithms that automatically navigate vehicles to achieve user-specified science goals, while operating within risk levels specified by the users. These algorithms operate by iteratively allocating the user-specified risk to different steps in the mission plan, until a risk allocation is found that maximizes science utility. A heuristic predecessor of this approach was used to navigate a vehicle mapping portions of Monterey Bay in January 2008, and the approach is currently being applied to the navigation of an autonomous personal air vehicle, jointly with Boeing.
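
The following toy sketch illustrates the iterative risk-allocation idea in Python: a user-specified risk bound is split across mission steps, shifting risk toward the steps where it buys the most science utility. The utility curves, step names, and numbers are invented for illustration and do not reflect the actual algorithm’s formulation.

    # Toy sketch of iterative risk allocation over mission steps.

    def allocate_risk(total_risk, marginal_utility, steps, increment=0.001):
        """Greedily hand out risk in small increments to whichever step
        currently gains the most utility from a little extra risk."""
        allocation = {s: 0.0 for s in steps}
        for _ in range(int(round(total_risk / increment))):
            best = max(steps, key=lambda s: marginal_utility(s, allocation[s]))
            allocation[best] += increment
        return allocation

    # Risk is worth more near the canyon wall (closer passes map more
    # terrain) than in open water, where its marginal value is flat.
    curves = {
        "canyon transect": lambda r: 100.0 / (1.0 + 50.0 * r),  # diminishing
        "open-water leg":  lambda r: 40.0,                      # constant
    }
    alloc = allocate_risk(0.05,  # user accepts a 5% overall mission risk
                          lambda s, r: curves[s](r),
                          ["canyon transect", "open-water leg"])
    print({s: round(a, 3) for s, a in alloc.items()})
    # -> roughly {'canyon transect': 0.03, 'open-water leg': 0.02}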

Integrated AUV planning capabilities

Intelligent recovery from AUV scheduling failure


 

Deep Space Exploration

 

Model-based autonomy has the potential to make embedded systems, such as automobiles, air vehicles, and spacecraft, more robust. The challenge is to make it simple enough for any programmer to use and fast enough that they are willing to use it. We are creating increasingly fast and powerful model-based executives, which are made easy to use through the metaphor of model-based programming.

We have developed a compile-time variant of the Reactive Model-based Programming Language (RMPL). RMPL simplifies embedded programming by allowing the programmer to read and set the evolution of state variables hidden within the hardware. For example, an RMPL program might state, “produce 10.3 seconds of 35% thrust”, rather than specifying the details of actuating and sensing the hardware (e.g., “signal controller 1 to open valve 12,” and “check pressure and acceleration to confirm that valve 12 is open”).
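
The following Python sketch (not actual RMPL syntax) illustrates the idea: the program states only the desired state evolution, and an executive consults a component model to derive the low-level commands and monitoring checks. The model entries and command strings below are invented for illustration.

    # Python sketch of the model-based programming metaphor; the
    # component model and command strings are hypothetical.

    COMPONENT_MODEL = {
        # (component, goal state) -> (commands to issue, sensor check)
        ("engine", "thrust 35%"): (["signal controller 1 to open valve 12"],
                                   "pressure/acceleration consistent with valve 12 open"),
        ("engine", "off"):        (["signal controller 1 to close valve 12"],
                                   "pressure consistent with valve 12 closed"),
    }

    def achieve(component, goal_state, duration_s):
        """Executive: turn a goal state into commands plus a monitor."""
        commands, check = COMPONENT_MODEL[(component, goal_state)]
        for cmd in commands:
            print("issue:   " + cmd)
        print("monitor: " + check)
        print("hold for %.1f s" % duration_s)

    # The model-based program says only *what* state to produce:
    achieve("engine", "thrust 35%", duration_s=10.3)
    achieve("engine", "off", duration_s=0.0)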

To execute RMPL programs, we completed Titan 2.0, a compile-time synthesis and execution system that automatically turns RMPL programs into hardware control actions that generate and monitor the desired state evolution. Titan is safe in the sense that its programs are formally verifiable, and its generated actions avoid potentially damaging, irreversible effects. Titan is fast; it plans and diagnoses quickly by shifting most reasoning to compile time, which allows it to generate each action in roughly constant time. RMPL is opening the software engineering community to the potential of dynamic languages that reason from models.

Titan’s compiled Mode Estimation capability was selected for evaluation by the Mars Science Laboratory Technology Acceptance Board at JPL. In addition, Titan was demonstrated on simulations of the NASA Earth Observing-1 mission, and on analogues of the NASA Mars Exploration Rover and the MIT SPHERES spacecraft. Our future research will explore probabilistic verification of model-based programs, knowledge compilation methods for achieving real-time performance, and methods for distributed execution of model-based programs.

 
