Model-based Embedded and Robotic Systems Group


Adaptive Sampling

Many high-level tasks require the collection of data to inform future decisions. For example, confronting a wildfire requires knowledge of the fire's spread, the prevailing winds, and the surrounding geography. A more abstract case is working with a new teammate, which typically involves a period of getting to know the teammate's work habits and preferences. A human placed in these situations would intuitively seek out the information that boosts confidence in future actions. This is the kind of intelligence we aim to bring to cognitive robotics.

In collaboration with Lincoln Labs, we are investigating adaptive sampling techniques for efficient and purposeful data collection. First, our techniques reason over models of the working environment and of the autonomous system's own dynamics, which lets us simulate and assess the consequences of potential actions. This modeling capability is then coupled with reasoning over the planning and execution strategy to produce goal-driven sensing behavior. At the root of our adaptive sampling approach is the assessment of uncertainty and the selection of actions that reduce it.
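The core idea of choosing sensing actions by how much they are expected to reduce uncertainty can be sketched with a greedy information-gain planner over a discrete belief. This is an illustrative example only, not the group's actual algorithm; the `sense_near`/`sense_far` actions and their observation models are made-up numbers.

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over world states."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

def bayes_update(belief, likelihoods):
    """Posterior belief after observing, given P(obs | state) per state."""
    posterior = [p * l for p, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

def expected_info_gain(belief, obs_model):
    """Expected entropy reduction from one sensing action.

    obs_model is a list of rows, one per possible observation,
    where obs_model[o][s] = P(observation o | state s)."""
    h_prior = entropy(belief)
    gain = 0.0
    for likelihoods in obs_model:
        p_obs = sum(p * l for p, l in zip(belief, likelihoods))
        if p_obs > 0:
            gain += p_obs * (h_prior - entropy(bayes_update(belief, likelihoods)))
    return gain

def best_sensing_action(belief, actions):
    """Pick the action whose observation model maximizes expected info gain."""
    return max(actions, key=lambda name: expected_info_gain(belief, actions[name]))

# Hypothetical two-state world with one informative and one uninformative sensor.
belief = [0.5, 0.5]
actions = {
    "sense_near": [[0.9, 0.2], [0.1, 0.8]],  # observations correlate with state
    "sense_far":  [[0.5, 0.5], [0.5, 0.5]],  # observations carry no information
}
```

Here `best_sensing_action(belief, actions)` selects `"sense_near"`, since the uninformative sensor yields zero expected entropy reduction.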

Our work is especially suited to gathering information that would be difficult or laborious for humans to obtain manually, or in settings where humans are unavailable. Firefighting is an ideal application: aerial drones could survey the edges of a fire faster, more safely, and more cheaply than a crewed helicopter. Related applications include autonomous disaster relief via aerial and ground scouts and monitoring ocean algal blooms via underwater vehicles. In human-robot interaction, the robot must stand in for and act like a human teammate; our adaptive sampling in this arena builds on previous work in preference elicitation to infer which actions will minimize surprises for the human operator.

To date, we model an agent's knowledge about acting in an uncertain environment as an information-state MDP. We have developed approximate value-iteration algorithms that handle the continuous state space and have demonstrated them in simulation.
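One common way to approximate value iteration over a continuous state is to discretize it on a grid and interpolate values between grid points. The sketch below illustrates that scheme on a toy problem in which a scalar "residual uncertainty" can be halved by a costly sensing action; the problem, its reward and transition functions, and all constants are illustrative assumptions, not the group's algorithms.

```python
import numpy as np

def fitted_value_iteration(reward, transition, grid, gamma=0.95, iters=200):
    """Approximate value iteration on a discretized 1-D continuous state space.

    reward(x, a) -> float; transition(x, a) -> next state (deterministic toy
    dynamics). Values at off-grid states are linearly interpolated."""
    actions = (0, 1)  # 0 = sense, 1 = act on current knowledge
    V = np.zeros(len(grid))
    for _ in range(iters):
        V = np.array([
            max(reward(x, a) + gamma * np.interp(transition(x, a), grid, V)
                for a in actions)
            for x in grid
        ])
    return V

# Toy problem: state x in [0, 1] is residual uncertainty about the environment.
def reward(x, a):
    return -0.1 if a == 0 else 1.0 - x   # sensing costs; acting pays when certain

def transition(x, a):
    return 0.5 * x if a == 0 else x      # sensing halves the uncertainty

grid = np.linspace(0.0, 1.0, 21)
V = fitted_value_iteration(reward, transition, grid)
```

As expected, the computed value function is highest at zero uncertainty (approaching 1/(1-γ) = 20) and decreases as uncertainty grows, so the induced policy senses only when uncertainty is high enough to justify the cost.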