Main

Overview

The research advising statement for the RRG can be read here.
The research impact and group values statement for the RRG can be read here.

Current Projects

Uncertainty-Aware Navigation in Structured, Unknown Environments

We would like robots to navigate efficiently in structured, unknown environments whose planning state spaces are large, whether because of their length scale or because of uncertainty in the environment. Environmental structure, such as doors, hallways, and exit signs in office buildings, or roads, forests, bodies of water, and bridges outdoors, provides cues that enable agents to infer high-quality navigation strategies.

Our research develops uncertainty-aware models and planners that use implicit and explicit environmental structure to improve planning efficiency and quality. We have used geometric and explicit object-level information to learn sampling distributions for sampling-based motion planners, enabling efficient planning at longer horizons in partially known environments. We have also proposed a hierarchical planning representation for multi-query robot navigation that uses previous planning experience to coarsely capture implicit environmental structure and prune regions of the environment unlikely to lead to low-cost solutions. In our current work, we are developing collaborative multiagent planning algorithms that explicitly consider the team-level costs and benefits of taking sensing actions in stochastic environments when only stale environmental data is available.
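To make the learned-sampling idea concrete, here is a minimal sketch in Python; the numbers are invented and the Gaussian stands in for a trained model. A planner draws most samples from a learned distribution over promising regions while retaining a uniform component, so the underlying sampling-based planner stays probabilistically complete.

    import numpy as np

    def biased_sample(learned_mean, learned_cov, bounds, p_learned=0.7, rng=None):
        # With probability p_learned, draw from the learned distribution over
        # promising regions; otherwise fall back to a uniform sample, which
        # preserves the completeness of the underlying planner.
        rng = rng or np.random.default_rng()
        if rng.random() < p_learned:
            return rng.multivariate_normal(learned_mean, learned_cov)
        lo, hi = bounds
        return rng.uniform(lo, hi)

    # Hypothetical usage inside an RRT loop: the mean and covariance would
    # come from a model conditioned on geometric and object-level context.
    sample = biased_sample(np.array([4.0, 2.0]), 0.5 * np.eye(2),
                           (np.array([0.0, 0.0]), np.array([10.0, 10.0])))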

Research Themes

Semantic planning, hierarchical planning, planning under uncertainty, multiagent planning under uncertainty

Publications

  • M. Stadler, K. Liu, N. Roy. "Online High-Level Model Estimation for Efficient Hierarchical Robot Navigation." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. PDF
  • K. Liu*, M. Stadler*, and N. Roy. "Learned Sampling Distributions for Efficient Planning in Hybrid Geometric and Object-Level Representations." International Conference on Robotics and Automation (ICRA), 2020. PDF Video

Robotic Expeditionary Science


A representative image of robotic expeditionary science

In the expeditionary sciences, spatiotemporally varying environments, such as hydrothermal plumes, algal blooms, lava flows, and animal migrations, are ubiquitous. Mobile robots are uniquely well-suited to study and sample these dynamic, mesoscale natural environments. However, collecting elucidating observations of unknown, partially observed spatiotemporal distributions for scientific inquiry requires decision-making under uncertainty and, at times, under severe operational constraints. For instance, some of the most advanced autonomous underwater vehicles (AUVs) used in oceanographic research operate with open-loop controllers and have severely limited acoustic communication with external actors (e.g., a ship).

We formalize expeditionary science as a sequential decision-making problem, modeled in the language of partially observable Markov decision processes (POMDPs). Solving the expeditionary science POMDP under real-world constraints requires efficient probabilistic modeling and decision-making in problems with complex dynamics and observation models. Previous work in informative path planning, adaptive sampling, and experimental design has shown compelling results, largely in static environments, using data-driven models and information-based rewards. However, these methodologies do not trivially extend to expeditionary science in spatiotemporal environments: they generally do not exploit scientific knowledge such as equations of state dynamics, they focus on information gathering rather than scientific task execution, and they use decision-making approaches that scale poorly to large, continuous problems with long planning horizons and real-time operational constraints.
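Schematically, the pieces of that formalization line up with the standard POMDP tuple; the sketch below is purely illustrative (every field is a placeholder, not our actual model):

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class ExpeditionPOMDP:
        # Schematic (S, A, T, R, Z, O) tuple for expeditionary science.
        states: Sequence          # S: vehicle pose and latent environment field
        actions: Sequence         # A: motion primitives and sensing actions
        transition: Callable      # T(s, a) -> distribution over next states
        reward: Callable          # R(s, a): science-task reward, not just info gain
        observations: Sequence    # Z: possible in-situ sensor readings
        observation_fn: Callable  # O(s', a) -> distribution over observations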

In our work, we tackle these and other challenges related to probabilistic modeling and decision-making in expeditionary science. In particular, we look for ways to exploit scientific intuition to overcome challenges in sample-efficient dynamics learning under partial observability, and to leverage notions of uncertainty in dynamic planning windows. We ground our work in specific scientific contexts, such as deep-sea hydrothermal vent charting.

Research Themes

Informative path planning, planning under uncertainty, adaptive sampling, belief representations, physics-informed learning, field robotics

Publications

  • G. Flaspohler*, V. Preston*, A.P.M. Michel, Y. Girdhar, and N. Roy. "Information-Guided Robotic Maximum Seek-and-Sample in Partially Observable Continuous Environments." IEEE Robotics and Automation Letters, 2019. PDF
  • V. Preston. "Adaptive Sampling of Transient Environmental Phenomena with Autonomous Mobile Platforms." Master's Thesis, Massachusetts Institute of Technology, 2019. PDF
  • G. Flaspohler, N. Roy, and J.W. Fisher III. "Belief-dependent macro-action discovery in POMDPs using the value of information." Advances in Neural Information Processing Systems (NeurIPS), 2020. PDF
  • V. Preston*, G. Flaspohler*, A.P.M. Michel, J.W. Fisher III, and N. Roy. "Robotic Planning under Uncertainty in Spatiotemporal Environments for Expeditionary Science." Conference on Reinforcement Learning and Decision Making (RLDM), 2022.

Navigating Outdoor Environments


A Clearpath Robotics Jackal navigating outdoors

Natural outdoor environments present unique challenges to robot navigation and control. Safely traversing forests, deserts, or other natural scenes requires approaches that can learn or encode properties of these environments that traditional robotic planners do not reason about. For example, purely geometric planners treat all obstacles identically and will attempt to avoid grass or small bushes that are not, in reality, obstacles for the robot, leading to suboptimal plans and unnecessarily long trajectories.

New advances in machine learning that improve robots' ability to extract the semantics of their surroundings hold promise for context-informed outdoor robot navigation. In our current research, we are exploring ways to exploit the underlying semantic correlations in the environment so that global motion planners can more efficiently navigate previously unobserved areas. To ensure safety, we are also exploring ways to represent the semantic structure of the robot's surroundings within online vector-field-based local motion planners that can guarantee obstacle avoidance and convergence to a local goal by exploiting the topological properties of the environment.
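One simple way to picture how semantics change planning is to replace a binary obstacle grid with a per-class traversal cost map; the class labels and costs below are invented for illustration:

    import numpy as np

    # Hypothetical per-class traversal costs: grass and bushes are merely
    # expensive, while trees and rocks are true obstacles.
    CLASS_COST = {0: 1.0,     # dirt / trail
                  1: 1.5,     # grass
                  2: 3.0,     # bush
                  3: np.inf}  # tree / rock

    def semantic_cost_map(label_map):
        # Convert a per-cell semantic label map into a traversal cost map
        # that a global planner can use instead of a binary obstacle grid.
        cost = np.empty(label_map.shape)
        for label, c in CLASS_COST.items():
            cost[label_map == label] = c
        return cost

    labels = np.random.randint(0, 4, size=(100, 100))  # stand-in segmentation
    costs = semantic_cost_map(labels)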

Research Themes

Semantic planning, outdoor navigation, field robotics


Robot Vision and Perception


Several depth estimation algorithms at work

In navigating and manipulating our world, humans make use of several perceptual systems. Our sense of proprioception indicates our orientation and stability relative to the Earth's pull. Our sense of touch tells us how well we can trust our footing and helps us find and interact with actionable targets with our hands. Even our sense of smell occasionally tells us that we are heading where we want to go.

The majority of humans rely quite heavily on the sense of sight. We use it to detect and recognize objects and other organisms, estimate their motion and shape, tell the time of day, judge our completion of a task, and so on. In the RRG, we seek to replicate these myriad visual abilities in autonomous robots navigating, manipulating, and completing tasks in our world.

The RRG is currently focused on improving low-level vision using higher-level contextual and cognitive cues. In pursuit of this goal, we are developing algorithms for estimating depth and shape from natural RGB images, and for fusing the outputs of multiple robot vision algorithms using estimates of their uncertainty. We are also exploring methods for conveying scene and task semantics from higher-level cognition blocks to lower-level perceptual blocks, drawing inspiration from reciprocal and lateral information flow in the primate brain.
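As one example of why per-pixel uncertainty matters, two depth estimates with known variances can be combined by standard inverse-variance weighting; the sketch below assumes the variance maps are already available (e.g., from a learned monocular model and a stereo matcher), and the numbers are invented:

    import numpy as np

    def fuse_depth(d1, var1, d2, var2):
        # Precision-weighted (inverse-variance) fusion of two per-pixel
        # depth maps: the standard Gaussian product rule.
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused = (w1 * d1 + w2 * d2) / (w1 + w2)
        fused_var = 1.0 / (w1 + w2)
        return fused, fused_var

    # Invented example: a learned monocular estimate fused with stereo.
    d_mono, v_mono = np.full((480, 640), 2.0), np.full((480, 640), 0.5)
    d_stereo, v_stereo = np.full((480, 640), 2.2), np.full((480, 640), 0.1)
    depth, depth_var = fuse_depth(d_mono, v_mono, d_stereo, v_stereo)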

Research Themes

Semantically-informed robot vision, uncertainty-aware sensor fusion, computer vision and machine learning, human vision and primate neuroscience


Exploration and Curiosity for Robotic Manipulation

In many environments such as households or office buildings, robots frequently have to interact with unknown objects or perform novel tasks. However, models that support these interactions often require large amounts of carefully curated training data from known object models and struggle to adapt to new object instances. In this line of work, we explore how robots can efficiently adapt to novel scenarios by leveraging prior experience and self-supervision.

Our research has focused on developing adaptive versions of learned modules that allow a robot to successfully interact with previously unseen objects after only a handful of attempts. One difficulty of interacting with novel objects is the diversity of visual features: even though doors share common structure, for example, their visual features vary significantly between instances. We developed a learning paradigm that combines the adaptivity of Gaussian Processes with learned features from CNNs, allowing a robot to quickly integrate prior experience with online data [1]. Another difficulty with unknown-object interaction arises from non-visual properties such as mass and friction. We developed object-centric models that allow a robot to reason separately about object-specific properties and shared global dynamics [3]; this factorization lets a robot adapt quickly to new objects by reasoning only about the relevant variation. Finally, all of this work is done within a self-supervision paradigm in which the robot can explicitly reason about information gain [2].
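The GP-over-learned-features idea from [1] can be sketched in a few lines; everything below (feature dimensions, data, kernel choice) is invented for illustration, with the CNN embeddings assumed to be precomputed:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    features = rng.normal(size=(8, 32))  # CNN embeddings of 8 past attempts
    outcomes = rng.normal(size=8)        # observed interaction outcomes

    # A GP over the learned feature space: prior experience enters through
    # the training set, and a handful of online attempts update the posterior.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(features, outcomes)

    new_feature = rng.normal(size=(1, 32))
    mean, std = gp.predict(new_feature, return_std=True)  # prediction + uncertainty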

Research Themes

Planning under uncertainty, manipulation, online adaptation, active learning

Publications

  • [1] C. Moses*, M. Noseworthy*, L. Kaelbling, T. Lozano-Pérez and N. Roy. "Visual Prediction of Priors for Articulated Object Interaction." International Conference on Robotics and Automation (ICRA), 2020.
  • [2] M. Noseworthy*, C. Moses*, I. Brand*, S. Castro, L. Kaelbling, T. Lozano-Pérez and N. Roy. "Active Learning of Abstract Plan Feasibility." Robotics Science and Systems (RSS), 2021.
  • [3] I. Brand*, M. Noseworthy*, S. Castro, and N. Roy. "Object-Factored Models with Partially Observable State." Conference on Reinforcement Learning and Decision Making (RLDM), 2022.

Learning to Guide Planning under Uncertainty


Overview of PO-TLP Approach

We aim to enable robots to plan efficiently and optimally across a number of different domains in the presence of uncertainty. In particular, we are interested in problems where the environment is revealed as the robot acts within it, and in using learning to plan better in such problems.

This research focuses on learning models that predict the outcomes and costs of executing high-level actions. Initially, we focused on goal-directed navigation in partially revealed environments, considering actions that lead the robot into previously unknown space. By predicting whether the agent's goal can be reached via a particular subgoal (as well as the associated cost), we were able to incorporate prior experience into our planner and improve its decision-making. Expanding on this, we considered learning to model the outcomes of actions in temporally extended tasks, in particular those expressed in Linear Temporal Logic; once again, planning with past experience outperformed an uninformed baseline. To support these planners, this line of work has also considered how best to model an environment as it is being explored. To that end, we designed and implemented a mapping system, built directly from RGB input, that constructs a topological representation of the environment well-suited to planning with high-level, exploratory actions.
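In its simplest, deliberately myopic form, planning over subgoals reduces to comparing expected costs; in the sketch below all quantities are invented stand-ins for what the learned models would predict, and the actual planners reason over sequences of subgoals rather than a single step:

    from collections import namedtuple

    # Each candidate subgoal (e.g. a boundary into unseen space) carries a
    # learned probability that the goal is reachable through it, plus
    # learned costs for the success and failure cases.
    Subgoal = namedtuple("Subgoal",
                         "cost_to_reach p_success success_cost failure_cost")

    def expected_cost(sg):
        return (sg.cost_to_reach
                + sg.p_success * sg.success_cost
                + (1 - sg.p_success) * sg.failure_cost)

    frontiers = [Subgoal(5.0, 0.9, 3.0, 20.0),   # far, but likely leads to goal
                 Subgoal(2.0, 0.3, 3.0, 20.0)]   # near, but probably a dead end
    best = min(frontiers, key=expected_cost)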

Currently, we are investigating applying these methods to the domain of task and motion planning.

Research Themes

Planning under uncertainty, deep learning, planning with learned models, hierarchical planning, Linear Temporal Logic, task and motion planning

Publications

  • Christopher Bradley, Adam Pacheck, Gregory J. Stein, Sebastian Castro, Hadas Kress-Gazit, and Nicholas Roy. "Learning and planning for temporally extended tasks in unknown environments." 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. PDF Video
  • Gregory J. Stein*, Christopher Bradley*, Victoria Preston*, Nicholas Roy. "Enabling topological planning with monocular vision." 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020. PDF Video
  • Gregory J. Stein*, Christopher Bradley*, and Nicholas Roy. "Learning over subgoals for efficient navigation of structured, unknown environments." Conference on Robot Learning (CoRL). PMLR, 2018. PDF Video

Past Projects

Micro-Air Vehicle Navigation and Control

Our group has developed some of the first fully autonomous micro-aerial vehicle (MAV) systems capable of self-directed exploration in GPS-denied and communications-denied environments. The absence of GPS and high-bandwidth communication links limits the sensing and processing available to the MAV to only what can be carried onboard the vehicle.

An example (see here for video) of our past work includes a fixed wing vehicle capable of localizing itself in a known map and flying autonomously using only a 2D laser scanner and inertial measurement unit (IMU), with all processing performed onboard the vehicle.

Our recent work (STAR, MLM) has focused on using small, inexpensive cameras to estimate the pose of the MAV and the geometric structure of the environment to enable high-speed autonomous flight.

References

  • K. Ok, W. N. Greene, and N. Roy. "Simultaneous Tracking and Rendering: Real-time Monocular Localization for MAVs." Proceedings of the International Conference on Robotics and Automation (ICRA), Stockholm, 2016.
    [PDF] [Video]
  • W. N Greene, K. Ok, P. Lommel, and N. Roy. "Multi-Level Mapping: Real-time Dense Monocular SLAM." Proceedings of the International Conference on Robotics and Automation (ICRA), Stockholm, 2016.
    [PDF] [Video]
  • K. Ok, D. Gamage, T. Drummond, F. Dellaert, and N. Roy. "Monocular Image Space Tracking on a Computationally Limited MAV." Proceedings of the International Conference on Robotics and Automation (ICRA), Seattle, 2015.
    [PDF]
  • A. Bry, C. Richter, A. Bachrach and N. Roy. "Aggressive Flight of Fixed-Wing and Quadrotor Aircraft in Dense Indoor Environments". International Journal of Robotics Research, 37(7):969-1002, June 2015.
    [PDF] [Bibtex Entry]
  • Charles Richter, Adam Bry and Nicholas Roy. "Polynomial Trajectory Planning for Aggressive Quadrotor Flight in Dense Indoor Environments." International Symposium of Robotics Research (ISRR), Singapore, 2013.
    [PDF] [BiBTeX Entry]
  • A. Bachrach, S. Prentice, R. He, P. Henry, A. S. Huang, M. Krainin, D. Maturana, D. Fox and N. Roy. "Estimation, Planning and Mapping for Autonomous Flight Using an RGB-D Camera in GPS-denied Environments". International Journal of Robotics Research, 31(11):1320-1343, September 2012.
  • Adam Bry, Abraham Bachrach and Nicholas Roy. "State Estimation for Aggressive Flight in GPS-Denied Environments Using Onboard Sensing". Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). St Paul, MN, 2012. (Nominated for best conference paper.)
    [PDF] [BiBTeX Entry]
  • A. Huang, A. Bachrach, P. Henry, M. Krainin, D. Maturana, D. Fox, N. Roy. "Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera", Proceedings of the International Symposium of Robotics Research (ISRR), Flagstaff, AZ, 2011.
    [PDF] [Bibtex Entry]
  • A. Bachrach, S. Prentice, R. He, and N. Roy. "RANGE - Robust Autonomous Navigation in GPS-denied Environments". Journal of Field Robotics. 28(5):646-666, September 2011.
    [Compressed postscript] [PDF] [Bibtex Entry]
  • A. Bachrach, R. He, N. Roy. "Autonomous Flight in Unknown Indoor Environments". International Journal of Micro Air Vehicles, 1(4): 217-228, December 2009.
  • Ruijie He, Sam Prentice and Nicholas Roy. "Planning in Information Space for a Quadrotor Helicopter in a GPS-denied Environment". Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008). Los Angeles, 2008.
  • Abraham Bachrach, Alborz Geramifard, Daniel Gurdan, Ruijie He, Sam Prentice, Jan Stumpf and Nicholas Roy. "Co-ordinated Tracking and Planning using Air and Ground Vehicles". In Proceedings of the International Symposium on Experimental Robotics (ISER), Athens, 2008.

Learning for Highly Dynamic Planning and Control

Navigation in unknown environments is difficult. As an autonomous robot explores an environment it has never seen, it must construct a map as it travels and replan its route as obstacles are discovered. Traditional planning algorithms require that the robot avoid situations that might cause it to crash, and therefore treat unobserved space as potential unseen obstacles. Naturally, navigation can be slow in cluttered spaces or even hallways, in which most robots must slow dramatically whenever rounding a corner.

Some of our recent work uses machine learning to capture (at training time) the local environmental geometry so that we can predict (at run time) the probability that an action guiding the robot into unknown space will cause a collision. Although the robot is now allowed to violate strict safety constraints, we maintain 100% empirical safety while seeing impressive improvements in speed (see here for video). Relatedly, we have developed an adaptation of this technique that restores strict safety guarantees: the learning algorithm instead predicts which actions will yield information gain, allowing the robot to reach the goal faster (e.g., by taking wider turns around corners so that it need not slow down as much).
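At its core, the decision rule weighs each candidate action's predicted collision probability against its speed benefit; the penalty value and myopic form below are invented for illustration, whereas the real system embeds such predictions in a receding-horizon planner:

    # Illustrative action selection with a learned collision model: each
    # candidate action has a predicted collision probability (from local
    # geometry seen at training time) and a traversal time.
    def action_cost(p_collide, travel_time, collision_penalty=100.0):
        return (1 - p_collide) * travel_time + p_collide * collision_penalty

    candidates = [(0.01, 4.0),    # slow down around the corner
                  (0.20, 2.0),    # cut the corner at speed
                  (0.001, 6.0)]   # very conservative
    best = min(candidates, key=lambda c: action_cost(*c))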

Our ongoing research direction involves using learning to augment the performance of autonomous planning and control tasks, including intelligent navigation of more topologically complex environments and using monocular camera images to predict collision probability at high speeds.

  • Charlie Richter and Nicholas Roy. "Bayesian Learning for Safe High-Speed Navigation in Unknown Environments". In Proceedings of the International Symposium on Experimental Robotics (ISER), Tokyo, 2016. [PDF]
  • Charlie Richter, William Vega-Brown, and Nicholas Roy. "Bayesian Learning for Safe High-Speed Navigation in Unknown Environments". In Proceedings of the International Symposium on Robotics Research (ISRR), Sestri Levante, 2015. [PDF]

Natural Language Understanding for Human-Robot Interaction

Advances in robot autonomy have moved humans to a different level of interaction, where ultimate success hinges on how effectively and intuitively humans and robots can work together to accomplish a task correctly. However, most service robots currently require fairly detailed, low-level guidance from a trained operator, which often leads to constrained and unintuitive interaction.

Alternatively, natural language provides a rich, intuitive and flexible medium for humans and robots to communicate information. Our goal is to enable robots to understand natural language utterances in the context of their workspaces. We seek algorithmic models that bridge the semantic gap between high-level concepts (e.g. entities, events, routes, etc.) embedded in language utterances and their low-level metric representations (e.g. cost maps and point clouds) necessary for a robot to act in the world.

We have developed probabilistic models like Generalized Grounding Graphs and Distributed Correspondence Graphs to infer a grounding for language descriptions in the context of the agent’s perceived representation. In recent work, we introduced Adaptive Distributed Correspondence Graphs for efficient reasoning about abstract spatial concepts.

Our ongoing research focuses on acquiring semantic knowledge about the environment from observations or language descriptions. This allows the robot to ground commands that refer to past events or acquired factual knowledge. Another area of research addresses language understanding for specifying high-level tasks like search and rescue operations. Further, we are also investigating language understanding in partially known environments and exploration strategies for acquiring new and unknown concepts.
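The flavor of the inference problem behind models such as Generalized Grounding Graphs and Distributed Correspondence Graphs can be conveyed with a toy example; the factors below are hand-written stand-ins, whereas the real models learn them from data and exploit the factored graph structure rather than enumerating assignments:

    import itertools
    import math

    # Toy grounding problem: map each phrase in a command to an entity in
    # the robot's world model by maximizing a product of per-phrase
    # correspondence factors.
    phrases = ["the tire pallet", "the truck"]
    world = ["pallet_1", "pallet_2", "truck_1"]

    def factor(phrase, grounding):
        # Stand-in for a learned correspondence factor.
        score = 1.0
        if "pallet" in phrase and "pallet" in grounding:
            score *= 10.0
        if "truck" in phrase and "truck" in grounding:
            score *= 10.0
        return score

    best = max(itertools.product(world, repeat=len(phrases)),
               key=lambda g: math.prod(factor(p, gi)
                                       for p, gi in zip(phrases, g)))
    print(dict(zip(phrases, best)))  # e.g. maps "the truck" to truck_1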

  • R. Paul, J. Arkin, N. Roy, T. M. Howard, “Efficient Grounding of Abstract Spatial Concepts for Natural Language Interaction with Robot Manipulators”. Robotics Science and Systems 2016. (Best conference paper)
  • Thomas Howard, Stefanie Tellex, Nicholas Roy. "A Natural Language Planner Interface for Mobile Manipulators." International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.
  • Stefanie Tellex, Ross Knepper, Adrian Li, Daniela Rus and Nicholas Roy. "Asking for Help Using Inverse Semantics." Proceedings of Robotics Science and Systems (RSS), Berkeley, CA, 2014. (Best conference paper)
  • S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. Teller, N. Roy. "Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation", Proceedings of the National Conference on Artificial Intelligence (AAAI), San Francisco, CA, 2011.

Enabling Semantic Understanding for Autonomous Marine Robots

The oceans cover over 70% of the Earth's surface, yet less than five percent of this important biosphere has been explored to date. Much of the vast marine environment is dangerous or inaccessible to human divers, so the task of exploring Earth's oceans will fall to marine robots. However, the development of exploratory marine robots has been stymied by the marine environment's unique challenges. The lack of radio communication forces all human control to pass through high-latency, low-bandwidth acoustic channels or hardwired tethers. These conditions necessitate comprehensive and robust robot autonomy.

Our work in this area is split between two complementary thrusts: 1) learning an abstract representation of underwater image data that is conducive to semantic reasoning, and 2) using that abstract representation to build probabilistic models of the robot’s visual environment that allow more efficient exploratory path planning, anomaly detection, and mission data summarization.

We plan to address the problem of learning a meaningful feature representation of underwater images using deep learning. Even given the impressive performance of deep learning algorithms on computer vision problems, this remains challenging: underwater images are visually distinct from standard image datasets, and no large corpora of labeled underwater image data are available. Our current research direction involves using unsupervised convolutional autoencoders or minimally supervised transfer learning frameworks to learn a latent feature representation of underwater image data.
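As a concrete starting point, a convolutional autoencoder trained purely on reconstruction needs no labels at all; the sketch below (in PyTorch, with an invented architecture and random tensors standing in for real underwater patches) shows the shape of such a model, whose bottleneck activations would serve as the learned features:

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        # Minimal unsupervised autoencoder for 3x64x64 image patches; the
        # 64-channel bottleneck is the learned feature representation.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())  # 16 -> 8
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = ConvAutoencoder()
    images = torch.rand(8, 3, 64, 64)  # stand-in for unlabeled underwater patches
    loss = nn.functional.mse_loss(model(images), images)  # reconstruction objective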


Result of running an HDP spatiotemporal topic model on image data from a marine robot mission. Visually distinct terrains are clustered into different topics (colors).

Given this abstract feature representation, we are applying various probabilistic models to represent the robot’s knowledge about the observable world. Topic models provide a natural probabilistic framework for both anomaly detection and data summarization. Much of our previous work has focused on extending the Hierarchical Dirichlet Process (HDP), a Bayesian nonparametric topic model, to the real-time, spatiotemporal image data from a marine robot’s video stream. Our ongoing research direction involves building more sophisticated hierarchical topic models that allow a robot to understand the environment at multiple levels of abstraction.
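In bag-of-visual-words form, running an HDP takes only a few lines; the toy "documents" below use invented terrain words in place of quantized image features, and the real system adds spatiotemporal structure and real-time inference on top of this:

    from gensim.corpora import Dictionary
    from gensim.models import HdpModel

    # Each image becomes a bag of visual words (here, fabricated terrain
    # labels); the HDP infers a per-image topic mixture without fixing the
    # number of topics in advance.
    image_documents = [["sand", "sand", "rock"],
                       ["coral", "coral", "fish"],
                       ["sand", "rock", "rock"]]
    vocab = Dictionary(image_documents)
    corpus = [vocab.doc2bow(doc) for doc in image_documents]

    hdp = HdpModel(corpus, id2word=vocab)
    print(hdp[corpus[0]])  # inferred topic mixture for the first image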


Wind Field Estimation and Planning


A wind field estimate across the MIT campus.

With unmanned aerial vehicles (UAVs) becoming more prolific and capable, and regulations evolving, their eventual operation in urban environments seems all but certain. As UAVs begin to fly in these environments, they will face a host of unique challenges, one of which is the complex wind fields generated by urban structures and terrain. Although much effort has been directed toward planning and estimation strategies for wind fields at high altitudes or in large open spaces, these approaches implicitly assume that the wind field evolves over relatively large temporal and spatial scales. Given that simplification, a history of local measurements can estimate the global wind field with sufficient accuracy. Urban wind fields, however, are highly variable in both space and time; they resist this estimation method and require an approach that models the complex interaction between the flow and the surrounding environment.

Our approach is to use prevailing wind estimates from local weather stations and a 3D model of the surrounding environment as inputs to a computational fluid dynamics solver, yielding both steady and unsteady wind field estimates. Unlike many approaches, these estimates account for the strong coupling between the wind flow and nearby structures. Once obtained, the wind field estimates can be used to find minimum-energy trajectories between points of interest. Future work will leverage a library of precomputed wind fields to estimate the wind field covariance within a region; this uncertainty estimate could be used to infer a global wind field from local measurements or to predict future wind conditions.
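To illustrate how a precomputed wind field feeds into trajectory evaluation, the sketch below scores a candidate path with a crude drag-only energy surrogate (power proportional to airspeed cubed); the uniform stand-in wind field, the power model, and all numbers are simplifying assumptions:

    import numpy as np

    def trajectory_energy(waypoints, dt, wind_at):
        # Energy surrogate for a UAV path in a precomputed wind field:
        # per-segment power is taken proportional to airspeed cubed (a
        # drag-only model; real multirotor power models are more involved).
        energy = 0.0
        for p0, p1 in zip(waypoints[:-1], waypoints[1:]):
            ground_vel = (p1 - p0) / dt
            air_vel = ground_vel - wind_at(0.5 * (p0 + p1))  # velocity w.r.t. air
            energy += np.linalg.norm(air_vel) ** 3 * dt
        return energy

    # Uniform 5 m/s easterly wind as a stand-in for a CFD-derived grid.
    wind = lambda p: np.array([5.0, 0.0, 0.0])
    path = [np.array([0.0, 0.0, 10.0]), np.array([10.0, 0.0, 10.0]),
            np.array([20.0, 5.0, 10.0])]
    print(trajectory_energy(path, dt=2.0, wind_at=wind))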

Understanding Natural Language Commands


Our system understands commands such as "Pick up the tire pallet off the truck and set it down."

Natural language is an intuitive and flexible modality for human-robot interaction. A robot designed to interact naturally with humans must be able to understand instructions without requiring the person to speak in any special way. We are building systems that robustly understand natural language commands produced by untrained users. We have applied our work to understanding spatial language commands for a robotic wheelchair, a robotic forklift, and a micro-air vehicle. More information is at http://spatial.csail.mit.edu.

  • Thomas Kollar, Stefanie Tellex, Deb Roy and Nicholas Roy. "Grounding Verbs of Motion in Natural Language Commands to Robots", International Symposium on Experimental Robotics (ISER), New Delhi, India, Dec. 2010. [PDF]
  • Stefanie Tellex, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding Spatial Language for Video Search," Proceedings of the Twelfth International Conference on Multimodal Interfaces (ICMI), 2010. (Winner, Best Student Paper award.) [PDF]
  • Thomas Kollar, Stefanie Tellex, Deb Roy and Nick Roy, "Toward understanding natural language directions," Human-Robot Interaction 2010. [PDF]

Human-Robot Interaction for Assistive Robots

We are developing planning and learning algorithms that can be used to optimize a wheelchair dialogue manager for a human-robot interaction system. The long-term goal of this research is to develop intelligent assistive technology, such as a robotic wheelchair, that can be used easily by an untrained population. We are working with the residents and staff of The Boston Home, a specialized care residence for adults with advanced multiple sclerosis and other progressive neurological diseases, to develop an intelligent interface to the residents' wheelchairs. An adaptive, intelligent dialogue manager will be essential for allowing a diverse population with a variety of physical and communication impairments to interact with the system.

  • F. Doshi and N. Roy. "Spoken Language Interaction with Model Uncertainty: An Adaptive Human-Robot Interaction System". Connection Science, to appear.
  • Finale Doshi and Nicholas Roy. "The Permutable POMDP: Fast Solutions to POMDPs for Preference Elicitation". Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008). Estoril, Portugal, 2008.

Planning Under Uncertainty

Continuous-state POMDPs provide a natural representation for a variety of tasks, including many in robotics. However, existing continuous-state POMDP approaches are limited by their reliance on a single linear model to represent the world dynamics. We have developed new switching-state (hybrid) dynamics models that can represent multi-modal, state-dependent dynamics, along with a new point-based POMDP planning algorithm for solving continuous-state POMDPs with this dynamics model. More broadly, POMDPs have succeeded in many planning domains because they can optimally trade off actions that increase an agent's knowledge against actions that increase an agent's reward. Unfortunately, most real-world POMDPs are defined by a large number of parameters that are difficult to specify from domain knowledge alone.

We have shown that the POMDP model parameters can be incorporated as additional hidden states in a larger "model-uncertainty" POMDP, and we have developed an approximate algorithm for planning in this induced POMDP. The approximation, coupled with model-directed queries, allows the planner to actively learn the true underlying POMDP and the accompanying policy.
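The key mechanism is an ordinary Bayesian update over candidate models maintained alongside the physical state; the two-model example below, with invented numbers, shows one step of that update:

    import numpy as np

    # Treat the unknown model parameters as a hidden discrete variable and
    # maintain a posterior over candidate models. Numbers are invented.
    models = ["optimistic", "pessimistic"]
    belief = np.array([0.5, 0.5])      # prior over models
    likelihood = np.array([0.8, 0.3])  # p(observation | model)

    belief = belief * likelihood       # Bayes rule: prior times likelihood,
    belief /= belief.sum()             # then renormalize
    print(dict(zip(models, belief)))   # posterior after one observation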

  • E. Brunskill, L. Kaelbling, T. Lozano-Perez and N. Roy. "Continuous-State POMDPs with Hybrid Dynamics". Proceedings of the Tenth International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, 2008.
  • F. Doshi, J. Pineau and N. Roy. "Bayes Risk for Active Learning in POMDPs". Proceedings of the International Conference on Machine Learning (ICML), Helsinki, Finland, 2008, pp. 256-263.

Exploration

Mapping as a research problem has received considerable attention in robotics recently. Mature mapping techniques now allow practitioners to reliably and consistently generate 2-D and 3-D maps of objects, office buildings, city blocks, and metropolitan areas with comparatively few errors. Nevertheless, the ease of construction and the quality of the map depend strongly on the exploration strategy used to acquire sensor data. We have shown that reinforcement learning can be used to optimize the trajectory of a vehicle exploring an unknown environment. One of the primary technical challenges of exploration is predicting the value of different sensing strategies efficiently. We have shown that a robot can learn the effect of sensing strategies from past experience using kernel-based regression techniques; the local regression model can then be used inside a global planner to optimize a trajectory, as sketched below. We have demonstrated this technique both for a mobile robot building a map of an unknown environment and for an airborne mobile sensor collecting data for weather prediction.
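A minimal version of such a learned value predictor is plain Nadaraya-Watson kernel regression over past (strategy, outcome) pairs; the feature vectors and outcomes below are invented for illustration:

    import numpy as np

    def kernel_regress(x_query, X, y, bandwidth=1.0):
        # Nadaraya-Watson kernel regression: predict the value of a sensing
        # strategy at x_query from past (strategy feature, observed map
        # improvement) pairs (X, y).
        d2 = np.sum((X - x_query) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        return w @ y / w.sum()

    X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])  # past strategy features
    y = np.array([0.3, 0.5, 0.9])                       # observed info gain
    print(kernel_regress(np.array([1.5, 1.5]), X, y))   # predicted value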

  • T. Kollar and N. Roy. "Trajectory Optimization using Reinforcement Learning for Map Exploration''. International Journal of Robotics Research, 27(2): 175-197, 2008.
  • N. Roy, H. Choi, D. Gombos, J. Hansen, J. How and S. Park. "Adaptive Observation Strategies for Forecast Error Minimization''. Proceedings of the International Conference on Computational Science, Beijing, 2007.

Mobile Manipulation

Robot manipulators largely rely on complete knowledge of object geometry in order to plan their motion and compute successful grasps. If an object is fully in view, its geometry can be inferred from sensor data and a grasp computed directly. If the object is occluded by other entities in the environment, manipulation based on the visible part of the object may fail; to compensate, object recognition is often used to identify the object and compute the grasp from a prior model. We are developing algorithms for geometric inference and manipulation planning that allow grasp plans to be computed with only partial information about the objects in the environment and their geometry. We are pursuing these ideas both for small-object manipulation in the home and for large-object supply-chain manipulation.

  • J. Glover, D. Rus and N. Roy. "Manipulation using Probabilistic Models of Object Geometry". Proceedings of Robotics: Science and Systems (RSS), 2008.