Learning from demonstration
In the space environment, teleoperating robots like NASA's Robonaut can be slow and tedious due to communications time delay. Instead, we envision a robot that can recognize a teleoperator's intended motion and autonomously continue the execution of recognized routine tasks, either for the remainder of the task or for long enough to mask the operator's perceived time delay. To do this, the robot learns a library of generalized activities offline from a training set of user demonstrations. During online operations, the robot can then perform real-time recognition of a user's teleoperated motions and, if requested, autonomously execute the remainder of an activity.
We present an approach to learning complex physical motions from human demonstration that (1) provides flexibility during execution while robustly encoding a human’s intended motions, and (2) automatically determines the relevant features of a motion so that they can be preserved during autonomous execution in new situations.
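One simple way to encode a set of demonstrations as a probabilistic tube is to resample each demonstration onto a common time base, then compute a per-timestep mean and covariance; the covariance captures how much the demonstrations agree at each point of the motion. The sketch below illustrates this idea; the function name and the choice of linear resampling are our own assumptions, not the exact procedure used in this work.

```python
import numpy as np

def learn_flow_tube(demos, n_steps=50):
    """Illustrative flow-tube learning: resample demonstrations onto a
    common time base, then compute per-step means and covariances.

    demos: list of (T_i, D) arrays of end-effector positions.
    Returns means of shape (n_steps, D) and covariances (n_steps, D, D).
    """
    D = demos[0].shape[1]
    resampled = []
    for demo in demos:
        t_old = np.linspace(0.0, 1.0, len(demo))
        t_new = np.linspace(0.0, 1.0, n_steps)
        # Resample each dimension independently onto the shared time base.
        resampled.append(np.column_stack(
            [np.interp(t_new, t_old, demo[:, d]) for d in range(D)]))
    stacked = np.stack(resampled)        # (n_demos, n_steps, D)
    means = stacked.mean(axis=0)         # (n_steps, D)
    covs = np.empty((n_steps, D, D))
    for t in range(n_steps):
        # Spread of the demonstrations at this point in the motion.
        covs[t] = np.cov(stacked[:, t, :], rowvar=False)
    return means, covs
```

Steps of the motion where the covariance is small are exactly the "relevant features" that must be preserved; wide covariance marks portions where the robot is free to deviate.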
We also introduce an approach to real-time motion recognition that (1) leverages temporal information to successfully model motions that may be non-Markovian, (2) provides fast real-time recognition of motions in progress by using an incremental dynamic time warping approach, and (3) employs the probabilistic flow tube representation that enables our method to recognize learned motions despite varying environment states.
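The key property of incremental dynamic time warping is that each new observation only requires updating one column of the DTW cost table, giving O(template length) work per frame instead of recomputing the full alignment. The class below is a generic sketch of that update, assuming Euclidean distance between frames; it is not the exact recognizer described above.

```python
import numpy as np

class IncrementalDTW:
    """Generic incremental DTW: fold each new observation into a running
    cost column so a partial motion can be scored against a template
    in O(M) per frame, where M is the template length."""

    def __init__(self, template):
        self.template = np.asarray(template)   # (M, D) reference motion
        self.prev_col = None                   # last DTW cost column

    def update(self, obs):
        """Incorporate one observation; return the best alignment cost
        of the observed prefix against any prefix of the template."""
        dists = np.linalg.norm(self.template - obs, axis=1)
        M = len(self.template)
        col = np.empty(M)
        if self.prev_col is None:
            col = np.cumsum(dists)             # first column: vertical path
        else:
            col[0] = self.prev_col[0] + dists[0]
            for i in range(1, M):
                col[i] = dists[i] + min(self.prev_col[i],      # repeat obs
                                        self.prev_col[i - 1],  # diagonal
                                        col[i - 1])            # repeat template
        self.prev_col = col
        return col.min()  # best prefix-to-prefix cost so far
```

In a recognition loop, one such object would be kept per learned motion in the library, and the motion with the lowest running cost is the current best hypothesis.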
Autonomous execution demos
Here we have taught the PR2 how to “pour” into a bowl by kinesthetically moving its arm around to demonstrate the motion. A total of five demonstrations were made, all with the right arm. In each demonstration, the robot’s initial arm position and the object positions were randomized into different setups. The robot has to learn that the bowl is a relevant object in the motion, use stereo vision to determine the location of the relevant object(s), and then autonomously execute the “pour into bowl” task with both arms (even though it was only ever taught with the right arm).
We have taught the PR2 how to “shake” a bottle by kinesthetically moving its arm around to demonstrate the motion. Motions like this are difficult to specify to the robot through a planner because the trajectory of the motion matters greatly while the goal state does not, so they benefit particularly from learning from demonstration. In this video, five demonstrations were made of the “shake” motion. The robot learns not only a trajectory of the motion but also the probabilistic spread throughout the motion, which it can use during autonomous execution to determine how much it is allowed to deviate from the trajectory in the face of disturbances.
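A natural way to turn the learned probabilistic spread into a deviation bound during execution is a Mahalanobis-distance check against the per-timestep mean and covariance: the robot tolerates disturbances as long as it remains within a chosen number of standard deviations of the demonstrated trajectory. The helper below is a minimal sketch of that test; the function name, the n-sigma threshold, and the regularization term are our own assumptions.

```python
import numpy as np

def within_tube(position, mean, cov, n_sigma=2.0):
    """Return True if the current end-effector position lies within
    n_sigma standard deviations of the learned trajectory at this
    timestep, measured by Mahalanobis distance under the per-step
    covariance from the demonstrations."""
    diff = position - mean
    # Small regularization in case the demonstrations were nearly identical
    # at this step, which would make the covariance singular.
    cov_reg = cov + 1e-9 * np.eye(len(cov))
    d2 = diff @ np.linalg.inv(cov_reg) @ diff
    return np.sqrt(d2) <= n_sigma
```

Where the demonstrations were tightly clustered (e.g. approaching the bottle), the tube is narrow and little deviation is allowed; where they varied (e.g. mid-shake), the tube is wide and the controller has more freedom to absorb disturbances.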
The user has demonstrated the “move box to platform” motion five times, each from a different initial position. This clip shows ATHLETE executing the motion autonomously from a new initial position setup based on the learned demonstrations.