Video from the Cog Project

Overview of the Cog Project (1999)

These videos consist of interviews with several members of the Cog group, who demonstrate and discuss their most recent work. To view an accompanying article and alternate formats, visit the web page on Cog at MIT Video Productions.

General Footage

Head/Eyes/Visual Routines

  • Head/Eye Orientation
  • Saccadic Eye Movement
  • Saccade to Motion
  • Smooth Pursuit Tracking
  • Vestibulo-Ocular Reflex
  • Face Detection
  • Eye Finding
  • Imitation of Head Nods

Arms/Motor Control

  • Reflex Withdrawal
  • A Safety Demonstration
  • Cog's Arms
  • Oscillator-Driven Motor Control
  • Sawing
  • Drumming


Social Interaction (1996)

This clip shows some 1995 footage of Cog interacting with Prof. Rod Brooks. The robot is attending to visual motion and orienting its head and neck toward that motion. The arm and hand motions shown in this clip are simple repetitive movements that are not interactive.
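At a high level, the behavior in this clip is a loop: detect where motion occurs in the image, then issue head commands that bring that motion toward the center of view. Below is a minimal sketch of such a loop, assuming simple frame differencing and proportional control; the function names, thresholds, and gains are illustrative assumptions, not the actual Cog attention system.

    import numpy as np

    def motion_target(prev_frame, frame, diff_threshold=15, min_pixels=50):
        """Return the (row, col) centroid of pixels that changed between two
        grayscale frames, or None when there is too little motion to attend to."""
        moved = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold
        if moved.sum() < min_pixels:
            return None
        rows, cols = np.nonzero(moved)
        return rows.mean(), cols.mean()

    def orient_toward(target, frame_shape, gain=0.1):
        """Convert an image-plane target into incremental pan/tilt commands
        that turn the head toward the motion (simple proportional control)."""
        center_row, center_col = frame_shape[0] / 2.0, frame_shape[1] / 2.0
        pan_step = gain * (target[1] - center_col)   # positive step = look right
        tilt_step = gain * (center_row - target[0])  # sign convention: positive = look up
        return pan_step, tilt_step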


Reaching to a Visual Target (1996)

One of the larger sensorimotor integration tasks implemented on Cog so far is the ability to reach to a visual target. Cog learns this task by trial and error. The first video clip shows the robot attempting to reach to where it is looking before any learning has taken place. Notice that the hand ends up far from the point the robot is looking at.

Each time the robot attempts to reach for a target and fails, it learns from that mistake. By waving its hand, the robot can determine the point it actually reached and make an incremental refinement based on that error signal. The second video clip shows the images from the robot's camera during one training trial. The high-speed change in camera position is the saccade that marks the beginning of the trial. After that, you can see the arm start to move. While the arm is in motion, we fade to the motion detection and grouping display to show how the robot finds its arm within the visual field.
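One scheme consistent with this description is a coarse lookup table from gaze direction to arm command that is refined slightly after every trial. The sketch below is a hypothetical illustration under that assumption; the helper functions, table dimensions, and update rule are not taken from the actual Cog implementation.

    import numpy as np

    def train_reach_map(saccade_to_target, ballistic_reach, wave_and_locate_hand,
                        n_pan=17, n_tilt=17, n_joints=4, trials=2000,
                        rate=0.2, explore=0.05, seed=0):
        """Hypothetical trial-and-error reach training.
        saccade_to_target(): foveate a target, return its (pan, tilt) gaze cell.
        ballistic_reach(cmd): execute an open-loop arm reach.
        wave_and_locate_hand(): wave the hand, find it visually, and return
        the gaze cell the hand actually ended up in."""
        rng = np.random.default_rng(seed)
        # One entry per gaze cell: the arm command currently believed to
        # bring the hand to that point in the visual field.
        reach_map = np.zeros((n_pan, n_tilt, n_joints))
        for _ in range(trials):
            pan, tilt = saccade_to_target()      # the saccade marks the trial start
            # Execute the current best guess plus a little exploration noise.
            cmd = reach_map[pan, tilt] + explore * rng.standard_normal(n_joints)
            ballistic_reach(cmd)                 # open-loop reach, no visual feedback
            hand_pan, hand_tilt = wave_and_locate_hand()
            # Error signal: the cell the robot aimed at versus the cell it hit.
            # The executed command is a genuine sample of "a command that reaches
            # the cell the hand landed in", so blend that entry toward it; over
            # many trials, aiming at a cell comes to land the hand there.
            error = cmd - reach_map[hand_pan, hand_tilt]
            reach_map[hand_pan, hand_tilt] += rate * error
        return reach_map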
The final clip shows the robot reaching to visual targets after approximately three hours of self-training. After training, the robot was instructed to reach toward any moving object. This clip shows the robot reaching for a toy ball being waved in front of it. Notice that the hand reflexes are not yet integrated with the reaching behavior, so we need to place the ball into the robot's hand.

