Capabilities


What does the robot do?

Cog is a research platform and, as such, was never intended to carry out any one particular task. Over the years, we have implemented many different capabilities on the robot. Some of these are still in use today, while others are no longer current research topics. To find out what we are working on right now, see our Current Projects page.

Here are some of the capabilities that we have implemented on Cog:


Human-like Eye Movements

There are four basic types of human eye movement, and our robots are designed to perform very similar movements. Some of these behaviors are learned from experience. A sketch of the corresponding control laws follows the list.

  • Saccades (rapid, ballistic eye movements): Scassellati
  • Smooth pursuit tracking (following a moving target with your eyes): Scassellati
  • Vestibulo-ocular reflex (a reflex that maintains eye fixation as your head and torso move): Varchavskaia and Scassellati
  • Vergence: coming soon!
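
Below is a minimal sketch, not Cog's actual controllers, of how the first three movement types can be written as one-axis control laws: a saccade is a ballistic jump driven by retinal error, smooth pursuit is a velocity servo on the target's retinal slip, and the vestibulo-ocular reflex counter-rotates the eye against head motion measured by a gyroscope. The gains and signal names are illustrative assumptions.

    SACCADE_GAIN = 1.0   # learned map from retinal error to a ballistic jump
    PURSUIT_GAIN = 0.8   # fraction of the target's retinal slip matched per step

    def saccade(eye_angle, retinal_error):
        """Ballistic move: apply the full learned correction in one jump."""
        return eye_angle + SACCADE_GAIN * retinal_error

    def smooth_pursuit(eye_velocity, retinal_slip):
        """Velocity servo: adjust eye velocity to null the target's slip."""
        return eye_velocity + PURSUIT_GAIN * retinal_slip

    def vestibulo_ocular_reflex(eye_velocity, head_rate):
        """Counter-rotate the eye against head motion measured by a gyro."""
        return eye_velocity - head_rate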

Watch it in Action:

quicktime movie

For More Information See:

  • Social Constraints on Animate Vision by Cynthia Breazeal, Aaron Edsinger, Paul Fitzpatrick, and Brian Scassellati.



Head and Neck Orientation Behaviors

This behavior allows the robot to orient its head in the direction of a target. author: Scassellati
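
As one hypothetical way to realize this (the gain and deadband values are assumptions, not the published implementation), the neck can slowly turn to re-center the eyes in their orbits after the eyes have locked onto a target, while the vestibulo-ocular reflex above keeps the gaze stable during the head motion:

    def orient_head(neck_angle, eye_angle, gain=0.2, deadband=0.05):
        """Turn the neck toward where the eyes point (angles in radians)."""
        if abs(eye_angle) > deadband:
            neck_angle += gain * eye_angle
        return neck_angle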

Watch it in Action:

quicktime movie

For More Information See:

  • Social Constraints on Animate Vision by Cynthia Breazeal, Aaron Edsinger, Paul Fitzpatrick, and Brian Scassellati.



Face and Eye Detection

The robot can detect people in the environment by looking for patterns of light and dark shading, by looking for oval-like shapes, and by looking for regions of skin tone. authors: Edsinger and Scassellati
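
The light/dark shading test can be sketched as a toy ratio-template check in the spirit of the eye-finding paper cited below: the eye regions of a candidate window should be darker than the forehead and cheeks. The 14x16 window size, the region layout, and the all-relations rule here are illustrative assumptions.

    import numpy as np

    def region_mean(window, y, x, h, w):
        """Average brightness of one rectangular region of a grayscale window."""
        return window[y:y + h, x:x + w].mean()

    def looks_like_face(window):
        """Check a 14x16 grayscale window: eyes darker than forehead and cheeks."""
        forehead = region_mean(window, 0, 2, 3, 12)
        left_eye = region_mean(window, 4, 2, 3, 4)
        right_eye = region_mean(window, 4, 10, 3, 4)
        left_cheek = region_mean(window, 8, 2, 4, 4)
        right_cheek = region_mean(window, 8, 10, 4, 4)
        return (forehead > left_eye and forehead > right_eye and
                left_cheek > left_eye and right_cheek > right_eye)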

For More Information See:

  • Eye Finding via Face Detection for a Foveated, Active Vision System by Brian Scassellati.
  • Social Constraints on Animate Vision by Cynthia Breazeal, Aaron Edsinger, Paul Fitzpatrick, and Brian Scassellati.



Imitating Head Nods

The robot imitates a person (or anything with a face) as they nod or shake their head. author: Scassellati
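
A hypothetical version of the decision step (not the published mechanism; the motion threshold is an assumption): track the face's image position over a short window, classify the dominant axis of oscillation, and then mirror the result with the robot's own head.

    def classify_head_motion(xs, ys, min_range=5.0):
        """xs, ys: tracked face-center positions in pixels over a short window.
        Returns 'nod' (vertical motion), 'shake' (horizontal), or None."""
        x_range = max(xs) - min(xs)
        y_range = max(ys) - min(ys)
        if max(x_range, y_range) < min_range:
            return None          # the face barely moved
        return 'nod' if y_range > x_range else 'shake'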

Watch it in Action:

quicktime movie

For More Information See:

  • Imitation and Mechanisms of Shared Attention: A Developmental Structure for Building Social Skills by Brian Scassellati.



Primitive Visual Feature Detectors

The robot's visual system uses a set of primitive feature detectors to find interesting objects in the visual scene. We have already implemented the following detectors, sketched in code after the list:

  • Motion detection (using the temporal differences in consecutive images): Scassellati
  • Skin color filter (finds regions that contain skin-tone pixels): Fitzpatrick
  • Color saliency (finds regions of highly saturated color): Scassellati
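
Minimal sketches of the three detectors are below; the thresholds and the crude red-dominance skin test are illustrative assumptions, not the tuned filters used on the robot.

    import numpy as np

    def motion_map(prev_gray, cur_gray, thresh=15):
        """Motion: threshold the temporal difference of consecutive frames."""
        return np.abs(cur_gray.astype(int) - prev_gray.astype(int)) > thresh

    def skin_map(rgb):
        """Skin tone: mark pixels whose red channel dominates green and blue."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        return (r > 95) & (r > g + 15) & (r > b + 15)

    def color_saliency_map(rgb, thresh=60):
        """Color saliency: mark highly saturated pixels (max minus min channel)."""
        spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
        return spread > thresh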

For More Information See:

  • A Context-Dependent Attention System for a Social Robot by Cynthia Breazeal and Brian Scassellati.



Visual Attention Mechanism

The attention module combines low-level features (from motion, color, and face detectors) with high-level motivational influences and with habituation mechanisms to allow the robot to select interesting objects in the visual scene. authors: Scassellati and Breazeal
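
The combination step can be sketched as a weighted sum of feature maps minus a habituation map, with the gains set by the motivational state; the weight and habituation constants below are invented for illustration.

    import numpy as np

    def attend(feature_maps, gains, habituation, decay=0.95, boost=0.3):
        """feature_maps and gains: dicts keyed by detector name ('motion', ...).
        habituation: a 2-D array the caller keeps between frames.
        Returns the (row, col) of the most salient, least-habituated point."""
        saliency = sum(gains[name] * fmap for name, fmap in feature_maps.items())
        saliency = saliency - habituation
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        habituation *= decay          # earlier suppression gradually recovers
        habituation[y, x] += boost    # the attended spot becomes less novel
        return (y, x)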

For More Information See:

  • A Context-Dependent Attention System for a Social Robot by Cynthia Breazeal and Brian Scassellati.



Reflex Arm Withdrawal

When the top of Cog's hand contacted an object, it would reflexively withdraw the hand (just as young infants do). authors: Marjanovic and Williamson

Watch it in Action:

quicktime movie



Reaching to a Visual Target

Cog learned to reach for a visual target. The robot first learned to orient its head toward the object it was looking at, and then learned to move its arm out toward that object. The learning was unsupervised; that is, the robot learned to reach by trial and error, without anyone telling it whether it had done the right thing. authors: Marjanovic, Scassellati, and Williamson
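
One way to sketch this kind of trial-and-error learning (the coarse-grid representation, learning rate, and units below are assumptions, not the published algorithm): store one arm command per gaze direction and, after each reach, nudge that entry by the visually observed miss.

    import numpy as np

    N = 8                                # coarse grid over gaze directions
    reach_map = np.zeros((N, N, 2))      # one 2-D arm command per grid cell

    def grid_cell(pan, tilt):
        """Quantize a gaze direction, each axis in [-1, 1], to a cell index."""
        i = min(N - 1, max(0, int((pan + 1) / 2 * N)))
        j = min(N - 1, max(0, int((tilt + 1) / 2 * N)))
        return i, j

    def update_after_reach(pan, tilt, miss, rate=0.1):
        """miss: where the hand landed relative to the target, in arm-command
        units; no teacher is involved, only the robot's own observation."""
        i, j = grid_cell(pan, tilt)
        reach_map[i, j] -= rate * np.asarray(miss)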

Watch it in Action:

quicktime movie

For More Information See:

  • Self-Taught Visually-Guided Pointing for a Humanoid Robot by Matthew Marjanovic, Brian Scassellati and Matthew Williamson.

Oscillatory Arm Movements

Cog's arms can perform some basic repetitive movements using a pair of coupled neural oscillators at each joint. The oscillators are connected to incoming sensory signals, which makes the behavior very robust. Using the same oscillators in different positions, the robot was able to perform the following tasks (a sketch of the oscillator dynamics appears after the list). author: Williamson

  • Playing with a slinky
  • Turning a crank
  • Sawing
  • Swinging a pendulum
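
Here is a minimal sketch of a pair of coupled, mutually inhibiting neurons of the Matsuoka type; all constants and the form of the sensory feedback term are illustrative assumptions rather than the values used on the robot.

    def oscillator_step(state, feedback=0.0, dt=0.005,
                        tau=0.1, tau_v=0.2, beta=2.5, w=2.5, tonic=1.0):
        """One Euler step of two mutually inhibiting neurons.
        state = (x1, v1, x2, v2); the joint is driven by y1 - y2."""
        x1, v1, x2, v2 = state
        y1, y2 = max(0.0, x1), max(0.0, x2)
        dx1 = (-x1 - beta * v1 - w * y2 + tonic + feedback) / tau
        dv1 = (-v1 + y1) / tau_v
        dx2 = (-x2 - beta * v2 - w * y1 + tonic - feedback) / tau
        dv2 = (-v2 + y2) / tau_v
        state = (x1 + dt * dx1, v1 + dt * dv1,
                 x2 + dt * dx2, v2 + dt * dv2)
        output = max(0.0, state[0]) - max(0.0, state[2])
        return state, output

In this sketch, feeding a measured joint signal back in through the feedback term is what would let the rhythm entrain to the arm's natural dynamics and to whatever load is attached.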

Watch it in Action:

quicktime movie

For More Information See:

  • Robot Arm Control Exploiting Natural Dynamics by Matthew Williamson.



Playing the Drums

Using the neural oscillators, Cog can also hit a drum in a steady rhythm. The robot listens to the beat and attempts to synchronize its drumming to the rhythm it hears. authors: Marjanovic and Williamson
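
A hypothetical entrainment rule (not the published mechanism; the adaptation rate is an assumption): estimate the beat period from the gaps between detected drum onsets and nudge the arm oscillator's period toward it.

    def entrain_period(current_period, onset_times, rate=0.2):
        """onset_times: recent detected beat times in seconds, oldest first."""
        if len(onset_times) < 2:
            return current_period
        gaps = [b - a for a, b in zip(onset_times, onset_times[1:])]
        heard_period = sum(gaps) / len(gaps)
        return current_period + rate * (heard_period - current_period)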

Watch it in Action:

quicktime movie



