What does the robot do?
Cog is a research platform and, as such, was never intended to carry out any single task. Over the years we have implemented many different capabilities on the robot. Some of these are still in use today; others are no longer current research topics. To find out what we are working on right now, look at our Current Projects page. Here are some of the capabilities that we have implemented on Cog:

Eye Movements
There are four basic types of human eye movement, and our robots are designed to perform very similar movements. Some of these behaviors are learned from experience.
Head and Neck Orientation Behaviors
Allows the robot to orient its head in the direction of a target. author: Scassellati
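The orientation controller itself is described in our publications; as a rough illustration of the basic idea, bringing a target seen off-center in the image back toward the image center, a minimal proportional controller might look like the sketch below. The frame size, gains, and the set_head_velocity motor interface are illustrative assumptions, not Cog's actual API.

    # Sketch: turn the head so that a visual target drifts toward the image center.
    # Frame size, gains, and the motor interface are assumed for illustration.
    IMAGE_W, IMAGE_H = 128, 128          # assumed frame size
    GAIN_PAN, GAIN_TILT = 0.005, 0.005   # assumed proportional gains

    def orient_head(target_x, target_y, set_head_velocity):
        """Drive the pan/tilt axes so the target moves toward the image center."""
        err_x = target_x - IMAGE_W / 2    # positive: target is right of center
        err_y = target_y - IMAGE_H / 2    # positive: target is below center
        pan_vel = GAIN_PAN * err_x        # turn toward the target horizontally
        tilt_vel = -GAIN_TILT * err_y     # tilt up when the target is above center
        set_head_velocity(pan_vel, tilt_vel)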
Detecting People
The robot can detect people in the environment by looking for patterns of light and dark shading, for oval-like shapes, and for regions of skin tone. authors: Edsinger and Scassellati
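As a rough illustration of the skin-tone cue mentioned above, a simple per-pixel color test over an RGB image is sketched below. The thresholds are a common textbook heuristic chosen for illustration, not the values used on the robot.

    import numpy as np

    def skin_tone_mask(rgb):
        """Very rough skin-tone detector: flags pixels whose red channel
        dominates green and blue. Thresholds are illustrative only."""
        rgb = rgb.astype(np.int32)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return ((r > 95) & (g > 40) & (b > 20)
                & (r > g) & (r > b)
                & ((r - np.minimum(g, b)) > 15))

    # Usage sketch: mask = skin_tone_mask(frame); large connected regions in the
    # mask become candidates to test for oval shape and shading patterns.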
Imitating Head Nods
The robot imitates a person (or anything with a face) as they nod or shake their head. author: Scassellati

Primitive Visual Feature Detectors
The robot's visual system uses a set of primitive feature detectors to find interesting objects in the visual scene, including detectors for motion, color, and faces.
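As one example of a primitive feature detector, motion can be found by differencing successive grayscale frames. The sketch below shows only the general idea; the threshold is an arbitrary illustrative value, not Cog's implementation.

    import numpy as np

    def motion_map(prev_frame, curr_frame, threshold=15):
        """Mark pixels whose grayscale intensity changed noticeably between
        two consecutive frames. The threshold is an illustrative value."""
        diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
        return (diff > threshold).astype(np.float32)  # 1.0 where motion occurred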
Visual Attention
The attention module combines low-level features (from motion, color, and face detectors) with high-level motivational influences and with habituation mechanisms, allowing the robot to select interesting objects in the visual scene. A simple sketch of this kind of weighted combination appears after the descriptions below. authors: Scassellati and Breazeal

Reflexive Arm Withdrawal
When the top of Cog's hand contacted an object, the robot would reflexively withdraw the hand, just as young infants do. authors: Marjanovic and Williamson

Reaching to a Visual Target
Cog learned to reach for a visual target. The robot first learned to orient its head toward the object it was looking at, and then learned to move its arm out toward that object. The learning was unsupervised: the robot learned to reach by trial and error, without anyone telling it whether it had done the right thing. authors: Marjanovic, Scassellati, and Williamson
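The visual attention description above amounts to a weighted sum of feature maps whose gains are set by the robot's current motivations, with recently attended locations suppressed. A minimal sketch of that combination follows; the gain names, decay rate, and habituation boost are assumptions for illustration, not the parameters of Cog's attention system.

    import numpy as np

    def attention_step(feature_maps, gains, habituation, decay=0.9, boost=0.5):
        """One step of a simple attention scheme: combine feature maps with
        motivation-driven gains, suppress habituated regions, pick the most
        salient point, then update the habituation map. Constants are assumed."""
        salience = sum(gains[name] * feature_maps[name] for name in feature_maps)
        salience = salience * (1.0 - habituation)   # habituated spots look less interesting
        y, x = np.unravel_index(np.argmax(salience), salience.shape)
        habituation *= decay                        # old habituation fades over time
        habituation[y, x] = min(1.0, habituation[y, x] + boost)  # attended spot habituates
        return (x, y), habituation

    # Usage sketch: feature_maps = {"motion": m, "color": c, "face": f}, each a 2-D
    # array; a drive to find people might raise gains["face"], a drive to play with
    # toys might raise gains["color"].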
Rhythmic Arm Movements
Cog's arms can perform basic repetitive movements using a pair of coupled neural oscillators at each joint. The oscillators are connected to incoming sensory signals, which makes the resulting behavior very robust. Using the same oscillators in different arm positions, the robot was able to perform a number of rhythmic tasks. author: Williamson
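The oscillator model itself is spelled out in Williamson's papers; one common formulation of such a unit is a pair of mutually inhibiting neurons (a Matsuoka-style oscillator) with a sensory feedback term that lets the rhythm entrain to the arm's motion. The sketch below uses that formulation with illustrative time constants, gains, and a simple Euler integration step; these are assumptions, not Cog's actual parameters.

    def matsuoka_step(state, feedback=0.0, dt=0.005, tau=0.05, T=0.6,
                      beta=2.5, w=2.0, c=1.0, k=1.0):
        """One Euler step of a two-neuron Matsuoka-style oscillator.
        state = (u1, v1, u2, v2); feedback is a sensory signal (e.g. the joint
        angle) that entrains the rhythm. All constants are illustrative."""
        u1, v1, u2, v2 = state
        y1, y2 = max(u1, 0.0), max(u2, 0.0)          # rectified firing rates
        du1 = (-u1 - beta * v1 - w * y2 + c - k * feedback) / tau
        dv1 = (-v1 + y1) / T
        du2 = (-u2 - beta * v2 - w * y1 + c + k * feedback) / tau
        dv2 = (-v2 + y2) / T
        new_state = (u1 + dt * du1, v1 + dt * dv1, u2 + dt * du2, v2 + dt * dv2)
        output = max(new_state[0], 0.0) - max(new_state[2], 0.0)  # drives the joint
        return new_state, output

    # Usage sketch: starting from a small asymmetric state such as (0.1, 0.0, 0.0, 0.0)
    # and stepping repeatedly, feeding the measured joint angle back in, yields a
    # steady oscillation that can drive one joint of the arm.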
Playing a Drum
Using the neural oscillators, Cog can also hit a drum in a steady rhythm. The robot listens to the beat it hears and attempts to synchronize its drumming to that rhythm. authors: Marjanovic and Williamson
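Synchronizing to a heard beat requires estimating the period of the incoming rhythm and nudging the arm's oscillation toward it. The sketch below shows one simple way to do that from detected beat onset times; the onset detection itself and the adaptation rate are assumptions, not taken from the Cog implementation.

    def update_drum_period(onset_times, current_period, rate=0.2):
        """Estimate the beat period from recent detected onsets and move the
        robot's own drumming period toward it. rate is an assumed gain."""
        if len(onset_times) < 2:
            return current_period                    # not enough beats heard yet
        intervals = [b - a for a, b in zip(onset_times[:-1], onset_times[1:])]
        heard_period = sum(intervals) / len(intervals)
        return current_period + rate * (heard_period - current_period)

    # Usage sketch: each time a beat onset is detected (e.g. a peak in the audio
    # envelope), append its timestamp to onset_times and call this function; the
    # returned period sets the frequency of the arm's oscillator.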