Current Research Projects

Lazlo - Humanoid Face Project
[Image: Lazlo looking in a mirror]
Aaron Edsinger, Una-May O'Reilly, Brian Scassellati, Chris Scarpino and Cynthia Breazeal
To date, neither Cog nor Lazlo has a face. Our goal is to design and fabricate an iconic, humanoid face for each robot that fosters a suitable social contract between the robot and humans. Another goal of this project is to shift the robot aesthetic toward a design language that uses strong curvilinear and organic forms, realized through state-of-the-art design processes and materials.

With a well-designed face, Cog and Lazlo will be able to convey an appropriate social presence. This will help them regulate interaction, receive appropriate stimuli and, in the longer term, learn imitation tasks.

As of this writing, we have fabricated and installed the first prototype of the face on the robot head platform, Lazlo.


Learning Ego-motion Relations Via Sensorimotor Correlation
[Image: Cog plays with a slinky]
Matthew Marjanovic
Our group has been developing a robotic torso, called Cog, with the intention of creating a test-bed on which to study theories of cognitive science and artificial intelligence. The goal is to create a robot which is capable of interacting with the world — including both objects and people — in a human-like way, so that we may study human intelligence by trying to implement it.

Such interaction requires rich sensory and motor apparatus: our two-armed, two-eyed robot has over twenty actuated joints, and twice as many sensors, ranging from torque sensors on motors to the four cameras composing the eyes. To control and coordinate so many degrees of freedom, one could in theory measure the properties of each joint, limb, lens, and CCD, pull out a physics textbook, and work out the interacting kinematics by hand. However, this is a very brittle approach, requiring extensive analysis and simulation of a particular mechanism — an effort which, even if it could be completed for such a complex system, may be wasted if the mechanism is modified or less than perfectly calibrated. A far more robust approach is to allow the robot to interact with its environment and learn a predictive model of that interaction from the experience itself.
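
As a toy illustration of this learning approach (not the actual Cog software), a forward model can be fit to logged motor-babbling data with ordinary least squares; the names, dimensions, and noise level below are assumptions made purely for this Python sketch:

    # Hypothetical sketch: learn a forward model from experience instead of
    # deriving the kinematics by hand. During "motor babbling" we log each
    # motor command and the sensory change that followed, then fit a simple
    # least-squares predictor.
    import numpy as np

    rng = np.random.default_rng(0)
    N_JOINTS, N_SENSORS, N_SAMPLES = 4, 6, 500

    # Stand-in for logged experience: commands sent and sensor deltas observed.
    commands = rng.uniform(-1.0, 1.0, size=(N_SAMPLES, N_JOINTS))
    true_map = rng.normal(size=(N_JOINTS, N_SENSORS))   # the unknown "plant"
    sensor_deltas = commands @ true_map \
        + 0.05 * rng.normal(size=(N_SAMPLES, N_SENSORS))

    # Fit a linear forward model: motor command -> predicted sensory change.
    forward_model, *_ = np.linalg.lstsq(commands, sensor_deltas, rcond=None)

    def predict_sensory_change(command):
        """Predict how a motor command will show up in the sensors."""
        return command @ forward_model

    print(predict_sensory_change(rng.uniform(-1.0, 1.0, size=N_JOINTS)))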

The goal of this project is to develop a relatively general system by which Cog can learn the causal relations between commands to its motors and input from its sensors, primarily vision and mechanical proprioception. This way, the robot can learn firsthand how its own movement is reflected in perceptible activity in the external world. Conversely, such a model will allow the robot to decide how to generate actions based on their intended effect.
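
In the linear toy case above, running the model in reverse amounts to a pseudoinverse: given a desired sensory change, pick the command whose predicted effect comes closest. This is only an illustrative sketch under the same assumptions, not the project's implementation:

    # Hypothetical sketch: invert a learned linear forward model to choose a
    # command for an intended sensory effect.
    import numpy as np

    rng = np.random.default_rng(1)
    forward_model = rng.normal(size=(4, 6))   # e.g. the map learned above

    def command_for_effect(model, desired_sensor_delta):
        """Pick the command whose predicted effect best matches the target."""
        # Least-squares solution of command @ model ~= desired_sensor_delta.
        return desired_sensor_delta @ np.linalg.pinv(model)

    target = np.zeros(6)
    target[0] = 0.1           # ask for motion in a single sensory channel
    cmd = command_for_effect(forward_model, target)
    print(cmd, cmd @ forward_model)   # chosen command and its predicted effect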

Such causal relationships are the root of the sense of kinesthesia, as well as the beginnings of what could be considered a sense of self. By embedding knowledge of the effects of actions directly in their sensory results, one can avoid the classic "symbol grounding problem" of artificial intelligence.


Meso: A Biochemical Subsystem for a Humanoid Robot
[Image: Cog turns a crank]
Bryan Adams
While humanoid robotics often focuses on building behaviors around vision and occasionally hearing or touch, a great deal of the information governing the organization and execution of limb movement comes from the energy metabolism that supplies muscles with energy. A robot, however, has an effectively unlimited supply of energy, and as such lacks these senses (or even a reasonable mechanical analog to use as a sensor). Yet a robot must have some notion of energy consumption to successfully produce humanoid movement.
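
A minimal sketch of what "some notion of energy consumption" could look like in software, assuming a purely virtual energy store that drains with commanded effort; none of the names or constants below come from the Meso system itself:

    # Hypothetical virtual energy store: drains in proportion to commanded
    # joint torques, recovers slowly, and exposes a fatigue signal that other
    # behaviors could read like a proprioceptive sense.
    class VirtualEnergyStore:
        def __init__(self, capacity=100.0, recovery_rate=0.5):
            self.capacity = capacity
            self.level = capacity
            self.recovery_rate = recovery_rate   # passive replenishment per second

        def step(self, joint_torques, dt=0.01):
            """Drain energy in proportion to the effort commanded this tick."""
            effort = sum(abs(t) for t in joint_torques)
            self.level -= effort * dt
            self.level = min(self.capacity, self.level + self.recovery_rate * dt)
            self.level = max(0.0, self.level)

        def fatigue(self):
            """A 0..1 signal: 0 when fully 'rested', 1 when depleted."""
            return 1.0 - self.level / self.capacity

    # Usage: update once per control cycle; arm behaviors might scale back
    # their commanded torques as fatigue() rises.
    store = VirtualEnergyStore()
    store.step(joint_torques=[0.8, 1.2, 0.4], dt=0.01)
    print(store.fatigue())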

As we try to build robots that can operate effectively in a human environment, biological models have inspired many of our efforts. Theories of human attention inspired visual subsystems, neurological structures inspired arm control modules, and speech pattern recognition in infants inspired an auditory sense. By creating behaviors that respond to environmental stimuli, the robot not only acts more like a human, but also gains a better context for deciphering and imitating the behavior of other humans.


Theory of Mind for a Humanoid Robot
[Image: Cog and Brian Scassellati]
Brian Scassellati
One of the fundamental social skills for humans is a theory of other minds. This set of skills allows us to attribute beliefs, goals, and desires to other individuals. To take part in normal human social dynamics, a robot must not only know about the properties of objects, but also the properties of animate agents in the world. This research project attempts to implement basic social skills on a humanoid robot using models of social development in both normal and autistic children.

Human social dynamics rely upon the ability to correctly attribute beliefs, goals, and percepts to other people. This set of metarepresentational abilities, collectively called a "theory of mind", allows us to understand the actions and expressions of others within an intentional or goal-directed framework. The recognition that other individuals have knowledge, perceptions, and intentions that differ from our own is a critical step in a child's development and is believed to be instrumental in self-recognition, in grounding language acquisition, and possibly in the development of imaginative and creative play. These abilities are also central to what defines human interaction. Normal social interactions depend upon the recognition of other points of view, the understanding of other mental states, and the recognition of complex non-verbal signals of attention and emotional state.
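
One low-level ingredient of recognizing animate agents, sketched here purely for illustration and not taken from the project itself, is separating things that move only when acted upon from things that start, stop, and redirect their own motion; the heuristic and threshold below are assumptions:

    # Hypothetical animacy cue: flag a tracked trajectory as animate if it
    # keeps changing its own motion rather than moving ballistically.
    import numpy as np

    def looks_animate(positions, dt=1.0 / 30.0, accel_threshold=0.5):
        """positions: (T, 2) array of image-plane positions for one tracked entity."""
        velocities = np.diff(positions, axis=0) / dt
        accelerations = np.diff(velocities, axis=0) / dt
        # Variability of acceleration as a crude self-propelled-motion cue.
        return np.std(accelerations) > accel_threshold

    # Example: a ball rolling in a straight line vs. a hand that darts around.
    t = np.linspace(0.0, 1.0, 30)
    ball = np.stack([t, 0.2 * t], axis=1)                     # constant velocity
    hand = np.stack([np.sin(7 * t), np.cos(11 * t)], axis=1)  # self-propelled wiggle
    print(looks_animate(ball), looks_animate(hand))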
