Publications
2004
- Arsenio, Artur M. "Children, Humanoid Robots and Caregivers", Fourth International Workshop on Epigenetic Robotics 2004.
- Arsenio, Artur M. "Developmental Learning on a Humanoid Robot", IEEE International Joint Conference on Neural Networks, Budapest, 2004.
- Arsenio, Artur M. "Learning Task Sequences from Scratch: Applications to the Control of Tools and Toys by a Humanoid Robot", IEEE Conference on Control Applications, 2004.
- Arsenio, Artur M. "Map Building from Human Computer Interactions", IEEE CVPR Workshop on Real-Time Vision for Human Computer Interaction, 2004.
- Arsenio, Artur M. "Object Recognition from Multiple Percepts", submitted to IEEE-RAS/RSJ International Conference on Humanoid Robots, 2004.
- Arsenio, Artur M. "Teaching a Humanoid Robot from Books", In International Symposium on Robotics, 2004.
- Arsenio, Artur M. "Teaching Humanoid Robots like Children: Explorations into the World of Toys and Learning Activities", submitted to IEEE-RAS/RSJ International Conference on Humanoid Robots, 2004.
- Arsenio, Artur M. "Towards an Embodied and Situated AI", In International FLAIRS Conference. 2004, Nominated for Best Paper Award.
- Arsenio, Artur M. "Figure/Ground Segregation from Human Cues", IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2004.
- Arsenio, Artur M. "An Embodied Approach to Perceptual Grouping", IEEE CVPR Workshop on Perceptual Organization in Computer Vision, 2004.
- Arsenio, Artur M. "On Stability and Tuning of Neural Oscillators: Application to Rhythmic Control of a Humanoid Robot", International Joint Conference on Neural Networks, 2004.
- Arsenio, Artur M. "Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver’s Gift", Ph.D. Thesis, M.I.T. 2004.
- Arsenio, Artur M. "Exploiting Amodal Cues for Robot Perception", submitted to special issue of International Journal of Humanoid Robotics.
- Arsenio, Artur M. and Paul Fitzpatrick. "Exploiting Cross-Modal Rhythm for Robot Perception of Objects", 2nd International Conference on Computational Intelligence, Robotics, and Autonomous Systems, Singapore, December 15 - 18, 2003.
- Aryananda, Lijin and Jeff Weber. "MERTZ: A Quest for a Robust and Scalable Active Vision Humanoid Head Robot", submitted to IEEE-RAS/RSJ International Conference on Humanoid Robots.
- Brooks, Rodney, Lijin Aryananda, Aaron Edsinger, Paul Fitzpatrick, Charles Kemp, Una-May O'Reilly, Eduardo Torres-Jara, Paulina Varshavskaya, and Jeff Weber. "Sensing and Manipulating Built-for-Human Environments", International Journal of Humanoid Robotics, Vol 1, No. 1, 2004.
- Edsinger-Gonzales, Aaron. "Design of a Compliant and Force Sensing Hand for a Humanoid Robot", Proceedings of the International Conference on Intelligent Manipulation and Grasping, 2004.
- Edsinger-Gonzales, Aaron and Jeff Weber. "Domo: A Force Sensing Humanoid Robot for Manipulation Research", in submission to Proceedings of the IEEE/RSJ International Conference on Humanoid Robotics, 2004.
- Fitzpatrick, Paul. "The DayOne Project: How Far Can a Robot Develop in 24 Hours?", Accepted for the Fourth International Workshop on Epigenetic Robotics, Genoa, August, 2004.
- Fitzpatrick, Paul and Artur Arsenio. "Feel the Beat: Using Cross-Modal Rhythm to Integrate Perception of Objects, Others, and Self", Accepted for the Fourth International Workshop on Epigenetic Robotics, Genoa, August, 2004.
- Fitzpatrick, Paul and Eduardo Torres-Jara. "The Power of the Dark Side: Using Cast Shadows for Visually-Guided Reaching", Submitted to Humanoids 2004.
- Kemp, Charles C. "Duo: A Wearable System for Learning About Everyday Objects and Actions", submitted to 8th IEEE International Symposium on Wearable Computers, 2004.
- Torres-Jara, Eduardo and Jessica Banks. "A Simple and Scalable Force Actuator", International Symposium on Robotics, 2004.
2003
- Arsenio, Artur M. "Active Vision for Sociable, Mobile Robots", In Proceedings of the Second International Conference on Computational Intelligence, Robotics, and Autonomous Systems - Special session in Robots with a Vision, Singapore, 2003.
- Arsenio, Artur M. "Embodied Vision - Perceiving Objects from Actions", IEEE International Workshop on Human-Robot Interactive Communication, 2003.
- Arsenio, Artur M. "A Robot in a Box", Accepted for publication to the 11th International Conference on Advanced Robotics (ICAR'03), Coimbra, Portugal, July 2003.
- Arsenio, Artur M. "Towards Pervasive Robotics", Accepted for publication to the International Joint Conference on Artificial Intelligence (IJCAI'03), Acapulco, Mexico, August 2003.
- Arsenio, Artur M. "Object Segmentation Through Human-Robot Interactions in the Frequency Domain", submitted to SIBGRAPI'03.
- Arsenio, Artur, Paul Fitzpatrick, Charles C. Kemp, Giorgio Metta. "The Whole World in Your Hand: Active and Interactive Segmentation", Accepted for publication in the Third International Workshop on Epigenetic Robotics, Boston, 2003.
- Fitzpatrick, Paul. "First Contact: An Active Vision Approach to Segmentation", Accepted for publication at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, Nevada, October 27 - 31, 2003.
- Fitzpatrick, Paul. "From First Contact to Close Encounters: A Developmentally Deep Perceptual System for a Humanoid Robot", PhD thesis, MIT, 2003.
- Fitzpatrick, Paul and Giorgio Metta. "Grounding Vision Through Experimental Manipulation", Accepted for publication in the Philosophical Transactions of the Royal Society: Mathematical, Physical, and Engineering Sciences, 2003.
- Fitzpatrick, Paul. "Object Lesson: Discovering and Learning to Recognize Objects", Accepted for publication at the 3rd International Conference on Humanoid Robots, Karlsruhe, Germany, October 1 - 2, 2003.
- Fitzpatrick, Paul. "Open Object Recognition for Humanoid Robots", SPIE Robotics and Machine Perception newsletter, 12(2), pp. 9, September 2003.
- Fitzpatrick, Paul. "Perception and Perspective in Robotics", Accepted for publication at the 25th Annual Meeting of the Cognitive Science Society, Boston, 2003.
- Fitzpatrick, Paul and Charles Kemp. "Shoes as a Platform for Vision", Proceedings of the 7th IEEE International Symposium on Wearable Computers, pp. 231-234, White Plains, New York, October 2003.
- Fitzpatrick, Paul, Giorgio Metta, Lorenzo Natale, Sajit Rao and Giulio Sandini. "Learning About Objects Through Action - Initial Steps Towards Artificial Cognition", Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, May 12 - 17, 2003.
- Kemp, Charles C. "Duo: A Human/Wearable Hybrid for Learning About Common Manipulable Objects", Proceedings of the 3rd International IEEE/RAS Conference on Humanoid Robots, Karlsruhe, Germany, October, 2003.
- Marjanovic, Matthew J. "Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things", PhD thesis, MIT Department of Electrical Engineering and Computer Science, June 2003.
- Martin, Martin C. "The Essential Dynamics Algorithm: Essential Results", Artificial Intelligence Memo AIM-2003-0014, Massachusetts Institute of Technology, May 2003.
- Metta, Giorgio and Paul Fitzpatrick. "Better Vision Through Manipulation", Accepted for publication in Adaptive Behavior, 2003.
2002
- Aryananda, Lijin. "Recognizing
and Remembering Individuals: Online and Unsupervised Face Recognition
for Humanoid Robot", IEEE/RSJ International Conference
on Intelligent Robots and Systems, Lausanne, Switzerland, 2002.
- Fitzpatrick, Paul and Giorgio Metta, "Towards
Manipulation-Driven Vision ", IEEE/RSJ International Conference
on Intelligent Robots and Systems, Lausanne, Switzerland, 2002.
- Fitzpatrick, Paul. "Role
Transfer for Robot Tasking", Massachusetts Institute of Technology,
Department of Electrical Engineering and Computer Science, PhD Thesis
Proposal, Cambridge, MA, 2002.
- Metta, Giorgio and Paul Fitzpatrick, "Better
Vision through Manipulation", Second International Workshop
on Epigenetic Robotics, Edinburgh, UK, August 2002.
- Metta, Giorgio, L. Natale, S. Rao, G. Sandini, "Development
of the mirror system: a computational model". In Conference
on Brain Development and Cognition in Human Infants. Emergence of Social
Communication: Hands, Eyes, Ears, Mouths. Acquafredda di Maratea,
Italy. June 7-12, 2002.
- Varchavskaya (Varchavskaia), Paulina. "Behavior-Based
Early Language Development on a Humanoid Robot", Second
International Workshop on Epigenetic Robotics, Edinburgh, UK, August
2002.
- Varchavskaia, Paulina. "Early
Pragmatic Language Development for an Infant Robot", Massachusetts
Institute of Technology, Department of Electrical Engineering and Computer
Science, Master's Thesis, Cambridge, MA, February 2002.
2001
- Adams, Bryan. "Learning
Humanoid Arm Gestures". Working Notes - AAAI Spring Symposium
Series: Learning Grounded Representations, Stanford, CA. March
26-28, 2001, pp. 1-3.
- Breazeal, Cynthia, A. Edsinger, P. Fitzpatrick and B. Scassellati. "Active Vision for Sociable Robots". Socially Intelligent Agents - The Human in the Loop, Special Issue of IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. Kerstin Dautenhahn (ed.), Volume 31, Number 5, pp. 443-453, September 2001.
- Breazeal, Cynthia. "Affective
interaction between humans and robots", The Sixth European Conference
on Artificial Life (ECAL01). Prague, Czech Republic, 2001.
- Breazeal, Cynthia."Emotive Qualities
in Robot Speech", Proceedings of the 2001 IEEERSJ International
Conference on Intelligent Robots and Systems (IROS01), Maui, HI,
2001.
- Breazeal, Cynthia. "Regulation
and Entrainment in Human-Robot Interaction". The International
Journal of Experimental Robotics, 2001.
- Breazeal, Cynthia and Brian Scassellati. "Challenges
in Building Robots that Imitate People," in Kerstin Dautenhahn and
Chrystopher Nehaniv, eds., Imitation in Animals and Artifacts,
MIT Press, 2001.
- Fitzpatrick, Paul. "Head
Pose Estimation Without Manual Initialization," Term Paper for MIT
Course 6.892, Cambridge, MA, 2001.
- Fitzpatrick, Paul."From
Word-spotting to OOV Modelling," Term Paper for MIT Course 6.345,
Cambridge, MA, 2001.
- Metta, Giorgio. "An Attentional
System for a Humanoid Robot Exploiting Space Variant Vistion," IEEE-RAS
International Conference on Humanoid Robots 2001, Tokyo, Japan,
Nov. 22--24, 2001.
- Scassellati, Brian. "Foundations
for a Theory of Mind for a Humanoid Robot", Massachusetts Institute
of Technology, Department of Electrical Engineering and Computer Science,
Cambridge, MA, PhD Thesis, June 2001.
- Scassellati, Brian."Discriminating
Animate from Inanimate Visual Stimuli," International Joint
Conference on Artificial Intelligence, Seattle, Washington, August
2001.
- Scassellati, Brian. "Investigating
Models of Social Development Using a Humanoid Robot," in Barbara
Webb and Thomas Consi, eds., Biorobotics, MIT Press, to appear,
2001.
- Varchavskaia, Paulina, Paul Fitzpatrick and Cynthia Breazeal.
"Characterizing and Processing Robot-Directed
Speech," IEEE-RAS International Conference on Humanoid Robots
2001, Tokyo, Japan, Nov. 22-24, 2001.
2000
- Adams, Bryan. "Meso: A
Virtual Musculature for Humanoid Motor Control", Massachusetts Institute
of Technology, Department of Electrical Engineering and Computer Science, Master
of Engineering Thesis, Cambridge, MA, September 2000.
- Edsinger, Aaron. "A Gestural
Language for a Humanoid Robot", Massachusetts Institute of Technology,
Department of Electrical Engineering and Computer Science, Master's
Thesis, Cambridge, MA, 2000.
- Adams, Bryan, Cynthia Breazeal, Rodney Brooks and Brian Scassellati.
"Humanoid Robots: A New Kind of Tool".
IEEE Intelligent Systems, July-August 2000.
- Breazeal, Cynthia and Lijin Aryananda. "Recognition
of Affective Communicative Intent in Robot-Directed Speech".
IEEE-RAS International Conference on Humanoid Robots 2000.
- Breazeal, Cynthia. "Sociable Machines:
Expressive Social Exchange Between Humans and Robots". Massachusetts
Institute of Technology, Department of Electrical Engineering and Computer
Science, PhD Thesis, May 2000.
- Breazeal, Cynthia, Aaron Edsinger, Paul Fitzpatrick and Brian
Scassellati. "Social Constraints
on Animate Vision". IEEE-RAS International Conference on Humanoid
Robots 2000.
- Breazeal, Cynthia, Aaron Edsinger, Paul Fitzpatrick, Brian
Scassellati and Paulina Varchavskaia. "Social
Constraints on Animate Vision". IEEE Intelligent Systems,
July-August 2000.
- Scassellati, Brian."Theory
of Mind for a Humanoid Robot," First IEEE/RSJ International
Conference on Humanoid Robotics, September, 2000. **Best Paper
Award**
- Scassellati, Brian."Theory
of mind... for a robot", presented at the American Association
of Artificial Intelligence Fall Symposium on Social Cognition and Action,
Cape Cod, Massachusetts, November, 2000.
- Scassellati, Brian."Parallel
social cognition?", presented at the American Association of
Artificial Intelligence Fall Symposium on Parallel Cognition, Cape
Cod, Massachusetts, November, 2000.
- Edsinger, Aaron and Una-May O'Reilly. "Designing
a Humanoid Robot Face to Fulfill Social Contracts". ROMAN-2000,
Fall 2000.
1999
- Breazeal, Cynthia and Brian Scassellati. "A
Context-dependent Attention System for a Social Robot". In Proceedings
of the Sixteenth International Joint Conference on Artificial Intelligence
(IJCAI99), Stockholm, Sweden, pp. 1146-1151, 1999.
- Breazeal, Cynthia and Brian Scassellati. "How
to Build Robots That Make Friends and Influence People". IROS99,
Kyongju, Korea, 1999.
Presentations
2004
- Torres-Jara, Eduardo. "A Hand Prototype", internal MIT presentation, 2004.
- Kemp, Charles C. "Shoes as a Platform for Vision", 7th IEEE International Symposium on Wearable Computers.
2003
- Arsenio, Artur. "Exploiting Cross-Modal Rhythm for Robot Perception of Objects", CIRAS 2003.
- Fitzpatrick, Paul. "First Contact: an Active Vision Approach to Object Segmentation", IROS 2003.
- Fitzpatrick, Paul. "Object Lesson: Discovering and Learning to Recognize Objects", 3rd International IEEE/RAS Conference on Humanoids Conference, Karlsruhe, Germany, October 2003.
- Fitzpatrick, Paul. "The Whole World in Your Hand: Active and Interactive Segmentation", EPIROB 2003, Boston, MA, USA, August 2003.
- Fitzpatrick, Paul. "From First Contact to Close Encounters: building a Developmentally Deep Perceptual System for a Humanoid Robot", thesis defense talk, MIT AI Lab, May 5, 2003.
2002
- Arsenio, Artur. "Macaco:
Acts and Mind in Accord", MIT AI Lab, Living Breathing Robots
Group, March 2002.
- Brooks, Rodney. "Natural
Tasking of Robots Based on Human Cues", DARPA Mobile Autonomous
Robot Software, February, 2002.
- Fitzpatrick, Paul. "Better
Vision through Poking", MIT AI Lab, Humanoid Robotics Group,
June 2002.
- Marjanovic, Matthew. "What
Has Matto Been Up To?", MIT AI Lab, Living Breathing Robots
Group, May, 2002.
- Metta, Giorgio. "Better
Vision through Manipulation", Neuro-Engineering Workshop
and Advanced School, magazzini dell'Abbondanza, Genova, Italy,
June 2002.
2001
- Adams, Bryan. "Learning
Humanoid Arm Gestures", AAAI Spring Symposium Series: Learning
Grounded Representations, Stanford, CA, March 26-28, 2001.
- Brooks, Rodney. "Natural
Tasking of Robots Based on Human Cues", DARPA Mobile Autonomous
Robot Software '01 PI Meeting, San Diego, CA. March 22, 2001.
- Fitzpatrick, Paul. "Head Pose Estimation Without Manual Initialization," Presentation for MIT Course 6.892, Cambridge, MA, 2001.
- Marjanovic, Matthew. "meso:
Simulated Muscles for a Humanoid Robot", MIT AI Lab, Humanoid
Robotics Group, August, 2001.
- Metta, Giorgio. "Lazlo's
Stuff", MIT AI Lab, Living Breathing Robots Group, August,
2001.
- Scassellati, Brian. "Foundations
for a Theory of Mind for a Humanoid Robot", Massachusetts Institute
of Technology, Department of Electrical Engineering and Computer Science,
Cambridge, MA, PhD Thesis Defense, May 2001.
- Scassellati, Brian, Bryan Adams, Aaron Edsinger and Matthew
Marjanovic. "Natural Tasking
of Robots Based on Human Cues", DARPA Mobile Autonomous Robot
Software '01 PI Meeting, San Diego, CA, Poster Presentation, March
22, 2001.
- Varchavskaia, Paulina. "Notes
on Natural Language for Robots", MIT AI Lab, Humanoid Robotics Group,
February 2001.
2000
- Brooks, Rodney. "Natural
Tasking of Robots Based on Human Interaction Cues - MIT AI Lab".
DARPA Mobile Autonomous Robot Software BAA9909, May 23, 2000.
- Breazeal, Cynthia. "Sociable
Machines: Expressive Social Exchange Between Humans and Robots".
PhD Thesis Defense, May 2000.
- Brooks, Rodney. "Computational
Environments". MIT AI Lab, Humanoid Robots Group, March 29, 2000.
- Breazeal, Cynthia, Rodney Brooks and Brian Scassellati.
"Natural Tasking of Robots
Based on Human Interaction Cues". MARS Workshop, January
11-13, 2000.
1999
- Scassellati, Brian."Scaz's
sok tutorial". MIT AI Lab, Humanoid Robots Group, November 3, 1999.
- Breazeal, Cynthia, Rodney Brooks and Brian Scassellati.
"Natural Tasking of Robots
Based on Human Interaction Cues". DARPA Mobile Autonomous Robot
Software BAA9909, July 1999.
Videos
- This video shows a 2 DOF active vision system and a 3 DOF prototype arm pushing a block off the table. It is a simple demonstration of the embedded behavior-based controller performing three behaviors: a zero-force, highly compliant mode; arm tracking of the target; and visual closed-loop control of the arm to poke the target.
- This video shows a 2 DOF active vision system and a 3 DOF prototype arm tracking a simple target, demonstrating the integration of the system's visual system with its motor system on an embedded architecture.
- This video shows a test prototype of a simple and scalable rotary series elastic actuator (SEA) that is compact and easy to build. When the actuator is not controlled, it is quite stiff. When it is controlled at zero force, it complies with gravity. When it is operating under force control, the actuator moves but handles resistance appropriately.
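For readers curious how "zero force" control works on a series elastic actuator, here is a minimal sketch of the idea: the spring deflection gives a torque measurement, and a simple loop servos that torque to a setpoint. The interface functions, gains, and spring constant below are hypothetical placeholders, not the robot's actual controller.

```python
# Minimal series-elastic-actuator force-control sketch (illustrative only).
# Assumed hypothetical interfaces: read_spring_deflection() returns the measured
# spring deflection in radians; set_motor_effort() drives the motor.

SPRING_K = 50.0    # N*m/rad, assumed spring stiffness
KP, KD = 8.0, 0.1  # assumed proportional/derivative gains on the force error

def sea_force_step(desired_torque, prev_error, dt, read_spring_deflection, set_motor_effort):
    """One control cycle: regulate output torque via the spring deflection.

    With desired_torque = 0 the actuator actively 'gets out of the way',
    which is the zero-force, gravity-compliant mode shown in the video.
    """
    measured_torque = SPRING_K * read_spring_deflection()
    error = desired_torque - measured_torque
    effort = KP * error + KD * (error - prev_error) / dt
    set_motor_effort(effort)
    return error
```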
- This clip shows the arm stiffness before and after Cog learns the implications of gravity. Before learning, the movements are less accurate. During learning, Cog samples postures across its workspace and refines the force function it uses to supply feedforward commands to posture its arm. Learning results in improved arm movement. [12.6MB]
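As a rough illustration of the feedforward learning described above, the sketch below fits a least-squares gravity model from sampled postures to the torques that held them, then predicts a feedforward torque for new postures. The features and placeholder data are assumptions for illustration, not the procedure actually used on Cog.

```python
import numpy as np

# Assumed training data: joint angles sampled across the workspace and the
# torques that held the arm still at each posture (placeholder values here).
postures = np.random.uniform(-1.5, 1.5, size=(200, 2))         # [shoulder, elbow] rad
holding_torques = np.random.uniform(-1.0, 1.0, size=(200, 2))   # placeholder data

def features(q):
    """Gravity torque varies with sines/cosines of joint angles, so use those."""
    q = np.atleast_2d(q)
    return np.hstack([np.sin(q), np.cos(q), np.ones((q.shape[0], 1))])

# Fit the force function by least squares.
W, *_ = np.linalg.lstsq(features(postures), holding_torques, rcond=None)

def feedforward_torque(q):
    """Predicted gravity-compensation torque for posture q."""
    return features(q) @ W
```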
- In this clip, Cog's torso moves randomly under reflexive control. When the extremes of its range of motion are reached, Cog's model of pain is activated and the associated reflex is refined so that the motion does not extend as far. As adaptation proceeds, Cog learns to balance itself. [13.4MB]
- Cog's arm and torso movements are displayed at the top of the screen. As Cog moves, the GUI shows the multi-joint muscle model overlaid on Cog's joints and how it behaves. The model itself can be modified through the GUI shown at the bottom of the screen. [7.1MB]
- Cog's two-degree-of-freedom hand, equipped with tactile sensors, has a reflex that grasps and extends in a manner similar to that of primate infants. Contact inside the hand causes a short-term grasp; contact on the back of the hand causes the hand to stretch open. [6.4MB]
- Cog is trying to identify its own arm. It generates a particular rhythmic arm movement and observes it visually. It correlates the visual signature of the motion with its commands to move the arm, and thus forms a representation of the arm in the image. [6.9MB]
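A minimal sketch of this kind of self-identification, assuming grayscale frames and a recorded command signal: pixels whose motion energy correlates with the commanded rhythm are labeled as the arm. This is illustrative only, not the robot's actual implementation.

```python
import numpy as np

def arm_mask_from_rhythm(frames, command_signal, threshold=0.6):
    """Label pixels whose motion correlates with the commanded rhythm.

    frames: array of shape (T, H, W), grayscale images.
    command_signal: length-T array, e.g. the commanded joint position.
    Returns a boolean (H, W) mask of pixels moving 'in time' with the command.
    """
    motion = np.abs(np.diff(frames.astype(float), axis=0))         # (T-1, H, W)
    cmd = np.abs(np.diff(command_signal))                           # (T-1,)
    cmd = (cmd - cmd.mean()) / (cmd.std() + 1e-8)
    mot = (motion - motion.mean(axis=0)) / (motion.std(axis=0) + 1e-8)
    corr = np.tensordot(cmd, mot, axes=([0], [0])) / len(cmd)       # per-pixel correlation
    return corr > threshold
```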
- In these two videos Cog reaches for an object as identified by its
visual attention system. It recognizes its own arm (shown in green)
and identifies the arm endpoint (a small red square). When the object
is contacted, the object's motion (differentiated from the arm's) is
used as a cue for object segmentation. It's a block! [752KB]
[744KB]
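The segmentation cue can be pictured roughly as follows: once contact is detected, pixels that change between frames but do not belong to the already-identified arm are taken as the object. This toy version assumes simple frame differencing and a known arm mask.

```python
import numpy as np

def segment_poked_object(frame_before, frame_after, arm_mask, motion_thresh=15):
    """Very rough sketch of motion-cue segmentation at the moment of contact.

    Pixels that change when the object is poked, minus pixels already known
    to belong to the arm, are taken as the object region.
    """
    motion = np.abs(frame_after.astype(int) - frame_before.astype(int)) > motion_thresh
    return motion & ~arm_mask
```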
- The M4 robot consists of an active vision robotic head integrated
with a Magellan mobile platform. The robot integrates vision-based navigation
with human-robot interaction. It operates a portable version of the
attentional systems of Cog and Lazlo with specific customization for
a thermal camera. Navigation, social preferences and protection of self
are fulfilled with a model of motivational drives. Multi-tasking behaviors
such as night time object detection, thermal-based navigation, heat
detection, obstacle detection and object reconstruction are based upon
a competition model. [33.6MB]
- Kismet has the ability to learn to recognize and remember people
it interacts with. Such social competence leads to complex social behavior,
such as cooperation, dislike or loyalty. Kismet has an online and unsupervised
face recognition system, where the robot opportunistically collects,
labels, and learns various faces while interacting with people, starting
from an empty database. [47MB]
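One way to picture such an online, unsupervised recognizer is nearest-neighbor clustering over face features with a novelty threshold. The sketch below, with an assumed embed() feature extractor and illustrative parameters, captures the idea of starting from an empty database and adding identities opportunistically; it is not Kismet's actual algorithm.

```python
import numpy as np

class OnlineFaceDatabase:
    """Toy sketch of online, unsupervised identity clustering."""

    def __init__(self, embed, new_person_threshold=0.8):
        self.embed = embed                  # assumed face-feature extractor
        self.threshold = new_person_threshold
        self.prototypes = []                # one running-average feature vector per person

    def observe(self, face_image):
        x = self.embed(face_image)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Known person: refine their prototype and return their label.
                self.prototypes[best] = 0.9 * self.prototypes[best] + 0.1 * x
                return best
        # Unfamiliar face: create a new identity starting from this sample.
        self.prototypes.append(x)
        return len(self.prototypes) - 1
```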
- Kismet uses utterances as a way to manipulate its environment through
the beliefs and actions of others. It has a vocal behavior system forming
a pragmatic basis for higher level language acquisition. Protoverbal
behaviors are influenced by the robot's current perceptual, behavioral
and emotional states. Novel words (or concepts) are created and managed.
The vocal label for a concept is acquired and updated. [7.8MB]
- This video clip shows the pose of a subject's head being tracked.
The initial pose of the head is not known. Whenever the head is close
to a frontal position, its pose can be determined accurately and tracking
is reset. In this example, the mesh is shaded in two colors, showing
where the left and right parts of the face are believed to be. [5.7MB]
- This video clip shows part of a training session in which Kismet
is taught the structure of a sorting task. The first part shows Kismet
acquiring some task-specific vocabulary -- in this case, the word "yellow".
The robot is then shown green objects being placed on one side, and
yellow objects being placed on another. Throughout the task the presenter
is commenting in the shared vocabulary. Towards the end of the video,
Kismet makes predictions based on what the presenter says. [1.5MB]
- This video clip shows an example of Cog mimicking the movement of
a person. The visual attention system directs the robot to look and
turn its head toward the person. Cog observes the movement of the person's
hand, recognizes that movement as an animate stimulus, and responds
by moving its own hand in a similar fashion. [100KB]
- We have also tested the performance of this mimicry response with
naive human instructors. In this case, the subject gives the robot the
American Sign Language gesture for "eat", which the robot mimics back
at the person. Note that the robot has no understanding of the semantics
of this gesture; it is merely mirroring the person's action. [332KB]
- The visual routines that track the moving object operate at 30 Hz,
and can track multiple objects simultaneously. In this movie, Cog is
interested in one of the objects being juggled. The robot attempts to
imitate the parabolic trajectory of that object as it is thrown in the
air and caught. [1MB]
- Cog does not mimic every movement that it sees. Two types of social
cues are used to indicate which moving object out of the many objects
that the robot is tracking should be imitated. The first criterion is
that the object displays self-propelled movement. This eliminates objects
that are either stationary or that are moving in ways that are explained
by naive rules of physics. In this video clip, when the robot observes
the ball moving down the ramp, Cog interprets the movement as linear
and following gravity and ignores the motion. When the same stimulus
moves against gravity and rolls uphill, the robot becomes interested
and mimics its movement. [884KB]
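The "naive physics" test can be approximated as follows: fit a gravity-only, constant-acceleration prediction to the start of a tracked trajectory and flag the motion as self-propelled when the prediction fails. The units, gravity direction, and tolerance below are assumptions for illustration.

```python
import numpy as np

def is_self_propelled(positions, dt, gravity=(0.0, 9.8), tolerance=0.5):
    """Flag a tracked trajectory as animate if naive physics cannot explain it.

    positions: (T, 2) track of the object (assumed calibrated, +y downward).
    A constant-acceleration (gravity-only) model is seeded from the first two
    samples; if later samples deviate strongly, the motion is 'self-propelled'.
    """
    p0, v0 = positions[0], (positions[1] - positions[0]) / dt
    t = np.arange(len(positions))[:, None] * dt
    predicted = p0 + v0 * t + 0.5 * np.array(gravity) * t**2
    residual = np.linalg.norm(positions - predicted, axis=1).mean()
    return residual > tolerance
```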
- The second social cue that the robot uses to pick out a moving trajectory
is the attentional state of the instructor. (Whatever the instructor
is looking at is assumed to be the most important part of the scene.)
Although our robots currently lack the complex visual processing to
determine the instructor's eye direction, we can accurately obtain the
orientation of the instructor's head and use this information as an
indicator of attention. In this movie, a large mirror has been placed
behind the robot to allow the video camera to record both the robot's
responses and the head orientation of the instructor. When the instructor
looks to the left, the movement of his left arm becomes more salient
and Cog responds by mimicking that motion. When the instructor looks
to the right, his right arm movements are mimicked. [948KB]
- This video clip demonstrates the simple ways that Cog interprets
the intentions of the instructor. Note that unlike the other video clips,
in this example, the instructor was given a specific sequence of tasks
to perform in front of the robot. The instructor was asked to "get the
robot's attention and then look over at the block". Cog responds by
first fixating the instructor and then shifting its gaze to the block.
The instructor was asked to again get the robot's attention and then
to reach slowly for the block. Cog looks back at the instructor, observes
the instructor moving toward the block, and interprets that the instructor
might want the block. Although Cog has relatively little capability
to assist the instructor in this case, we programmed the robot to attempt
to reach for any target that the instructor became interested in. [632KB]
- A video clip of Cog's new hand demonstrating various grasping behaviors.
The two-degree-of-freedom hands utilize series elastic actuators and rapid
prototyping technology. [700KB]
- A video clip of Cog's new force-control torso exhibiting virtual
spring behavior. The ability to use virtual spring control on the torso
allows for full body/arm integration and for safe human-robot interaction.
[348KB]
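Virtual spring behavior is essentially impedance control: the commanded torque pulls each joint toward a desired posture like a damped spring, so a person can push against the torso and it yields rather than fighting back rigidly. A one-line sketch with illustrative gains, not the robot's actual values:

```python
def virtual_spring_torque(q, q_desired, q_dot, stiffness=20.0, damping=2.0):
    """Impedance-style 'virtual spring' law for one joint (illustrative gains)."""
    return stiffness * (q_desired - q) - damping * q_dot
```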
- This video clip shows the attentional/control system of Lazlo (same
kinematics as in Cog's head). Visual processing uses color cues (detects
brightly colored blobs and skin tone color), motion (optic flow and
background subtraction), and binocular disparity (used to control vergence).
Inertial (gyro-based) image stabilization against external disturbances
is also shown. Particular care has been devoted to the design of the
controller to obtain smooth and continuous movements. [1.3MB]
- In this video clip, Kismet engages people in a proto-dialog. The
robot does not speak any language; it babbles so don't expect to understand
what it's saying. The turn-taking dynamics are quite fluid and natural.
The robot sends a variety of turn-taking cues through vocal pauses,
gaze direction and postural changes. The first segment is with two of
Kismet's caregivers. The next two are with naive subjects. The last
is edited from one long interaction. [9.3MB]
- In this video clip, Kismet correctly interprets 4 classes of affective
intent: praise, prohibition, attentional bids, and soothing. These were
taken from cross-lingual studies with naive subjects. The robot's expressive
feedback is readily interpreted by the subjects as well. [5.3MB]
- In this video clip, Kismet says the phrase "Do you really think so"
with varying emotional qualities. In order, the emotional qualities
correspond to calm, anger, disgust, fear, happy, sad, interest. [3.6MB]
- In this video clip, Kismet is searching for a toy. Its facial expression and eye movements make it readily apparent to an observer when the robot has discovered the colorful block on the stool. The attention system is always running, enabling the robot to respond appropriately to unexpected stimuli (such as the person entering from the right-hand side of the frame to take away the toy). Notice how Kismet appears a bit crestfallen when its toy is removed. [3.9MB]
These three movies show the visual attention system
in action:
- This clip illustrates the color saliency process. The left frame
is the video signal, the right frame shows how the colorful block is
particularly salient. The middle frame shows the raw saliency value
due to color. The bright region in the center is the habituation influence.
[6.8MB]
- This clip illustrates the motion saliency process. [5.9MB]
- This clip illustrates the face saliency process (center) and the
habituation process (right). [14.3MB]
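A toy sketch of how such saliency maps might be combined, with habituation subtracted so the currently attended region eventually loses the competition. The weights stand in for the motivational biases of the real attention system and are not its actual parameters.

```python
import numpy as np

def attention_target(color_sal, motion_sal, face_sal, habituation,
                     weights=(1.0, 1.0, 1.0)):
    """Combine saliency maps and pick the next gaze target (toy version).

    All inputs are same-shape 2-D arrays; habituation is high where the robot
    has been looking, so a long-fixated region gradually loses the competition.
    """
    total = (weights[0] * color_sal +
             weights[1] * motion_sal +
             weights[2] * face_sal -
             habituation)
    y, x = np.unravel_index(np.argmax(total), total.shape)
    return (x, y), total
```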