Communicating with Machines Using Speech, Gesture, and Facial Expression
Justine Cassell, MIT Media Lab
Seminar on People, Computers, and Design
Stanford University May 22, 1998
Humans communicate with one another using speech, prosodic cues, hand gestures, gaze and facial expression. When we try to communicate with computers, however, we are reduced to using text or limited speech. And computers respond with text or monotone speech, depriving us of the cues provided by those other communicative modalities. In this talk I will discuss advances in giving computers the ability to produce and interpret not only speech but also intonation, hand gesture, and facial movements. I will show examples of 'embodied conversational agents' -- animated human-like figures who increasingly are able to produce and interpret the full range of human conversational behaviors.
Justine Cassell is a faculty member at MIT's Media Laboratory. She holds a double PhD from the University of Chicago, in Linguistics and Psychology. Cassell studies how autonomous agents and toys can be designed with psychosocial competencies, based on an understanding of human linguistic, cognitive, and social abilities. Current projects include embodied conversational agents, interactive storytelling systems, and toys that encourage technological fluency in both boys and girls. Justine Cassell is co-editor of From Barbie to Mortal Kombat: Gender and Computer Games, published by MIT Press (to appear Fall '98), and has published in journals as diverse as Poetics Today and Computer Graphics.