In Support of Multimedia Conversations
Ricoh Silicon Valley
Seminar on People, Computers, and Design
Stanford University, February 11, 2000
This presentation describes a set of research prototypes designed to improve the effectiveness of computer-mediated communication. Most electronic communication today lacks the richness of face-to-face interaction. Email users quickly learn about the misunderstandings that can occur with humorous or sarcastic messages (notwithstanding the familiar "smiley-face" emoticon :-) and even experienced correspondents have difficulty conveying the proper urgency or importance of a message. Beyond emotional content, face-to-face communication also provides a shared context that allows participants to easily reference topics of discussion. Speakers naturally point to images or pick out particular lines from a spreadsheet they may be talking about. Such actions have no counterparts in existing electronic messaging systems.
Our prototypes address these deficiencies through a number of novel human-computer interaction techniques that use naturally recorded speech to convey emotional cues and provide a context for referring to photos, documents, and other multimedia objects. The portable "StoryTrack" device acts as a kind of digital photo album that explicitly supports the creation of stories or narratives illustrated by digital photos. Another prototype runs in a more standard application environment and uses a "point & talk" interaction model for easily composing and viewing multimedia messages. Preliminary usage results will be presented that demonstrate the effectiveness of these designs for particular types of conversations.
Greg Wolff leads the information appliances research group at Ricoh Silicon Valley. After receiving degrees in Cognitive Science from MIT (BS) and Carnegie Mellon (MS), Greg developed one of the first WYSIWYG hypertext markup language authoring tools in 1989 while at IBM's Human Factors lab. At Ricoh, he has made contributions to the fields of machine learning and automatic speech reading. His current interests include an open source project that will enable non-programmers to develop Web applications, as well as methods for enriching computer-mediated communication.