Ten Myths of Multimodal Interaction
Sharon Oviatt, Oregon Graduate Institute
Seminar on People, Computers, and Design
Stanford University, February 26, 1999
Our ability to develop robust multimodal systems will depend on knowledge of the natural integration patterns that typify people's combined use of different input modes. This talk will begin with an overview of the goals, methods, and results of our recent research on users' multimodal interaction while speaking and writing to interactive map systems. The results of this work address basic issues such as: (1) when users do and do not interact multimodally, (2) how multimodal input is integrated and synchronized, (3) what propositional content is carried by different modes, and (4) whether users are consistent over time and similar to one another in their integration patterns. Based on the empirical evidence from this research, as well as what the linguistics literature indicates about natural multimodal communication, I will summarize what we currently know about multimodal integration from a cognitive science perspective. I will then discuss ten currently fashionable computational "myths" about multimodal integration, and how they run contrary to the empirical evidence. The long-term goal of this research is the development of predictive models of natural modality integration to guide the design of emerging multimodal architectures.
Dr. Sharon Oviatt is an associate professor and one of the co-founders of the Center for Human-Computer Communication (CHCC) in the Dept. of Computer Science at the Oregon Graduate Institute of Science & Technology (OGI). She has previously taught and conducted research at the Artificial Intelligence Center at SRI International, the University of Illinois, the University of California, and Oregon State University. Her current research focuses on human-computer interaction, interface design for multimodal/multimedia systems and speech systems, portable & telecommunication devices, and highly interactive systems. This work is funded primarily by grants and contracts from the National Science Foundation, DARPA, Intel, Microsoft, Boeing, NTT Data, Southwestern Bell, and other corporate sources. She is an active member of the international HCI and speech communities, has published over 50 scientific articles, and has served on numerous government advisory panels and editorial boards. Her work is featured in recent special issues on "Multimodal Interfaces" appearing in both IEEE Multimedia and Human-Computer Interaction. The content of this talk will appear in an upcoming 1999 issue of Communications of the ACM. Further information about Dr. Oviatt and CHCC is available at http://www.cse.ogi.edu/CHCC.