The Role of Natural Language in a Multimodal Interface
Philip R. Cohen, Computer Dialogue Laboratory, Artificial Intelligence Center, SRI International
Seminar on People, Computers, and Design
Stanford University, January 13, 1993
Although graphics and direct manipulation are effective interface technologies for some classes of problems, they are limited in many ways. In particular, they provide little support for identifying objects not on the screen, for specifying temporal relations, for identifying and operating on large sets and subsets of entities, and for using the context of interaction. These, on the other hand, are precisely the strengths of natural language. This talk discusses how to build interfaces that blend natural language processing and direct manipulation technologies, using the characteristic advantages of each to overcome the weaknesses of the other. Specifically, I will show how to use natural language to describe objects and temporal relations, and how to use direct manipulation to overcome hard natural language problems involving the establishment and use of context and pronominal reference. This work has been implemented in SRI's Shoptalk system, a prototype information and decision-support system for manufacturing.
Dr. Cohen is a Senior Computer Scientist in the Natural Language Group of the Artificial Intelligence Center at SRI, a Principal Researcher with Stanford's Center for the Study of Language and Information, and a Consulting Associate Professor with the Symbolic Systems Program at Stanford. His current research interests include multimodal interface design and its application to mobile computing and information management; theoretical, computational, and empirical models of dialogue; theories of intention, speech acts, and collaboration; and collaborative simulation-based training.