Tasking Communities of Agents Through Adaptable Multimodal Interfaces
Adam Cheyer, SRI International
cheyer@ai.sri.com
Seminar on People, Computers, and Design
Stanford University, May 28, 1999
In today's computing environments, people interact with programs, web sites, devices, and other computational resources individually, one at a time. However, as multitudes of software agents begin populating our netspace, the predominant interface paradigm may shift towards a more delegation-based style, with the user saying: "Here's what I want to do, you programs figure out who needs to be involved."
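The shape of that delegation style is easy to sketch. The fragment below is a minimal illustration in Python, not OAA's actual interface; the names (Facilitator, register, delegate) are hypothetical. Agents advertise capabilities to a facilitator, and the user states a goal without naming any particular program:

```python
# Minimal sketch of delegation-based tasking (illustrative names, not OAA's API).

class Facilitator:
    """Routes user goals to whichever registered agents can handle them."""

    def __init__(self):
        self.capabilities = {}  # capability name -> list of handler callables

    def register(self, capability, handler):
        """An agent advertises a capability it can perform."""
        self.capabilities.setdefault(capability, []).append(handler)

    def delegate(self, capability, request):
        """Forward a request to every agent that claims the capability."""
        handlers = self.capabilities.get(capability, [])
        if not handlers:
            raise LookupError(f"no agent can handle {capability!r}")
        return [handler(request) for handler in handlers]


# Two independent "agents" join the community by registering capabilities.
facilitator = Facilitator()
facilitator.register("schedule_meeting", lambda req: f"calendar agent booked: {req}")
facilitator.register("schedule_meeting", lambda req: f"email agent notified attendees of: {req}")

# The user states what they want; the facilitator figures out who is involved.
for result in facilitator.delegate("schedule_meeting", "project review on Friday"):
    print(result)
```

The point of the pattern is the indirection: the requester never learns which agents were involved, so agents can join or leave the community without breaking the caller.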
Within the context of a distributed framework called the Open Agent Architecture (OAA), we have been exploring ways in which interactions among software components can be made flexible and cooperative. Simultaneously, we are looking at new user interface (UI) approaches for interacting with our evolving communities of agents (programs). Some of the UI qualities we're most interested in include:
- Dynamic: As new agents join or leave the community, what the user can say and do changes (see the sketch after this list). Users may interact with individual applications or specify complex tasks that involve many automated participants.
- Adaptable display: A user should be able to access the same set of agent capabilities from any class of interface environment, ranging from the richness of immersive 3D environments, to standard HTML browsers, all the way down to a simple telephone or 2-way pager. Presentation of results must adapt accordingly.
- Multimodal: When a professor at a blackboard communicates with a classroom of students, information is conveyed through simultaneous combinations of drawing, writing, speaking and gesturing. Tasking a community of software agents should be as easy.
- Multi-user Collaborative: Agents cooperate with each other when solving tasks. The human-agent interaction framework should also enable multiple users to work with each other and with agents in the same workspace.
- Social: Once agent systems begin to accept naturally expressed requests from one or more users, this opens the possibility for agents to take on a more lifelike presence. How will the use of avatars and improved dialog capabilities change user-agent interactions?
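As a companion to the "Dynamic" point above, here is a small sketch of how a community's speakable vocabulary might track its membership. Again, the names (AgentCommunity, join, leave, available_tasks) are illustrative assumptions rather than OAA's interface:

```python
# Sketch: the set of valid user requests is derived from live agents
# (hypothetical names, not the OAA API).

class AgentCommunity:
    def __init__(self):
        self.agents = {}  # agent name -> set of capabilities it provides

    def join(self, name, capabilities):
        self.agents[name] = set(capabilities)

    def leave(self, name):
        self.agents.pop(name, None)

    def available_tasks(self):
        """What the user can currently ask for: the union of live capabilities."""
        tasks = set()
        for caps in self.agents.values():
            tasks |= caps
        return sorted(tasks)


community = AgentCommunity()
community.join("calendar", {"schedule meeting", "check availability"})
community.join("mail", {"send message"})
print(community.available_tasks())  # all three tasks are currently speakable

community.leave("mail")
print(community.available_tasks())  # "send message" disappears with its agent
```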
Our progress and thoughts in these areas will be illustrated through examples and demonstrations taken from several OAA-based applications.
Adam Cheyer is a computer scientist at SRI International's Artificial Intelligence Center, and is co-director of SRI's Computer Human Interaction Center (CHIC!).