Post-Desktop User Interfaces: iStuff and the Search for the Great Unified Input Theory

Jan Borchers, Stanford Computer Science
   borchers@stanford.edu

Seminar on People, Computers, and Design
Stanford University, October 18, 2002

In a way, Mark Weiser's vision of ubiquitous computing has long since become a reality: today, most of our interaction with computers happens through the embedded devices surrounding us - from car electronics to cell phones to consumer appliances - and not through a desktop computer with its now-traditional graphical user interface. However, the ubicomp goal of calm technology disappearing into the background has not been achieved; instead, new technologies have made life more stressful. One reason for this dilemma is that HCI research has not yet managed to provide user interface metaphors and modalities for these new systems "beyond the desktop" that are proven to be effective, domain-appropriate, and satisfying. Why is that?

We have observed that, while mature tools make GUIs easy to prototype and experiment with, research in post-desktop UIs often gets bogged down in technical details: just try to add a simple physical button to your research prototype, and you will find yourself soldering PCBs, running wires, and writing serial device drivers before you know it.

To address this situation, we have begun to create the iStuff toolkit, a collection of wireless physical user interface components that makes integrating post-desktop devices into your user interface prototype as simple as adding a line of Java code to your application. The toolkit leverages our existing ubiquitous computing infrastructure, which allows us to move much of the device complexity onto a proxy computer in the environment, making the devices themselves simple, cheap, and easy to reproduce or replace with commercial technology. A flexible software framework that creates multiple abstraction layers of device and event semantics, together with our interactive PatchPanel event intermediary, makes the toolkit platform-independent and allows new devices to be integrated into an application dynamically, without even relaunching it.
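To make the "one line of Java code" idea concrete, here is a minimal sketch of what such an integration might look like. All class and method names below (IStuffButton, ButtonListener, addListener) are illustrative placeholders rather than the toolkit's actual API; a real application would receive events routed from the proxy computer through the PatchPanel intermediary rather than the simulated callback used here so that the example runs stand-alone.

    // Hypothetical sketch: wiring a physical button into an application.

    /** Placeholder callback interface for physical button presses. */
    interface ButtonListener {
        void buttonPressed(String buttonId);
    }

    /** Placeholder proxy object for a wireless iStuff button. */
    class IStuffButton {
        private final String id;
        IStuffButton(String id) { this.id = id; }

        /** Registers a listener; a real implementation would subscribe to
         *  button events arriving from the room's proxy infrastructure.
         *  Here we simulate one press so the sketch is runnable on its own. */
        void addListener(ButtonListener listener) {
            listener.buttonPressed(id);
        }
    }

    public class SlideAdvancer {
        public static void main(String[] args) {
            IStuffButton nextButton = new IStuffButton("podium-button-1");
            // The one line an application developer would add:
            nextButton.addListener(id -> System.out.println("Advance slide (pressed: " + id + ")"));
        }
    }

The point of the sketch is the last line: the application only states what should happen when the button fires, while device discovery, wireless communication, and event routing stay hidden behind the toolkit's abstraction layers.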

Deploying our toolkit has greatly facilitated our exploration of fundamental questions about human-computer interaction in post-desktop environments: What is the relative usefulness of different modalities for manipulating information in an interactive room? What do focus and selection mean in a multi-user, multi-machine, multi-application, multi-device, and multi-display environment? What, if anything, will be the equivalent of mouse and keyboard in the world beyond the desktop? In other words, while we may never find the Great Unified Input Theory of the post-desktop era, we hope to at least get a little closer to it - and perhaps give new meaning to the letters of the "GUI" acronym.

Jan Borchers is Acting Assistant Professor of Computer Science at Stanford University, where he works on Human-Computer Interaction in the Interactivity Lab. His current research interests include user interface frameworks for ubiquitous computing, interactive environments, and interaction with multimedia. Since 1995 he has also led a series of award-winning interactive multimedia exhibit projects that let you do fun things such as conduct the Vienna Philharmonic. He holds a PhD in Computer Science from Darmstadt University in Germany for his work on design pattern languages for HCI, which led to his pioneering book "A Pattern Approach To Interaction Design". Jan has published his work in dozens of international conference and journal papers, and enjoys playing jazz piano, going rock climbing, and playing Bad Golf of the kind that justifies the capitalization.
