CS547 Human-Computer Interaction Seminar (Seminar on People, Computers, and Design)
Fridays 12:50-2:05 · Gates B01 · Open to the public
Jan Borchers · Stanford Computer Science
Post-Desktop User Interfaces: iStuff and the Search for the Great Unified Input Theory
October 18, 2002

In a way, Mark Weiser's vision of ubiquitous computing has long since become a reality: today, most of our interaction with computers happens through embedded devices surrounding us, from car electronics to cell phones to consumer devices, not through a desktop computer with the now-traditional graphical user interface. However, the ubicomp goal of calm technology disappearing into the background has not been achieved; instead, new technologies have made life more stressful. One reason for this dilemma is that HCI research has not yet managed to provide user interface metaphors and modalities for these new systems "beyond the desktop" that are proven effective, domain-appropriate, and satisfying.

Why is that? We observed that, while tools have made GUIs easy to prototype and experiment with, research in post-desktop UIs often gets bogged down in technical details: just try to add a simple physical button to your research prototype, and you will find yourself soldering PCBs, running wires, and writing serial device drivers before you know it. To address this situation, we have begun to create the iStuff toolkit of wireless physical user interface components, which makes integrating post-desktop devices into a user interface prototype as simple as adding a line of Java code to your application. The toolkit leverages our existing ubiquitous computing infrastructure, which allowed us to move much of the device complexity into a proxy computer in the environment, making the devices themselves simple, cheap, and easy to reproduce or replace with commercial technology. A flexible software framework that creates multiple abstraction layers of device and event semantics, together with our interactive PatchPanel event intermediary, makes the toolkit platform-independent and allows new devices to be integrated into an application dynamically, without even relaunching it.
Deploying our toolkit has profoundly facilitated our exploration of fundamental questions about human-computer interaction in post-desktop environments: What is the relative usefulness of different modalities for manipulating information in an interactive room? What do focus and selection mean in a multi-user, multi-machine, multi-application, multi-device, and multi-display environment? What, if anything, will be the equivalent of mouse and keyboard in the world beyond the desktop? In other words, while we may not find the Great Unified Input Theory of the post-desktop era, we hope at least to get a little closer to it, a theory that could give new meaning to the letters of the "GUI" acronym.

