Visual Instruments for an Interactive Mural

ACM SIGCHI CHI 99 Extended Abstracts, 234-235

Terry Winograd
Computer Science Department
Stanford University
Stanford, CA 94305-9035 USA
+1 650 723 2780

François Guimbretière
Computer Science Department
Stanford University
Stanford, CA 94305-9035 USA
+1 650 723 0618


ABSTRACT
We present a design for interacting with a prototype device combining a large, high-resolution wall-mounted display and an optically tracked laser pointer. Interactions are supported by an object-based, multi-layer, OpenGL-based graphics system, Millefeuille. We focus here on use-based design considerations for an experimental interface, currently being implemented, based on visual instruments.


KEYWORDS
Interaction design, laser pointer, tracking, gesture, GUI, pen


We are developing a number of interaction devices to be integrated into an interactive workspace [7]. This paper describes a design for interacting with one of these: a large, high-resolution wall-mounted interactive display called an interactive mural. The mural prototype uses tiled back-projection by 8 VGA projectors onto a 2' by 6' screen. The device is intended for up-close multi-person use, combining the advantages of a workstation (high resolution) and a whiteboard (multi-person use with on-screen interaction).

Although many different input devices are being considered for the workspace, the experiment presented here explores the use of one simple device: a hand-held laser pointer tracked by behind-screen cameras. We are studying ways to distinguish multiple pointers (for group work), use pointer orientation information, add secondary devices (such as buttons), etc. The goal of this experiment is to provide functionality with the simplest device: one person using a pointer with a push-button on-off switch for the beam.

This design constraint, along with the benefits (and constraints) of a large vertical surface, has led to a design that differs considerably from conventional GUI objects and widgets. We plan to test its effectiveness both on the mural and on other devices that use a direct on-screen pointer or stylus (e.g., a pen-based tablet computer).


The design is based on some key device and use properties:

Pointer-oriented actions at varying distances

A laser pointer can be used in two different modes. Close to the display surface (tip-positioning mode), the user can position the pointer tip precisely before turning on the beam and can control position accurately while moving. From farther away (beam-positioning mode), the user has only an approximate idea of where the beam will appear when turned on, and accuracy is limited by the unsteadiness of the hand. In both modes, the computer does not know the position of the pointer except when the beam is activated (unlike a mouse, for which positioning and clicking are distinct).

We looked for solutions that would work in both modes, although not necessarily optimally in both. This ruled out, for example, a button that triggers an action the moment the beam goes on (the equivalent of mouse-down), since a beam-positioning user cannot be sure where that point will land.

Large display size

Many aspects of standard GUI design assume that it is convenient to reach an object at an arbitrary location on the display, such as a menu bar or toolbar at a screen edge. We tried to minimize the use of affordances that would require a "reach". This includes the design of a visual instrument for 2-D manipulation that can modify the size and location of an object without reaching to its edges.

Information-centric application model

The overall display is not divided into application regions or windows, but consists of a collection of visual objects, which represent objects in the application domain (documents, charts, drawings, timelines, etc.). The mural object manager dispatches events, such as gestures, to objects. The actions taken by an object depend on its type.

User knowledge

The intended use of the interactive workspace includes interactive examination and modification of complex visual objects (e.g., plans, diagrams, drawings) by professionals using both generic and domain-specific tools. The interface needs to be easy to learn and remember, but need not be obvious to a walk-up user with no prior knowledge.
This means that simple conventions and gestures can be used, but most affordances still need to be made visible so that they do not demand expertise or memorization. In addition, we considered more general desiderata, such as providing uniform affordances across the collection of visual instruments, and supporting a clean, coherent visual style that does not rely on complex "decorations" (borders, scrollbars, toolbars, etc.) to provide a locus for action.


The basic interaction operations are "gesture and sweep" instead of "point, click and type".

The fundamental actions with the laser pointer are stroke-based: gestures over objects and sweeps across visual elements called action bars. Both have the same basic structure: the beam goes on, follows some path on the display, and then goes off. This is like Unistrokes [3], in that each on/off sequence constitutes an interaction unit. A sweep is a simple gesture (a straight or curved line with no reversals) that crosses one or more action bars. A sweep-like gesture that crosses no bars is ignored. In addition, some instruments provide a space in which pointer strokes are uninterpreted, either leaving digital ink or being ignored.
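The "no reversals" criterion for a sweep can be approximated by checking that the stroke's heading never turns too sharply between samples. The sketch below illustrates one such test; the function name and the turn threshold are our own illustrative choices, not part of the system described.

```python
import math

def is_sweep(points, max_turn_deg=60.0):
    """Heuristic sweep test: the stroke never reverses direction.

    `points` is the list of beam positions sampled between beam-on
    and beam-off.  We treat the stroke as a sweep if the heading
    between successive samples never turns more than `max_turn_deg`
    at once, so straight and gently curved paths pass while
    back-and-forth scribbles fail.
    """
    headings = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (x0, y0) != (x1, y1):
            headings.append(math.atan2(y1 - y0, x1 - x0))
    for h0, h1 in zip(headings, headings[1:]):
        turn = abs(math.degrees(h1 - h0))
        turn = min(turn, 360.0 - turn)   # wrap angle difference to [0, 180]
        if turn > max_turn_deg:
            return False
    return True
```

A straight or gently curved path passes; a path that doubles back (a jab, for instance) fails, which is what lets jabs and sweeps be distinguished from the same on/off stroke data.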

The set of interpreted gestures is object-class dependent.

A particular gesture may have one effect over objects of one type, and a different one (or none) over objects of another type. In some contexts a gesture is interpreted as a command; in others it is an input (e.g., a character or digit).

A set of universal gestures apply to all objects.

This corresponds to the universal action buttons on the original Xerox Star [5], standard menu items (cut, copy, paste, properties, etc.) on conventional GUIs, and universal gesture sets in other pen-based interfaces [1,2].
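The combination of class-dependent and universal gestures amounts to a two-level dispatch table: look up the gesture for the object's class first, then fall back to the universal set. A minimal sketch of that idea follows; all names (gesture labels, object classes, the registration decorator) are hypothetical, not from the system itself.

```python
# Two-level gesture dispatch: class-specific handlers shadow a
# universal fallback set; unrecognized gestures are ignored.

UNIVERSAL = {}   # gesture name -> handler applied to any object
BY_CLASS = {}    # (class name, gesture name) -> handler

def on(cls, gesture):
    """Register a handler; cls=None registers a universal gesture."""
    def register(fn):
        if cls is None:
            UNIVERSAL[gesture] = fn
        else:
            BY_CLASS[(cls, gesture)] = fn
        return fn
    return register

def dispatch(obj, gesture):
    """Class-specific handlers win; otherwise fall back to the
    universal set; a gesture with no handler is simply ignored."""
    fn = BY_CLASS.get((type(obj).__name__, gesture)) or UNIVERSAL.get(gesture)
    return fn(obj) if fn else None

class Chart: pass
class Note: pass

@on("Chart", "circle")            # class-dependent interpretation
def zoom(obj): return "zoom " + type(obj).__name__

@on(None, "pigtail")              # universal gesture, any object
def delete(obj): return "delete " + type(obj).__name__
```

Here `dispatch(Chart(), "circle")` invokes the chart-specific handler, while `dispatch(Note(), "pigtail")` falls through to the universal one, matching the behavior the text describes.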

The overall display consists of a collection of visual objects, which represent objects in the application domain (documents, charts, drawings, timelines, etc.).

The mural object manager differs from conventional display managers, which are based on application-owned windows, bounded by a frame and Z-stacked. The actions that can be done with an object are dependent on its type.

A special class of visual objects are visual instruments, which provide affordances for operating on other objects.

Visual instruments provide functionality including that of the widgets of conventional GUIs: menus, window borders, buttons, palettes, object handles, etc. Some of them are general (as are menus, buttons, etc. in GUIs), while others are built for a specialized kind of object and action.


The simplest instrument is the action bar, which provides a simple way to initiate an action (corresponding to a "click"). The action bar can be oriented in any direction and is activated by sweeping the pointer across it (Fig. 1).

Figure 1. Sweep over an action bar. Figure 2. Top-level menu.
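Detecting that a sweep crossed an action bar reduces to a segment-intersection test between each step of the stroke and the bar's line segment. The sketch below uses the standard cross-product orientation test; the function names are ours.

```python
def _ccw(a, b, c):
    # Sign of the cross product (b - a) x (c - a):
    # >0 counterclockwise, <0 clockwise, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def bar_swept(stroke, bar):
    """A bar is activated when any step of the stroke crosses it.

    `stroke` is a list of sampled beam positions; `bar` is the pair
    of endpoints of the action bar's line segment.
    """
    q1, q2 = bar
    return any(segments_cross(a, b, q1, q2)
               for a, b in zip(stroke, stroke[1:]))
```

Since a bar may be oriented in any direction, testing against its actual endpoints (rather than an axis-aligned box) keeps the activation region exactly as drawn.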

A marking menu (or pie menu [4,6]) is made up of a collection of action bars (Fig. 2). It is brought up by making the universal menu gesture over some object (possibly the background). Selection is accomplished by sweeping over one of the bars.
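Because the bars of a marking menu radiate from a center, selection can equivalently be read off the direction of the sweep. A minimal sketch, assuming a layout of equal angular slices with item 0 centered on "east" and numbering counterclockwise (an illustrative layout, not necessarily the paper's):

```python
import math

def marking_menu_pick(center, stroke_end, n_items):
    """Pick a pie-menu slice from the direction of a sweep.

    Assumes `n_items` equal slices, slice 0 centered on the
    positive x axis, numbered counterclockwise.
    """
    dx = stroke_end[0] - center[0]
    dy = stroke_end[1] - center[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    slice_width = 2 * math.pi / n_items
    # Shift by half a slice so each item's sector is centered on it.
    return int(((angle + slice_width / 2) % (2 * math.pi)) // slice_width)
```

Direction-based selection is what lets experienced marking-menu users make the selection stroke without waiting for the menu to appear, which fits the paper's "easy to learn, efficient for experts" goal.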

A more specialized visual instrument is the timeline animator, used for playing back time-related data, such as project schedules, video, system histories (for undo), etc.

Figure 3. Single-stepping a timeline animator.

A stroke across the animator sets it moving in the indicated direction if it is still, or stops it if it is moving in the opposite direction. A jab (a back-and-forth motion) across the bar single-steps in the indicated direction. A properties gesture on the indicator lets the position be entered symbolically.
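The stroke/jab behavior just described is a small state machine. The toy sketch below is our reading of the text, not the authors' code; class and method names are ours.

```python
class TimelineAnimator:
    """Toy state machine for the stroke/jab behavior of the
    timeline animator.  direction is -1, 0, or +1 (0 = stopped)."""

    def __init__(self):
        self.direction = 0
        self.position = 0

    def stroke(self, direction):
        # An opposing stroke stops playback; a stroke while stopped
        # starts it; a stroke in the current direction is a no-op.
        if self.direction != 0 and self.direction == -direction:
            self.direction = 0
        elif self.direction == 0:
            self.direction = direction

    def jab(self, direction):
        # A back-and-forth jab single-steps in its direction.
        self.position += direction

    def tick(self):
        # Called once per playback frame while animating.
        self.position += self.direction
```

Note that both gestures are distinguishable from the same beam on/off stroke data: a sweep has no reversal, a jab is exactly one reversal.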


The design currently includes mockups (non-functioning visual representations) of about 20 coordinated visual instruments. The object-oriented layered graphics system (Millefeuille) has been demonstrated in other applications. Implementation of the instruments using Millefeuille is underway at the time of submission. Full implementation and initial testing are anticipated by summer.


ACKNOWLEDGMENTS
We thank Pat Hanrahan for his ideas and support, Cindy Chen and James Davis for the pointer tracking, Greg Humphreys and Diane Tang for graphics support, Richard Salvator and Brad Johanson for integration, and Kathleen Liston for application-domain and interaction design.


REFERENCES
1. Apple Computer, MessagePad Handbook, 1993.

2. Carr, R. and Shafer, D. The Power of Penpoint, Addison-Wesley, 1991.

3. Goldberg, D., and Richardson, C., Touch-typing with a stylus, Proc. InterCHI '93, ACM, 80-87.

4. Hopkins, D., The Design and Implementation of Pie Menus, Dr. Dobb's Journal, December 1991, 16-26.

5. Johnson, J., et al., The Xerox Star: A Retrospective, IEEE Computer 22(9), September 1989, 11-26.

6. Kurtenbach, G., and Buxton, W., Issues in combining marking and direct manipulation techniques, Proc. UIST '91, ACM, 137-144.

7. See <>