GUIDe: Gaze-enhanced User Interface Design

GUIDe Project Status

Please Note: The GUIDe research project at Stanford has concluded and is no longer active. There are no research positions available for working on this project. The publications section of this website provides all publicly released information on the project. You may also check out the GUIDe playlist on YouTube, which contains all the videos related to this project, including an hour-long talk that covers all aspects of the research.

EyePassword

Our paper on Reducing Shoulder-surfing Using Gaze-based Password Entry, published at SOUPS (Symposium on Usable Privacy and Security) 2007, is now available in the publications section.

EyePoint video demonstration

The following video gives a quick five-minute overview of our work on a practical solution for pointing and selection using gaze and keyboard. Please note that our objective is not to replace the mouse, as several articles on the Web have suggested. Our objective is to provide an effective interaction technique that makes eye gaze a viable alternative (like the trackpad, trackball, trackpoint, or other pointing techniques) for everyday pointing and selection tasks, such as surfing the web, depending on the user's abilities, tasks, and preferences. For a more detailed overview of this work and of related work in the area of gaze-based scrolling, please refer to the papers listed in the publications section.

Research Overview

The eyes are a rich source of contextual information in our everyday lives. We look to the eyes to determine the who, what, and where of our daily communication. A user's gaze is postulated to be the best proxy for attention or intention. Using eye-gaze information as a form of input can give a computer system more contextual information about the user's task, which in turn can be leveraged to design interfaces that are more intuitive and intelligent. Eye-gaze tracking as a form of input was developed primarily for users who are unable to make normal use of a keyboard and pointing device. However, with the increasing accuracy and decreasing cost of eye-gaze tracking systems, it will soon be practical for able-bodied users to use gaze as a form of input in addition to the keyboard and mouse – provided the resulting interaction is an improvement over current techniques. This research explores how gaze information can be used effectively as an augmented input alongside traditional input devices.

The focus of this research is to augment rather than replace existing interaction techniques. Adding gaze information provides viable alternatives to traditional interaction techniques, which users may prefer depending on their abilities, tasks, and preferences. This work presents a series of novel prototypes that explore the use of gaze as an augmented input for everyday computing tasks. In particular, it explores gaze-based input for pointing and selection, application switching, password entry, scrolling, zooming, and document navigation. It presents the results of user experiments comparing the gaze-augmented interaction techniques with traditional mechanisms, which show that the resulting interaction is either comparable to or an improvement over existing input methods. These results show that it is indeed possible to devise novel interaction techniques that use gaze as a form of input without overloading the visual channel and while minimizing false activations.

We also discuss some of the problems and challenges of using gaze information as a form of input, and propose solutions, discovered over the course of the research, that can be used to mitigate these issues. Finally, we present an analysis of technology and economic trends which suggests that eye tracking systems can be produced at a low enough cost that, combined with the right interaction techniques, they would create the environment necessary for gaze-augmented input devices to reach the mass market.

The eyes are one of the most expressive features of the human body for non-verbal, implicit communication. Interaction techniques that use gaze information to provide additional context to computing systems have the potential to improve traditional forms of human-computer interaction. This research provides the first steps in that direction.


Copyright © 2006-2007 Manu Kumar, Stanford University
All rights reserved.