Eyepatch: Prototyping Camera-based Interaction through Examples

Project Abstract

Eyepatch is a rapid prototyping tool designed to simplify the development of computer vision applications. It allows novice programmers to extract useful data from live video and stream that data to other rapid prototyping tools, such as Adobe Flash, d.tools, and Microsoft Visual Basic. Eyepatch has two basic modes:

  • A training mode, where users can train different types of classifiers to recognize the object, region, or parameter that they are interested in.
  • A composition mode, where users compose classifiers in various ways and specify where their output should go.

Eyepatch allows designers to create, test, and refine their own customized classifiers, without writing any specialized code. Its diversity of classification strategies makes it adaptable to a wide variety of applications, and the fluidity of its classifier training interface illustrates how interactive machine learning can allow designers to tailor a recognition algorithm to their application without any specialized knowledge of computer vision.
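To make the example-based workflow concrete, here is a toy sketch in Python. This is not Eyepatch's actual implementation (Eyepatch is a Windows GUI application); the `histogram`, `train`, and `classify` functions below are hypothetical names illustrating the general idea of training a recognizer from user-labeled example patches and then classifying new input, with no computer-vision expertise required of the user.

```python
# Toy illustration of example-based classifier training (NOT Eyepatch's
# actual code): the user labels a few example patches, a simple
# color-histogram model is trained, and new patches are classified by
# nearest-centroid matching.

def histogram(patch, bins=4):
    """3-D color histogram of an RGB patch (a list of (r, g, b) pixels)."""
    counts = [0] * (bins ** 3)
    for r, g, b in patch:
        i = ((r * bins // 256) * bins * bins
             + (g * bins // 256) * bins
             + (b * bins // 256))
        counts[i] += 1
    total = float(len(patch))
    return [c / total for c in counts]

def train(examples):
    """examples: list of (label, patch) pairs supplied by the user.
    Returns a per-label mean histogram (the 'classifier')."""
    sums, counts = {}, {}
    for label, patch in examples:
        h = histogram(patch)
        if label not in sums:
            sums[label] = [0.0] * len(h)
            counts[label] = 0
        sums[label] = [a + b for a, b in zip(sums[label], h)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(model, patch):
    """Label a new patch by its nearest centroid (squared distance)."""
    h = histogram(patch)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(h, centroid))
    return min(model, key=lambda lab: dist(model[lab]))

# Usage: two labeled example patches, then classify an unseen one.
red = [(250, 10, 10)] * 20
green = [(10, 250, 10)] * 20
model = train([("red", red), ("green", green)])
print(classify(model, [(240, 20, 20)] * 20))  # prints "red"
```

In Eyepatch itself, a trained classifier's output (such as a recognized object's position) would then be streamed to an external tool like Flash, rather than printed.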


Publications

Maynes-Aminzade, D., T. Winograd, and T. Igarashi. Eyepatch: Prototyping Camera-based Interaction through Examples. In Proceedings of UIST 2007: ACM Symposium on User Interface Software and Technology, 2007.


Media

Video:

UIST Eyepatch Overview (WMV, 24MB)


Software

Eyepatch Installer for Windows (25 MB)
Report an Eyepatch bug


People

Dan Maynes-Aminzade
Terry Winograd


Contact

Dan Maynes-Aminzade (monzy at stanford dot edu)
