Looking at Looking at Looking


Golan Levin, Carnegie Mellon University

Seminar on People, Computers, and Design
Stanford University, March 7, 2008

This talk will present a survey of Golan Levin's personal research into the "medium of response", with consideration given to the conditions that enable people to experience sustained creative feedback with reactive systems; to the potential for audiovisual abstraction to connect viewers to realities beyond language; and more generally, to information visualization as a mode of art practice. The talk concludes with a presentation of Levin's most recent attempts to create engrossing and uncanny interactions structured by gaze: by endowing responsive artworks with new perceptive capacities — the ability to know where we are looking — and new expressive means, through simulated eyes that can return and meet our own.

Golan Levin's work combines equal measures of the whimsical, the provocative, and the sublime in an exploration of abstract communication and interactivity. Through performances, digital artifacts, and responsive environments, Levin applies creative twists to digital technologies that highlight our relationship with machines, make visible our ways of interacting with each other, and explore the potential for non-verbal communication in cybernetic systems. He is best known for performances and installation works which generate real-time visualizations of their participants' speech and gestures, and for interactive information visualizations which offer new perspectives onto millions of online communications. His current projects employ interactive robotics and machine vision to explore the theme of gaze as a primary new mode for human-machine communication. Levin is Associate Professor of Electronic Time-Based Art at Carnegie Mellon University, where he also holds Courtesy Appointments in the School of Computer Science and the School of Design.

View this talk online at CS547 on Stanford OnLine or using this video link.

Titles and abstracts for previous years are available by year and by speaker.