How to Change the World and be Home for Dinner:
    The art, science, and myths of videoconferencing

Milton Chen and Erika Chuang, VSee Lab

Seminar on People, Computers, and Design
Stanford University, March 11, 2005

Existing videoconferencing systems often distort conversational cues in ways that cause negative attributes to be ascribed to the person rather than the medium. For example, a response delayed by video transmission may cause the person to be viewed as slow. Lip movements that fall out of sync with speech due to video compression may cause the person to be viewed as less credible. And difficulty making eye contact due to camera placement may cause the person to be viewed as unfriendly. The power of "bad" video to do "harm" is one of the fundamental reasons that videoconferencing is still not ubiquitous despite its introduction by AT&T in 1927.

In this talk, we will describe experiments conducted at Stanford University on the characteristics of "bad" video and methods to overcome these factors. We will introduce VSee, the videoconferencing software based on these discoveries. Next, we will describe the United Nations' deployment of VSee in Indonesia for tsunami relief, the Department of Defense's deployment of VSee in Iraq and Afghanistan, and the Telework Consortium's remote office experiments. Lastly, we will challenge some common myths of videoconferencing.

Dr. Milton Chen is the Chief Technology Officer of VSee Lab. Milton's pioneering research at Stanford University has shown why videoconferencing has failed to become ubiquitous despite billions in investment since 1927. His unique insight into how to make video communication an everyday experience has led to more than 30 invited talks at major research institutions around the world. Milton received a bachelor's degree in Computer Science from UC Berkeley and a PhD in Electrical Engineering from Stanford University.

Dr. Erika Chuang is a research scientist at VSee Lab. Her research interests include video-mediated communication, computer vision, computer graphics, and machine learning, in particular the study of human facial expressions and body language and their application in computer animation, human-computer interaction, and multimedia. Erika received her BS/MEng degrees in EECS from MIT and a PhD from Stanford University. Prior to joining VSee, she worked on acquiring 3D facial models for virtual conferencing at HP and for special effects at Disney Animation Studio.


View this talk online at CS547 on Stanford Online.
