Technologies for Telecollaboration: Challenges and Opportunities in Image Capture, Rendering and Display

Henry Fuchs, University of North Carolina
fuchs@cs.unc.edu

Seminar on People, Computers, and Design
Stanford University, May 9, 1997

Dreams of building telecollaboration environments with a strong sense of shared presence are inspiring groups around the world to develop new techniques for scene acquisition, image generation and image display.

Since no ideal solutions to any of the subsystems currently exist, one needs to assess carefully the needs of the chosen applications and the capabilities of the available system components. Our own target applications are currently small-group collaborations on mechanical design, and later, remote surgical and medical consultation.

The realization of a fantasy system, one that gives the impression of being in the same room with one's collaborators (and patients), is a long way off. However, early approximations of such a system may soon be realized: a) multi-camera subsystems may acquire 3D image descriptions of each site in real time; b) image-based rendering hardware may rapidly composite multiple properly warped images for each participant's point of view; c) multi-tiled large-area projectors and/or miniature displays within eyeglasses may display stereo images properly particularized for each viewer's precise position within the shared environment.

Opportunities for innovation abound: predictive tracking of head pose combined with image warping hardware may facilitate a many-fold increase in image-generation rates by interpolation of intermediate frames from pairs of less-frequently created reference images; miniature video see-through head-mounted displays may enable effective merging of remote and synthetic objects with the wearer's local visual environment; image-compositing architectures may allow near-linear growth in image-generation capability.
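The frame-interpolation idea above can be illustrated with a minimal sketch in Python with NumPy. The pinhole disparity model (`focal * t_x / depth`), the hole marker `-1`, and the rule of filling holes in one warp with pixels from the other are illustrative assumptions for exposition, not the actual warping hardware or compositing method discussed in the talk.

```python
import numpy as np

def warp_reference(image, depth, t_x, focal=50.0):
    """Forward-warp a reference frame to a camera translated by t_x.
    Each pixel shifts horizontally by disparity = focal * t_x / depth
    (a simplified planar 3D warp); destinations with no source pixel
    remain holes, marked -1."""
    h, w = image.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            shift = int(round(focal * t_x / depth[y, x]))
            nx = x + shift
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

def interpolate_frame(ref_a, depth_a, ref_b, depth_b, t_a, t_b):
    """Approximate an intermediate frame by warping both reference
    frames to the in-between viewpoint, then filling holes left by
    the first warp with pixels from the second."""
    warp_a = warp_reference(ref_a, depth_a, t_a)
    warp_b = warp_reference(ref_b, depth_b, t_b)
    return np.where(warp_a >= 0, warp_a, warp_b)
```

Because both warps are cheap per-pixel operations, intermediate frames can be produced far faster than fully re-rendered ones, which is the source of the many-fold rate increase claimed above.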

Techniques developed to comprehend and interact with remote environments may also be useful for comprehending and interacting with nearby but difficult-to-access environments, such as a patient's abdominal cavity.

We will illustrate some of these ideas with results from early experiments with 3D scene acquisition, image-based rendering, post-rendering warp, and with video see-through head-mounted displays.

Henry Fuchs (PhD, Utah, 1975) is Federico Gil Professor of Computer Science and Adjunct Professor of Radiation Oncology at UNC-Chapel Hill. He founded the Pixel-Planes / PixelFlow high-performance image-generation team, results from which have been licensed by Hewlett-Packard, Division, and Ivex. He has led teams that have developed innovative displays ranging from "true 3D" vari-focal mirror displays to video see-through "augmented reality" head-mounted stereo displays. Among his awards are the 1992 ACM SIGGRAPH Computer Graphics Achievement Award, the 1992 NCGA Academic Award, and the 1997 Medicine-Meets-Virtual-Reality Satava Award. Fuchs was recently elected to the National Academy of Engineering.
