3DDI: 3D Direct interaction
John Canny, Computer Science Division, UC Berkeley
Seminar on People, Computers, and Design
Stanford University, March 13, 1998
The 3DDI project is about direct interaction with simulated or remote 3D worlds. Users interact with the world without gloves or motion-capture sensors and view it stereoscopically without glasses. 3D interaction preserves the spatial relationships and body-language cues among a group of people in ways that 2D video cannot. It also supports a strong form of direct interaction in which users see no interface at all, only 3D objects in the world.
The project involves three campuses and spans technologies from real-time 3D capture through physical modeling to rendering on autostereoscopic and volumetric displays. This talk will summarize the project and give details of three of its components: real-time depth capture, physical behavior prototyping, and volumetric display. 3DDI is one of the seeds of a much larger effort at Berkeley toward Human-Centered Computing (HCC). I will talk about one other seed, strong telepresence through "PRoPs" or robot avatars, and then say something about what the rest of the HCC effort will comprise.
John Canny is a professor in Computer Science at UC Berkeley. He came to Berkeley from MIT in 1987 after completing his thesis on robot motion planning, which won the ACM Doctoral Dissertation Award. While at Berkeley, he received a Packard Foundation Fellowship and a Presidential Young Investigator (PYI) award. His robotics work has covered path planning, grasping, and the co-creation (with Ken Goldberg) of RISC robotics, a fusion of algorithmic intelligence and traditional manufacturing hardware. He has worked in applied computational geometry and, with Brian Mirtich, developed a physically-based simulator called IMPULSE. He developed inexpensive, ubiquitous telepresence robots called "PRoPs", which evolved from airborne to terrestrial locomotion. Two years ago, he started the 3DDI project on direct 3D interaction with researchers from Berkeley, MIT, and UCSF. 3DDI includes the balanced co-development of simulation and rendering algorithms with radically new hardware for acquiring and displaying "into the world".