FALL 2004
CS376: Research Topics in HCI / Course Projects

Tuesday (11:00AM - 12:15PM) & Thursday (10:50AM - 12:00PM), Gates 392

Project Format

In this course, you will motivate (through fieldwork), prototype, and evaluate a novel user interface in a quarter-long research project. Projects will be in pairs.

Project Proposal

Due by noon on Monday, 10/11, via email to cs376@cs

This proposal is not set in stone. It should briefly outline who your target users are, what the user interface needs to do (not what it looks like), and how you propose to evaluate it.

 

Project Pairs

User Interfaces for Teaching and Learning
Kristen Blair (kpilnerATstanford.edu) and Kevin Hartman (khartmanATstanford.edu)

Goal: To create a series of interfaces through which students can communicate and deliver information to a computer agent. The interfaces enable the student to teach the agent before it attempts to complete some sort of performance task. Examples include young children learning early literacy skills by teaching an agent about letters and phonemes, children teaching an agent called Moby strategies for hypothetico-deductive reasoning, and middle school students teaching an agent called Betty about interdependency in the context of a 3D game world. This work springs from Dan Schwartz's research projects on teachable agents, though no research has yet been done on interfaces geared toward human-agent teaching interactions.

Milestone #1 Word Document | Milestone #2 Word Document

 


Dynamic Speedometer
Taemie Kim (tammykimATstanford.edu) and Manu Kumar (sneakerATstanford.edu)

In this project we will try to change the behavior of automobile drivers (Persuasive Technology in the Automobile) so that they observe the speed limit more closely. The Dynamic Speedometer instruments the speedometer in an automobile to display, in real time, the speed limit currently in effect as part of the speedometer display. This relieves the driver of having to wait for a speed-limit sign on the road to determine the current speed limit. It is our expectation that making drivers constantly, yet subtly, aware of the speed limit while driving will have an impact on their behavior and hopefully reduce instances of speeding.

For this project we will prototype several different designs of the dynamic speedometer and then build/customize a driving simulator to include a display of the current speed limit on the speedometer. Subjects will be presented with either a conventional speedometer or a dynamic speedometer and will be assigned a cover task that requires them to drive through areas with varying speed limits.

We will measure how often the driver exceeds the speed limit, for how long, and by how much over the course of the test, with both a conventional speedometer and a dynamic speedometer.
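These three measures could be computed from simulator logs along the following lines. This is only an illustrative sketch under an assumed log format (timestamped samples of speed and the speed limit in effect); the function name and data layout are hypothetical, not part of the proposal.

```python
# Hypothetical post-processing of simulator logs: each sample is
# (time_s, speed_mph, limit_mph), recorded in chronological order.
def speeding_metrics(samples):
    episodes = 0      # number of distinct excursions above the limit
    total_time = 0.0  # total seconds spent over the limit
    max_over = 0.0    # largest amount over the limit, in mph
    speeding = False
    for i, (t, speed, limit) in enumerate(samples):
        over = speed - limit
        if over > 0:
            if not speeding:
                episodes += 1   # a new excursion begins
                speeding = True
            if i + 1 < len(samples):
                # attribute the interval until the next sample to speeding
                total_time += samples[i + 1][0] - t
            max_over = max(max_over, over)
        else:
            speeding = False
    return episodes, total_time, max_over
```

Running the same analysis on logs from both speedometer conditions would then give directly comparable numbers per subject.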

Milestone #1 Word Document | Milestone #2 Word Document

 

Making Sense of Modern Art
Sheila Vyas (svyasATstanford.edu) and Susie Wise (swiseATstanford.edu)

Sheila Vyas and Susie Wise propose to partner to develop an interface for museum learning that would allow for embodied and/or social experience of educational media that contextualizes works of art.

We know that the museum experience is a social one (Hein 1998, Woodruff et al. 2000, Leinhardt and Knutson 2004), but museums (especially art museums) tend to design educational media for stand-alone computer kiosks, which privilege a one-person, one-mouse configuration. We would like to build on smart-table and wearable-computing experiments to develop a multi-user, environmental interface.

Preliminarily, we are exploring large-scale room-based interfaces (iRoom possibilities), tangible interfaces, and mobile, vision-triggered interfaces (Susie is currently launching a head-camera project at the Cantor Art Center on campus).

Milestone #1 Word Document | Milestone #2 Word Document

 

Feasibility study of entering football/basketball/... analysis data using multiple modalities
Hartti Suomela (harttiATcsli.stanford.edu)

Opponent analysis (and analysis of one's own team) requires recording data on player and ball movement and on events on the field. This includes spatial data (field position) as well as attributes tied to that position, including player data (who did something) and event properties (jump shot, dribble, bounce pass, etc.). Spatial data can be entered either through video analysis (quite expensive) or by drawing, using, for example, a tablet PC or an Anoto pen. Attributes could be entered by dictation (the vocabulary is quite limited, but the level of background noise could pose a problem).
These two data streams could be synchronized to create quite detailed game data ready for analysis. The purpose of this project is to study whether drawing and dictating are feasible methods here (do they allow real-time record keeping?), used either by a single person or by two separate people.

Milestone #1 Word Document | Milestone #2 TXT Document

 

Social trends in mobiles…
Gerald Yu (gerald.yuATstanford.edu) and Nirav Mehta (niravbmATstanford.edu)

With the increasing prevalence of camera phones, we are already seeing a large number of photographs being exchanged among social groups like friends and family. Traditionally, mobile users seek out quick, casual entertainment experiences. Keeping this in mind, I am looking to research and develop entertainment mechanisms that leverage all three factors mentioned above: photographs, social interaction within a select group, and short, possibly interruptible spans of entertainment.
Some of the options I am considering (but am not limited to) are:

1. Know your friends!
Photographs taken by you and your social circle are all uploaded to a central image server. When any member of the group has free time, he or she can test his or her knowledge of any one other person in the group. The task is to identify the place, date, or subject of that friend's picture as closely as possible; the more pictures identified this way, the higher the score. The motivation of this game would be to keep up to date with your friends' lives even when you are geographically separated.

2. Cooperative annotation of pictures of a common trip by two or more members of the group on the cell phone.
Each player is sent a picture from the central image server where the photographs from the group's last outing are stored. The player annotates the picture he or she has received. The annotated picture is immediately sent to the other player and to the image server. Both players get instant gratification from seeing labelled pictures of their trip (I always liked seeing an annotated picture!). This could achieve the twin goals of completing the otherwise arduous task of annotating pictures and letting the players relive their memories.

Milestone #1 Word Document | Milestone #2 Word Document

 

Self-evaluating user interface toolkit
Bjoern Hartmann (bjoernATstanford.edu)

Modeling realistic usage scenarios for software systems can be challenging without the input of real-world users. However, for a variety of reasons, in-depth evaluation studies are rarely carried out.
A toolkit that automatically gathers statistical information about how deployed software is actually used in everyday situations could positively inform future development and significantly cut the cost of conducting evaluations. I envision a toolkit that periodically monitors a software's user interface state, generates descriptive statistics of UI states over time, and periodically communicates this information back to the software vendor. For example, frequency counts of menu command executions may help identify menu layout inefficiencies. Sub-sequences of commands may help reconstruct workflow patterns not anticipated by the designers. The toolkit could be modular, to "plug in" easily to an existing code project. By relying on partial data from many users, the computational overhead for each individual user can be minimized to avoid performance degradation. Privacy issues may be a concern, but informed consent and anonymous collection may mitigate them. (Inspiration taken from a presentation on self-reporting software by Prof. Aiken.)
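The menu-command frequency counting described above might look roughly like the following. This is a minimal sketch under assumed names (`UsageMonitor`, `log_command`), not the proposed toolkit itself: an application would call the logger from its menu dispatch code, and the report would be the statistic shipped back to the vendor.

```python
from collections import Counter

class UsageMonitor:
    """Hypothetical plug-in that tallies UI command usage."""

    def __init__(self):
        self.counts = Counter()   # frequency of each menu command
        self.sequence = []        # ordered history, for workflow mining

    def log_command(self, name):
        # Called by the host application whenever a menu command runs.
        self.counts[name] += 1
        self.sequence.append(name)

    def report(self, top=5):
        # Descriptive statistic: the most frequently used commands.
        return self.counts.most_common(top)

monitor = UsageMonitor()
for cmd in ["Open", "Copy", "Paste", "Copy", "Paste", "Save"]:
    monitor.log_command(cmd)
```

The recorded `sequence` is what would feed the sub-sequence (workflow) analysis; only aggregate counts need ever leave the user's machine.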

Milestone #1 PDF Document | Milestone #2 Word Document


Adaptive Messaging
Mike Brzozowski (zozoATstanford.edu) and John Hu (hujATstanford.edu)

Communications needs have evolved rapidly over the last decade, even over the last few years. Yet e-mail and IM have done little to fit in with a new generation of increasingly mobile, and busy, students. E-mail has changed little from the folders-and-mailboxes metaphor of the Sixties, and today people feel they need to create 2-3 IM accounts to manage their presence. With e-mail, IM, and text-message-enabled mobile phones, information overload seems a way of life. How do users tell what's really urgent and monitor several threads of discussion efficiently? How can smarter presence awareness/prediction help fit the social needs of mobile students? Similar work by Horvitz focused on more predictable office workers; we'll tackle college students' erratic schedules. This will be in conjunction with a cs229 project, seeking to apply machine learning to adapt to a specific user's social and communication patterns, in either the e-mail or IM spheres.

Milestone #1 Word Document | Milestone #2 Word Document

 

Network Security Visualizations
Doantam Phan (dphanATstanford.edu)

We will conduct a contextual inquiry of several network security administrators at Stanford to understand how they investigate an attack. That information will guide the creation of a prototype system that allows users to explore a set of flow data. Users will be able to drill down into the visualization by type: they will be able to break down byte flows by port, time, or other metrics suggested by the contextual inquiry.

Milestone #1 Word Document | Milestone #2 Word Document

 

Image-tagging system for digital photographers
Jessica Kuo (jrskuoATstanford.edu) and Allen Rabinovich (allenraATstanford.edu)

The lack of an easy method to tag digital images with relevant information is a major usability breakdown in the process of digital photography. Many existing tools that allow after-the-fact image tagging are tedious and slow, especially in high-volume photography, and do not appear to be widely used. The tagging that occurs at the time of image capture includes only numerical information about the image, and no relevant information about the content.

I propose creating a new system that will allow digital photographers to add a large amount of relevant information at or immediately after the moment a picture is taken. The system consists of hardware additions to a standard digital camera and a piece of software for post-processing the additional information.

The hardware additions to the digital camera include a GPS receiver, a digital compass, and a digital audio recorder. The GPS receiver provides location data, while the compass provides directional data. The digital audio recorder allows the user to record a few words about the image, either before or immediately after it is taken, into an associated audio file.

The post-processing software extracts the GPS and directional data and uses USGS databases to determine the name of the location at which the image was taken, and possibly the subject of the image (if, for example, the camera was pointed at a famous landmark.) It also extracts the associated audio file and uses voice-recognition software to convert the spoken words to text that is then appended to the image tag.
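The place-name step of that post-processing could be as simple as a nearest-neighbor lookup against a gazetteer. The sketch below stands in for the USGS database with a tiny hard-coded table; the table entries, function name, and flat-earth distance approximation are all illustrative assumptions, not the proposed implementation.

```python
import math

# Hypothetical stand-in for a USGS place-name lookup: given the camera's
# GPS fix, return the closest named feature from a small local table of
# (name, latitude, longitude) rows.
LANDMARKS = [
    ("Hoover Tower", 37.4277, -122.1667),
    ("Golden Gate Bridge", 37.8199, -122.4783),
]

def nearest_landmark(lat, lon):
    def dist(entry):
        _, feat_lat, feat_lon = entry
        # Euclidean distance in degrees: adequate at these small scales.
        return math.hypot(feat_lat - lat, feat_lon - lon)
    return min(LANDMARKS, key=dist)[0]
```

The compass heading could further narrow the result to features the lens was actually pointed at, rather than merely near.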

Milestone #1 Word Document | Milestone #2 TXT Document

 

A System for Collaborative Co-Located Information Organization
Brian Lee (baleeATcs.stanford.edu) and Matt Wright (mjwrightATstanford.edu)
The goal of this project is to enable a group of co-located users to efficiently collect and organize information, using an augmented environment such as the Stanford Interactive Room. An example scenario would involve a small group (roughly 3-5 people) bringing data from various sources (digital camera pictures, Word documents, Web pages, handwritten notes, etc.) into the iRoom and collaboratively browsing, analyzing, and filtering the material into a coherent presentation, e.g., a Web site or project report. The challenges in this project are two-fold: (1) given the technologies available (public displays, laptops, PDAs, other digital devices), designing a user experience that integrates them while taking advantage of the best features of each; and (2) studying and understanding how groups work in the new environment.

Milestone #1 Word Document | Milestone #2 Word Document

 

Distorting Maps for Navigation through Georeferenced Photographs on Mobile Devices
Dan Maynes-Aminzade (monzyATstanford.edu)
Several interfaces have been developed that allow users to browse through collections of georeferenced photographs using a map; clusters of dots on the map correspond to the locations where the photos were taken, and the user can scroll or zoom around the map to find the photos they are looking for. These interfaces work well on large, high-resolution displays, but do not adapt well to small screens (such as those on mobile devices or on the camera itself) because the photos are often widely dispersed and require a large map area to be displayed. The map display is also inefficient in that many areas of the map contain no photographs. I plan to develop a method for distorting the map based on the locations of photographs, giving greater prominence to areas with more photographs and decreasing the overall screen resolution required to display the map.
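One simple way to realize such a distortion, shown here as an illustrative sketch rather than the proposed algorithm, is to divide the map into strips and allocate screen width to each strip in proportion to its photo count, with a small floor so empty regions remain visible for context. The function name and the `floor` parameter are assumptions for illustration.

```python
# Allocate screen width to vertical map strips by photo density.
def allocate_widths(photo_counts, screen_width, floor=0.1):
    n = len(photo_counts)
    base = screen_width * floor / n        # guaranteed minimum per strip
    flexible = screen_width * (1 - floor)  # remainder, shared by density
    total = sum(photo_counts)
    return [base + flexible * count / total for count in photo_counts]

widths = allocate_widths([0, 8, 2], screen_width=100)
```

Applying the same allocation independently to rows and columns yields a grid-based distortion; photo-dense regions get most of the pixels while sparse regions shrink to thin bands.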

Milestone #1 Word Document | Milestone #2 Word Document

 

Collaborative Learning on the DiamondTouch
Anne Marie Piper (ampiper@stanford.edu) and Merrie Ringel (merrie@stanford.edu)
DiamondTouch tables have unique affordances for group work. The shared display provides an interactive workspace for collaboration and organization of virtual artifacts. This particular device would be a powerful tool when implemented in an educational context. I plan to explore how a tabletop interface with individual audio channels affects learning to solve problems in small groups. Students face the challenge of assimilating knowledge that is distributed among members. How could the affordances of a DiamondTouch table facilitate team problem solving tasks? How might this interactive workspace affect patterns of collaboration in small groups? What challenges do instructors face with small group exercises that this technology could alleviate?

For this project it is necessary to create a Java application that supplies learners with problem-solving tasks. Evaluation will examine patterns of collaboration and participation by group members. Qualitative data analysis will reveal user-centered design issues and inform the design of future CSCL technologies.

Milestone #1 Word Document | Milestone #2 Word Document

 

Project Starting Points