Todos
Have at least 3 users from your target user population complete your three tasks using the low-fidelity prototype. Do not use your other teammates, and avoid CS147 classmates.
Privacy: Please preserve your subjects' privacy in your writeup and in your photos of your interviews, or obtain their consent if you wish to post material that identifies your subjects.
Before Testing: Turn the 3 scenarios you developed last week into "task cards". For example, if the scenario is "Anna wants to call Bob", you can turn it into a task for your user by writing "Place a call to your friend Bob (650-555-1823)" on a 3x5-inch index card. First, practice by running through the testing procedure with one of your teammates as a pretend user (a.k.a. guinea pig). This is a good way to debug your test before you run it with real participants. Conduct each of the 3 user tests in the same manner. A good way to do this is to write a script for the greeter/facilitator (as described in the Rettig article) and have him or her follow it each time.
During Testing: The greeter/facilitator should explain to the user what you are trying to achieve. Make clear that you are testing the interface, not the participant. Introduce the participant to your "computer" (a teammate who is not the greeter) and your 2 or 3 observers. Remember, it can be quite intimidating to have 4 people watching your every action, so be nice to your participants!
Then, the greeter should demo the prototype to the user. Do not show the participant exactly how to do your tasks; just give him or her a general idea of how to use your system. You can give an example of something specific that is different from your 3 tasks.
After introducing the participant to your system, give him or her the first task card. One of your teammates (other than the greeter) should be the "computer". The computer will respond to the button presses and actions of the user as a real computer would. After the participant is done with the first task, present him or her with the second task card, and so on.
The two observers should take note of critical incidents (good and bad). If the user gets stuck, write this down in your notes, describing why he or she was stuck. If the user says, "Wow! Cool," do the same. Remember, try not to help the users unless they seem hopelessly stuck and throw their hands up in frustration.
Right after the session, you may give your user a questionnaire (or conduct a short interview) to find out whether he or she liked the system, which parts were confusing, and any suggestions he or she may have for your system.
After Testing: After the experiments, collect your observations and assign severity ratings to them. If you have worked in the software industry, you may have seen something similar in bug databases (e.g., "not a bug", "minor", "blocker"). To maintain consistency, we will use these ratings:
0 -- Not a usability problem.
1 -- Cosmetic problem.
2 -- Minor usability problem.
3 -- Major problem: should fix.
4 -- Usability catastrophe: must fix.
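If your team logs incidents in a shared spreadsheet or text file, a short script can tally them by severity so the worst problems surface first. Below is a minimal sketch; the incident entries and the `summarize` helper are invented for illustration, not part of the assignment:

```python
# Sketch: tally critical incidents by severity rating (0-4).
# The incident data below is hypothetical example data.
from collections import Counter

SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem",
    2: "Minor usability problem",
    3: "Major problem: should fix",
    4: "Usability catastrophe: must fix",
}

# Each incident is (description, severity).
incidents = [
    ("User could not find the call button on task 1", 4),
    ("Button label font hard to read", 1),
    ("User praised the home screen layout", 0),
    ("Back navigation unclear on task 2", 3),
]

def summarize(incidents):
    """Return (severity, label, count) tuples, worst severity first."""
    counts = Counter(sev for _, sev in incidents)
    return [(sev, SEVERITY[sev], counts.get(sev, 0))
            for sev in sorted(SEVERITY, reverse=True)]

for sev, label, n in summarize(incidents):
    print(f"{sev} ({label}): {n}")
```

Sorting worst-first makes the "must fix" items the first thing the team sees when writing the report.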
Report: Summarize your findings from your testing in a report. Include the following sections: