We are going to be using Google Analytics to run the A/B test you planned last week.
Google Analytics does not recruit users for you. You will recruit your own users.
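If you haven't instrumented your prototype yet, here is a minimal sketch of one way to assign each visitor to a variant and report it to Google Analytics. It assumes a GA4 property with gtag.js already loaded on the page; the event and parameter names (`ab_assignment`, `ab_conversion`, `experiment_variant`) and the sign-up conversion are placeholders you would swap for your own.

```typescript
// Minimal sketch, assuming a GA4 property with gtag.js already loaded on the page.
// Event and parameter names below are hypothetical placeholders.
declare function gtag(...args: unknown[]): void;

// Randomly assign each visitor to a variant and remember it across visits.
function getVariant(): "A" | "B" {
  const stored = localStorage.getItem("ab_variant");
  if (stored === "A" || stored === "B") return stored;
  const variant = Math.random() < 0.5 ? "A" : "B";
  localStorage.setItem("ab_variant", variant);
  return variant;
}

const variant = getVariant();

// Report the assignment so pageviews and events can be segmented by variant.
gtag("event", "ab_assignment", { experiment_variant: variant });

// Call this when the user completes the action you are measuring (e.g. sign-up).
function reportConversion(): void {
  gtag("event", "ab_conversion", { experiment_variant: variant });
}
```

Depending on how your property is configured, you may need to register `experiment_variant` as a custom dimension in GA before it shows up in reports.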
Since you will be statistically analyzing the results, recruit at least 20 users. Put another way, each person on your team should recruit at least 7 people: a couple of dorm-mates and a few family members, and you're basically there. You can also recruit through social media like Facebook and Twitter. You can even recruit other people in class! Make sure to launch your test several days before the deadline. How you schedule your analysis around your implementation plan is up to you; do what's best for your app and your group. Here are some ideas from Optimizely.
Collect the results from your A/B test, then present them and discuss your findings. Perform any necessary number-crunching and statistical analysis, and assess the strength of your data (for example, using a chi-squared test if applicable). Can you draw solid conclusions, or are additional tests needed? What changes would you make based on these results?
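If you want a quick sanity check on your numbers before writing them up, here is a small sketch of a Pearson chi-squared test for a 2×2 table (variant vs. conversion). The counts in the example are made up; substitute your own.

```typescript
// Sketch of a chi-squared test for a 2x2 contingency table:
// rows are variants A and B, columns are converted / not converted.
function chiSquared2x2(a: number, b: number, c: number, d: number): number {
  const n = a + b + c + d;
  const observed = [a, b, c, d];
  // Expected count for each cell = (row total * column total) / grand total.
  const expected = [
    ((a + b) * (a + c)) / n,
    ((a + b) * (b + d)) / n,
    ((c + d) * (a + c)) / n,
    ((c + d) * (b + d)) / n,
  ];
  // Sum of (observed - expected)^2 / expected over all four cells.
  return observed.reduce(
    (sum, o, i) => sum + (o - expected[i]) ** 2 / expected[i],
    0
  );
}

// Hypothetical example: 12 of 25 users converted on variant A, 6 of 23 on B.
const statistic = chiSquared2x2(12, 13, 6, 17);
// With 1 degree of freedom, a statistic above ~3.84 corresponds to p < 0.05.
console.log(statistic > 3.841 ? "likely significant" : "not significant");
```

With only ~20 users you should expect borderline results, which is exactly the kind of "strength of the data" assessment the rubric asks for.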
For all you designers out there, get ready. It's time to make it pretty. Concentrate on completing the changes based on the results of your in-person test. Attention to detail will serve you well. Make sure the app is optimized for the mobile interface. Same drill as the other weeks: keep updating your development plan.
Note: since we may grade your assignment up to a few days after submission, per the honor code, we expect the prototype URL to show the state of your prototype at the time of submission. You will very likely be updating your prototype after submission, but please do so in a separate version.
| Category | Nope | Adequacy | Proficiency | Mastery |
|---|---|---|---|---|
| Online Test Results (4 points) | 0: No conclusions listed. | 1: Only statistical analysis or only insights are given. | 3: Statistical analysis has errors, or conclusions are trivial. | 4: Statistical analysis is solid and includes a clear assessment of the strength of the data. Conclusions are clear and straightforward. |
| List of Potential Revisions (4 points) | 0: No revisions listed. | 1: The revisions are obvious or trivial. The major problems are not addressed by the revisions. | 3: Revisions are made without much consideration of the user experience. The revisions are chosen to cater to a lower level of technical difficulty. | 4: Several possible revisions are presented for different portions of the user interface. The revisions are creative and address the major problems. Changes are clearly justified and informed by the data gathered (both in person and online). |
| Fit and Finish (2 points) | 0: Prototype is incomplete or still has some bugs. | 1: Prototype is complete, but may lack some details or polish. | 2: Prototype is very polished and ready to be presented. | |
| Goals (3 points) | 0: No goals were met. | 1: Only a few goals were met. | 2: Most, but not all, of the goals were met. | 3: All goals were met. |
| Updated Development Plan (2 points) | 0: No updates or only minor changes to plan. | 1: Plan is mostly updated, but is lacking some detail or deadlines seem unreasonable. | 2: Plan is detailed and reflects progress, new tasks, and any changes to previous tasks. | |
| Outside the Box (1 point; awarded to up to 5% of submissions) | | | | Not only are several possible revisions presented for different portions of the user interface, your revisions are extremely creative and approach the problems in your interface in an effective and innovative way. An incredible amount of thought and consideration has gone into solving the underlying problems. |