Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design

Jeffrey Heer, Michael Bostock
CHI: ACM Conference on Human Factors in Computing Systems, 2010
Understanding perception is critical to effective visualization design. With its low cost and scalability, crowdsourcing presents an attractive option for evaluating the large design space of visualizations; however, it first requires validation. In this paper, we assess the viability of Amazon’s Mechanical Turk as a platform for graphical perception experiments. We replicate previous studies of spatial encoding and luminance contrast and compare our results. We also conduct new experiments on rectangular area perception (as in treemaps or cartograms) and on chart size and gridline spacing. Our results demonstrate that crowdsourced perception experiments are viable and contribute new insights for visualization design. Lastly, we report cost and performance data from our experiments and distill recommendations for the design of crowdsourced studies.