Abstract

Facial coding has become a common tool in media measurement, with large companies (e.g., Unilever) using it to test all of their new video ad content. Facial reactions capture an individual's in-the-moment response, and these data complement self-report measures. Two advances in affective computing have made measurement possible at scale: 1) computer vision algorithms automatically code sign- and message-based judgments from facial muscle movements, and 2) video data are collected by recording responses in everyday environments via the viewer's own webcam over the Internet. We present results of online facial coding studies of video ads, movie trailers, political content, and long-form TV shows, and we explain how these data can be used in market research. Although facial behavior can now be measured in a scalable and quantifiable way, interpreting these data remains challenging without baselines and comparative measures. Over the past four years we have collected and coded over two million responses to everyday media content. This large dataset allows us to calculate reliable normative distributions of responses across different media types. We present these data and argue that such norms provide a context within which to interpret facial responses more accurately.
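To make the idea of norm-based interpretation concrete, the sketch below shows one way a raw facial metric could be placed in context against a normative distribution. This is a minimal Python illustration, not the authors' implementation: the metric (peak smile probability), the function name, and the sample values are all invented for this example.

    # Hypothetical illustration: express a new ad's facial metric as a
    # percentile rank within a normative sample for its media type.
    from bisect import bisect_left

    def percentile_rank(norm_scores, score):
        """Percentile of `score` within the sorted normative sample."""
        sorted_norm = sorted(norm_scores)
        return 100.0 * bisect_left(sorted_norm, score) / len(sorted_norm)

    # Toy normative sample of peak smile probabilities for video ads
    # (values are invented for illustration only).
    ad_norms = [0.05, 0.10, 0.12, 0.18, 0.22, 0.30, 0.35, 0.41, 0.55, 0.70]

    new_ad_peak_smile = 0.38
    print(f"Peak smile is at the "
          f"{percentile_rank(ad_norms, new_ad_peak_smile):.0f}th "
          f"percentile of the video-ad norm")

The same raw score would land at a different percentile against, say, a movie-trailer norm, which is the sense in which normative distributions give interpretive context.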

Authors

Daniel McDuff; Rana el Kaliouby


Publication also available on IEEE Xplore.