Dials are used in media testing to allow viewers to self-report their interest levels in real time while watching television shows and movies. In this paper, researchers at Affectiva, in collaboration with the MIT Media Lab, provide the first systematic comparison of self-report dial data and data obtained through automatic facial coding. Two television shows are studied in which viewers provided dial input while their facial expressions were analyzed using Affectiva's technology. The results show that facial actions are highly correlated with dial responses and could serve as a proxy for them while providing additional insights into the viewers' experience.

Abstract

Typical consumer media research requires the recruitment and coordination of hundreds of panelists and the use of relatively expensive equipment. In this work, we compare results from a legacy hardware dial mechanism for measuring media preference to those from automated facial analysis on two television programs, a sitcom and a drama series. We present an automated system for facial action detection as well as a continuous measure of valence. The results demonstrate that automated facial analysis provides comparable, and in some cases additional, insights on moment-to-moment affective response in a way that is unobtrusive, scalable, and practical. Specifically, highly significant correlations are found between the dial and facial expression data. For specific moments where the two methods disagree, the facial expression data provide additional traceable insights that cannot be obtained from dial data. Furthermore, these data can be obtained at a fraction of the cost: in this work, the facial expression panel is only about 5% of the sample size needed to obtain reliable dial data. The results have substantial implications for the future of media research and audience measurement.
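To make the comparison concrete, the kind of moment-to-moment correlation reported here can be illustrated with a Pearson correlation between two time-aligned traces: a panel-averaged dial position and a panel-averaged facial valence score. The sketch below uses synthetic data and hypothetical variable names; it is not the paper's actual analysis pipeline, only a minimal illustration of the comparison.

```python
# Illustrative sketch (not the paper's pipeline): comparing a dial trace
# to an automated facial-valence trace with a Pearson correlation.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-second panel averages (synthetic data):
dial = [50, 52, 55, 61, 58, 54, 60, 66, 63, 57]        # dial position, 0-100 scale
valence = [0.10, 0.15, 0.20, 0.40, 0.30, 0.20,
           0.35, 0.50, 0.45, 0.25]                     # facial valence, arbitrary units

r = pearson(dial, valence)
print(f"Pearson r = {r:.2f}")
```

In practice the two traces would first be resampled to a common time base and smoothed before correlating, since dial movements and facial responses can lag the on-screen content by different amounts.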