AFFECTIVA-MIT FACIAL EXPRESSION DATASET (AM-FED)
Daniel McDuff2, Rana el Kaliouby1,2, Thibaud Senechal1, May Amr1, Jeffrey Cohn, Rosalind Picard1,2
1 Affectiva, Waltham, MA 02452
2 MIT Media Lab, Cambridge, MA 02139
Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected “In-the-Wild”. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013.
WHAT IS THE AM-FED DATASET?
In March 2011, Affectiva and MIT collaborated to launch a website that captured naturalistic and spontaneous facial responses to three Super Bowl ads. An archived version of the site is available. Viewers could choose to allow access to their webcam and have their facial expressions recorded over the Internet as they watched the ads. Viewers were also given the option of sharing their face video with researchers outside of Affectiva and MIT. The AM-FED dataset is a collection of these videos, FACS coded so that researchers can train or test their algorithms on challenging real-world data.
WHAT DOES THE DATASET CONTAIN?
This dataset consists of 242 facial videos (168,359 frames) recorded in real-world conditions. The database is comprehensively labeled for the following:
- Frame-by-frame labels for the presence of: a) 10 symmetrical FACS action units; b) 4 asymmetric (unilateral) FACS action units; c) 2 head movements, smile, general expressiveness, and feature tracker failures; d) gender.
- The location of 22 automatically detected landmark points.
- Self-report responses of familiarity with, liking of, and desire to watch again for the stimuli videos.
- Baseline performance of detection algorithms on this dataset. We provide baseline results for smile and AU2 (outer eyebrow raise) detection using custom AU detection algorithms.
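As a rough illustration of how the frame-by-frame labels above might be consumed, the sketch below parses a per-video label file and computes the fraction of frames labeled with a smile. The column names ("Time", "Smile", "AU02", "AU04") and the CSV layout are assumptions for the example; the actual file format distributed with the dataset may differ.

```python
import csv
import io

# Hypothetical frame-by-frame label file for one video. The exact columns
# and encoding in the released AM-FED data may differ; this is only a sketch.
SAMPLE = """Time,Smile,AU02,AU04
0.00,0,0,0
0.04,1,0,0
0.08,1,1,0
0.12,0,1,0
"""

def smile_fraction(csv_text):
    """Return the fraction of frames labeled as containing a smile."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if not rows:
        return 0.0
    smiles = sum(int(float(r["Smile"]) > 0) for r in rows)
    return smiles / len(rows)

print(smile_fraction(SAMPLE))  # 2 of 4 frames are labeled Smile -> 0.5
```

Per-frame presence labels like these are also what a baseline detector would be scored against, e.g. by comparing its frame-level predictions to the "Smile" column.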
HOW CAN I ACCESS THE DATASET?
Please download and complete the End User License Agreement (EULA) and email an electronic copy to: firstname.lastname@example.org
You can expect to receive an email with download instructions for the dataset within 5 working days. This dataset is available for non-commercial research use.
McDuff, D., el Kaliouby, R., Senechal, T., Amr, M., Cohn, J., and Picard, R. Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected “In-the-Wild”. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2013.
The following publications use this data and also describe the data:
- McDuff, D., el Kaliouby, R., and Picard, R. Crowdsourcing Facial Responses to Online Videos. IEEE Transactions on Affective Computing, 2012.
- McDuff, D., el Kaliouby, R., and Picard, R. Crowdsourced Data Collection of Facial Responses. Proceedings of the 13th International Conference on Multimodal Interfaces, 2011.
- McDuff, D., el Kaliouby, R., Demirdjian, D., and Picard, R. Predicting Online Media Effectiveness Based on Smile Responses Gathered Over the Internet. Proceedings of the 10th International Conference on Automatic Face and Gesture Recognition, 2013.