Affectiva Automotive AI is the first in-cabin sensing AI that identifies, in real time and from face and voice, the complex and nuanced emotional and cognitive states of a vehicle’s occupants. This provides OEMs and Tier 1 suppliers with comprehensive people analytics, enabling them to build advanced driver monitoring systems as well as differentiated in-cabin experiences that span the autonomous vehicle continuum. Affectiva’s solution also allows developers of automated driving systems to improve their technology for use in robo-taxis and other highly automated vehicles in the emerging Automated Mobility sector.

How it Works

Using in-cabin cameras and microphones, Affectiva Automotive AI analyzes facial and vocal expressions to identify the emotions and reactions of the people in a vehicle. No facial or vocal data is sent to the cloud; everything is processed locally. Our algorithms are built using deep learning, computer vision, speech science, and massive amounts of real-world data collected from people driving or riding in cars.

Affectiva Automotive AI includes a subset of facial metrics from our Emotion SDK that are relevant for automotive use cases. These metrics are developed to work in in-cabin environments, supporting different camera positions and head angles. We have also added new vocal metrics.

Deep neural networks analyze the face at a pixel level to classify facial expressions and emotions, and they analyze acoustic-prosodic features (tone, tempo, loudness, pause patterns) to identify speech events.

Face

  • A deep learning face detector and tracker locates face(s) in raw data captured using optical (RGB or Near-IR) sensors
  • Our deep neural networks analyze the face at a pixel level to classify facial expressions and emotions (a simplified sketch of this pipeline follows)
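
As a rough illustration of the face pipeline above, here is a minimal C++ sketch. It assumes OpenCV as a stand-in for frame capture and face detection (Affectiva's own detector is deep-learning based), and a hypothetical classifyExpressions() placeholder where the SDK's deep neural networks would run; none of these names reflect the actual API.

    // Hedged sketch only: OpenCV stands in for frame capture and face detection,
    // and classifyExpressions() is a placeholder for the SDK's deep-network
    // classifier. None of these names reflect Affectiva's actual API.
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    struct ExpressionScores { float joy, anger, surprise, valence; };

    // Placeholder: in the real pipeline a deep neural network scores the face crop.
    ExpressionScores classifyExpressions(const cv::Mat& faceCrop) {
        return {0.f, 0.f, 0.f, 0.f};
    }

    int main() {
        cv::VideoCapture cam(0);                        // in-cabin RGB or Near-IR feed
        cv::CascadeClassifier faceDetector("haarcascade_frontalface_default.xml");

        cv::Mat frame, gray;
        while (cam.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            std::vector<cv::Rect> faces;
            faceDetector.detectMultiScale(gray, faces); // locate face(s) in the frame
            for (const cv::Rect& box : faces) {
                ExpressionScores s = classifyExpressions(frame(box));
                std::cout << "joy=" << s.joy << " anger=" << s.anger
                          << " surprise=" << s.surprise << "\n";
            }
        }
        return 0;
    }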

Speech

  • Our voice activity detector identifies when a person starts speaking
  • Our AI starts analyzing after the first 1.2 seconds of speech, sending raw audio to the deep learning network
  • The network makes a prediction on the likelihood of an emotion or speech event being present (see the sketch below)
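
The speech pipeline can be pictured with a similarly hedged sketch. The 16 kHz sample rate, 10 ms frame size, and the readAudioFrame(), isSpeech(), and predictSpeechEvents() functions are all assumptions introduced for illustration; only the 1.2-second onset rule comes from the description above.

    // Hedged sketch only: the sample rate, frame size, and all function names
    // (readAudioFrame, isSpeech, predictSpeechEvents) are assumptions for
    // illustration, not Affectiva's actual API.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    constexpr int   kSampleRate   = 16000;               // assumed microphone sample rate
    constexpr float kMinSpeechSec = 1.2f;                 // analysis starts after 1.2 s of speech
    constexpr int   kMinSamples   = int(kSampleRate * kMinSpeechSec);

    struct SpeechEventScores { float anger, laughter, arousal; };

    // Stub audio source: constant-amplitude 10 ms frames, then an empty frame to stop.
    std::vector<int16_t> readAudioFrame() {
        static int calls = 0;
        if (++calls > 300) return {};                     // ~3 s of audio, then end of stream
        return std::vector<int16_t>(kSampleRate / 100, 3000);
    }

    // Stub voice activity detector: a crude energy gate.
    bool isSpeech(const std::vector<int16_t>& frame) {
        long long energy = 0;
        for (int16_t s : frame) energy += (long long)s * (long long)s;
        return !frame.empty() && energy / (long long)frame.size() > 1000000;
    }

    // Stub for the deep learning network that consumes raw audio.
    SpeechEventScores predictSpeechEvents(const std::vector<int16_t>& rawAudio) {
        return {0.f, 0.f, 0.f};
    }

    int main() {
        std::vector<int16_t> speechBuffer;
        while (true) {
            std::vector<int16_t> frame = readAudioFrame();
            if (frame.empty()) break;                     // end of stream
            if (!isSpeech(frame)) { speechBuffer.clear(); continue; }  // wait for speech onset

            speechBuffer.insert(speechBuffer.end(), frame.begin(), frame.end());
            if (speechBuffer.size() >= (size_t)kMinSamples) {
                // Once 1.2 s of speech has accumulated, raw audio goes to the network.
                SpeechEventScores s = predictSpeechEvents(speechBuffer);
                std::cout << "anger=" << s.anger << " laughter=" << s.laughter
                          << " arousal=" << s.arousal << "\n";
                speechBuffer.clear();                     // simplification: analyze in chunks
            }
        }
        return 0;
    }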

Metrics in Affectiva Automotive AI

  • Tracking of all in-cabin occupants
  • Three facial emotions: Joy, Anger, and Surprise
  • Face-based valence: overall positivity or negativity
  • Four facial markers for drowsiness: Eye Closure, Yawning, Blink, and Blink Rate
  • Head pose estimation: Head Pitch, Head Yaw, Head Roll
  • Eight facial expressions: Smile, Eye Widen, Brow Raise, Brow Furrow, Cheek Raise, Mouth Open, Upper Lip Raise, and Nose Wrinkle
  • Two vocal emotions: Anger and Laughter
  • Vocal expression of arousal: the degree of alertness, excitement, or engagement (a data-structure sketch of these metrics follows the list)
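
For illustration, the metrics above could be grouped into a single per-occupant record along these lines; the field names and the float-score representation are assumptions, not Affectiva's documented data model.

    // Hypothetical grouping of the metrics listed above into one per-occupant record.
    // Field names and the assumption that each metric is a float score are
    // illustrative only, not Affectiva's documented data model.
    struct OccupantMetrics {
        int   occupantId;                                // tracking of all in-cabin occupants
        // Facial emotions and valence
        float joy, anger, surprise;                      // three facial emotions
        float valence;                                   // overall positivity or negativity
        // Facial markers for drowsiness
        float eyeClosure, yawning, blink, blinkRate;
        // Head pose estimation (degrees)
        float headPitch, headYaw, headRoll;
        // Facial expressions
        float smile, eyeWiden, browRaise, browFurrow;
        float cheekRaise, mouthOpen, upperLipRaise, noseWrinkle;
        // Vocal metrics
        float vocalAnger, vocalLaughter;                 // two vocal emotions
        float vocalArousal;                              // vocal expression of arousal
    };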

Automotive Data

To develop metrics that provide a deep understanding of the state of occupants in a car, we need large amounts of real-world data to fuel our deep learning-based algorithms. To date, Affectiva has collected 6.5 million face videos in 87 different countries. This dataset of crowdsourced, spontaneous emotion, gathered in people’s homes, on their phones, and in their cars, represents a broad cross-section of age groups, ethnicities, and genders. This dataset is the foundation of our Emotion AI.

Using this foundational dataset and the latest advances in transfer learning, Affectiva Automotive AI learned how to detect facial and vocal expressions of emotion in the wild.

Examples of real-world driver data collected by Affectiva

To enable Affectiva Automotive AI to understand cognitive state in the cabin, we augment its learning using our proprietary automotive dataset. Affectiva recruits drivers around the world to record their daily commutes over the course of several weeks, which allows us to rapidly build our own real-world automotive data corpus. In addition, we collect simulated data in our own lab to help bootstrap our algorithms.

How does it integrate into the cabin?

Affectiva Automotive AI is a C++ software development kit (SDK) that runs in real time on device and in embedded systems. It supports RGB and Near-IR camera feeds and various camera positions.  The solution runs on ARM64 and Intel x86_64 CPU architectures.
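
A typical integration might look roughly like the following sketch: camera frames are fed to an on-device detector object, and results come back through a listener callback. The AutomotiveDetector class, setListener(), and processFrame() are hypothetical stand-ins, not the SDK's documented interface; OpenCV is used only to source frames from a camera.

    // Hedged integration outline: the class and method names (AutomotiveDetector,
    // setListener, processFrame) are illustrative assumptions, not the SDK's
    // documented interface. OpenCV is used only to source frames from a camera.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    // Stand-in for a detector object the SDK might expose.
    class AutomotiveDetector {
    public:
        using Callback = void (*)(int occupantId, float drowsiness, float anger);
        void setListener(Callback cb) { cb_ = cb; }
        void processFrame(const cv::Mat& frame, double timestampSec) {
            (void)frame; (void)timestampSec;
            if (cb_) cb_(0, 0.f, 0.f);                   // stub: the real SDK runs its networks here
        }
    private:
        Callback cb_ = nullptr;
    };

    int main() {
        AutomotiveDetector detector;
        detector.setListener([](int id, float drowsiness, float anger) {
            std::cout << "occupant " << id << " drowsiness=" << drowsiness
                      << " anger=" << anger << "\n";
        });

        cv::VideoCapture cam(0);                         // RGB or Near-IR in-cabin camera
        cv::Mat frame;
        double t = 0.0;
        while (cam.read(frame)) {
            detector.processFrame(frame, t);             // feed frames to the on-device SDK
            t += 1.0 / 30.0;                             // assumed 30 fps camera feed
        }
        return 0;
    }

In practice, the frame rate, threading model, and callback semantics would follow the SDK's actual documentation rather than the simplifications shown here.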

How is it being used?

The Affectiva Automotive AI solution is used by OEMs and Tier 1s for:

Driver State Monitoring

Using AI and deep learning, Affectiva Automotive AI takes driver state monitoring to the next level, analyzing both face and voice for levels of driver impairment caused by physical distraction, mental distraction from cognitive load or anger, drowsiness, and more. With this “people data”, the car’s infotainment system or ADAS can be designed to take appropriate action. In semi-autonomous vehicles, awareness of driver state also builds trust between people and machines, enabling an eyes-off-road experience and helping solve the “handoff” challenge.

Affectiva Automotive AI provides data to inform potential vehicle actions:

  • Monitor levels of driver fatigue and distraction to enable appropriate alerts and interventions that correct dangerous driving. An audio or display alert can instruct the driver to remain engaged; the seat belt can vibrate to jolt the driver to attention.
  • Monitor driver anger to enable interventions or route alternatives that avoid road rage. A virtual assistant can guide the driver to take a deep breath, the driver’s preferred soothing playlist can come on, or the GPS can suggest a stop along the way.
  • Address the handoff challenge between driver and car in semi-autonomous vehicles. When sensing driver fatigue, anger or distraction, the autonomous AI can determine whether the car must take over control; when the driver is alert and engaged again, the vehicle can pass control back (a simplified sketch of this kind of decision logic follows the list).
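
As referenced above, here is a simplified sketch of the kind of decision logic an OEM could layer on top of these driver-state metrics; the thresholds, score ranges, and vehicle actions are illustrative assumptions rather than anything specified by Affectiva.

    // Simplified sketch of decision logic an OEM might build on top of these metrics.
    // The thresholds, 0..1 score ranges, and action names are illustrative assumptions.
    #include <iostream>

    struct DriverState { float drowsiness, distraction, anger; }; // assumed 0..1 scores

    enum class VehicleAction { None, AudioAlert, SeatBeltVibration, SuggestBreak, RequestTakeover };

    VehicleAction decideAction(const DriverState& s) {
        if (s.drowsiness > 0.9f || s.distraction > 0.9f)
            return VehicleAction::RequestTakeover;       // semi-autonomous system takes control
        if (s.drowsiness > 0.6f)
            return VehicleAction::SeatBeltVibration;     // jolt the driver to attention
        if (s.distraction > 0.6f)
            return VehicleAction::AudioAlert;            // instruct the driver to remain engaged
        if (s.anger > 0.7f)
            return VehicleAction::SuggestBreak;          // soothing playlist, suggested stop, etc.
        return VehicleAction::None;
    }

    int main() {
        DriverState s{0.75f, 0.2f, 0.1f};                // example: drowsy but attentive driver
        std::cout << "action=" << static_cast<int>(decideAction(s)) << "\n";
        return 0;
    }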

Occupant Experience Monitoring

Affectiva Automotive AI measures the mood and reactions of a vehicle’s occupants so OEMs and Tier 1s can use this data to personalize the in-cabin environment and the overall ride. This becomes critically important in autonomous vehicles, robo-taxis, and ridesharing, where passengers are a captive audience in an entertainment hub, selecting transportation brands based on which offers the best and most personalized experience.

Affectiva Automotive AI provides data to revolutionize the transportation experience:

  • Personalize content recommendations, such as video and music, based on the emotions of the passengers; make e-commerce purchase recommendations based on reactions to the content that is served or the route that is taken.
  • Adapt environmental conditions, such as lighting and heating, based on levels of comfort and drowsiness; change the autonomous driving style if it makes passengers anxious or uncomfortable.
  • Understand user frustration or confusion with virtual assistants and conversational interfaces, and design these to be emotion-aware, so they can respond in an appropriate and relevant manner.