Machines and emotions still seem like a tale of two separate worlds. Artificial Intelligence could bridge the human-machine gap through advanced facial-recognition technologies. Fraunhofer expert Matthias Peissner was recently interviewed by “Badim – Market intelligence review”. Read the essential Q&A:
1. Can Artificial Intelligence (AI) recognize human emotions on a face? And how is AI performing this?
Ekman and Friesen developed the basics of facial expression analysis in the 1960s and 70s. They identified six basic human emotions that can be distinguished by their specific facial expressions. They also developed the so-called “Facial Action Coding System” (FACS), which encodes facial expressions by describing exactly how the individual facial muscles move. Taken together, these two findings can be used to automate emotion recognition by analysing and classifying videos of facial expressions.
Today there are also other emotion-detection systems with somewhat “simpler” mechanisms, mostly based on big-data approaches. They compare facial expressions against collections of thousands of examples of positive and negative emotional expressions. All of this can now be done with a webcam very quickly – nearly in real time.
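The FACS idea described above – mapping combinations of facial muscle movements (“action units”, AUs) to basic emotions – can be sketched in a few lines. This is a minimal, illustrative toy: the AU combinations below are simplified, and a real system would first estimate AU activations from video frames with a trained model rather than receive them as a ready-made set.

```python
# Simplified, illustrative FACS-style rules: emotion -> set of action units (AUs).
# These mappings are a rough sketch for demonstration, not a validated coding table.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "disgust":   {9, 15},        # nose wrinkler + lip corner depressor
}

def classify(active_aus: set) -> str:
    """Return the emotion whose AU rule set best overlaps the detected AUs."""
    best, best_score = "neutral", 0.0
    for emotion, rule in EMOTION_RULES.items():
        score = len(rule & active_aus) / len(rule)  # fraction of the rule matched
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify({6, 12}))        # happiness
print(classify({1, 2, 5, 26}))  # surprise
```

The “simpler” big-data systems mentioned above replace this hand-written rule table with a classifier trained on thousands of labelled example faces, but the input/output shape of the problem is the same.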
2. What are the major applications of AI-based emotion recognition?
Industrial and commercial applications are still in their infancy. In market research, emotion recognition is used to assess the effectiveness of advertisements or the emotional user experience of new products and services. Another field is automotive: driver state detection is a hot topic today. In the past, it was mainly directed towards analysing the driver’s attention and vigilance for safety purposes. Now car manufacturers care more about drivers’ wellbeing and health, so emotions in the car are gaining a lot of attention. There are also therapeutic applications that help people with socio-emotional impairments, such as autism, to exercise their socio-emotional skills. They can get feedback about their own facial expressions or learn to read emotions from other faces.
For the future, I can imagine many interesting applications. Telecommunication could be extended by new forms of emotional exchange and expression. Emotion detection could compensate for some of the shortcomings of today’s web conferences, chats and phone calls and add completely new qualities to human-to-human communication. Moreover, I think today’s trend of health and fitness tracking will be extended to cover more and more of our psychological and emotional wellbeing. It will be interesting to learn how happy or positive your day has been and to gain insights into the reasons and incidents that matter most for your daily wellbeing.
3. What is the observed error rate and what is the level of criticality depending on applications?
Like humans, machines make errors when they try to read emotions from others’ faces. We have to be fair to the computers: their recognition rates for facial emotion detection depend heavily on the contextual conditions, on the emotions you want to recognize, and on the number of different emotional states you want to discriminate. For some emotions, such as disgust or happiness, recognition rates of around 80% seem possible under controlled conditions.
For most current applications, criticality is low, since emotion detection is mostly used as an additional input alongside other, more robust modalities.
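Combining a noisy facial-emotion signal with more robust modalities, as described above, is often done via so-called late fusion. The following is a hedged sketch under assumed numbers: the modality names, scores and reliability weights are purely illustrative, not taken from any real driver-monitoring system.

```python
# Late fusion sketch: weight each modality's score by its assumed reliability.
# All scores are in [0, 1]; higher means "more stressed" in this toy example.

def fuse(scores: dict, reliability: dict) -> float:
    """Reliability-weighted average of per-modality scores."""
    total_weight = sum(reliability[m] for m in scores)
    return sum(scores[m] * reliability[m] for m in scores) / total_weight

# The face channel reports high stress but is the least reliable input;
# steering behaviour and heart rate pull the fused estimate back down.
scores      = {"face": 0.9, "steering": 0.3, "heart_rate": 0.4}
reliability = {"face": 0.8, "steering": 0.95, "heart_rate": 0.9}
print(round(fuse(scores, reliability), 3))  # 0.515
```

Because the facial channel contributes only one weighted vote among several, a misread expression shifts the fused estimate rather than flipping the decision – which is why, as noted above, errors in emotion detection are rarely critical in such setups.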
4. Is there any risk to individual freedom in the long term?
Definitely yes. I find it very important not only to research new technological capabilities but also to keep an eye on their ethical and social implications. I observe that what we consider freedom and risk, and our values in general, are changing dramatically. A couple of years ago, the mere thought of a system that tracks our activities and locations 24/7 would have scared us enormously. Now we take our mobiles with us even when we go to the toilet, just to make sure that even those steps are added to our daily count.
I don’t want to complain about all this. We all decide for ourselves. And I think preserving this freedom of choice is one of the most important principles for those of us who shape tomorrow’s technologies, products and services. We must put users in control of whatever personal data is collected and processed.
The race for Artificial Intelligence and facial recognition has just begun. Don’t hesitate to contact Matthias Peissner directly for further details on business use cases and future perspectives.