
Facial recognition: This new AI tool can spot when you are nervous or confused

Fujitsu Laboratories has developed a technology that it says tracks complex facial expressions such as awkward giggles, nervousness or confusion more accurately than existing tools.
Written by Daphne Leprince-Ringuet, Contributor

Whether you're intrigued or sceptical, the use of facial recognition technology is growing – and now Fujitsu claims to have developed a way to track emotions better, too.

The company's laboratories have come up with an AI-based technology that can track subtle changes of expression such as nervousness or confusion. Companies like Microsoft already use emotion tools to recognise facial expressions, but these are limited to eight "core" states: anger, contempt, fear, disgust, happiness, sadness, surprise and neutral.

The current technology works by identifying various action units (AUs) – that is, specific facial muscle movements that can be linked to particular emotions. For example, if both the AU "cheek raiser" and the AU "lip corner puller" are identified together, the AI can conclude that the person it is analysing is happy.
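To illustrate the principle, here is a minimal Python sketch of rule-based AU-to-emotion mapping. Only the happiness rule comes from the article; the function, confidence scores and threshold are hypothetical.

```python
# Hypothetical sketch of rule-based AU-to-emotion mapping. The happiness
# rule (cheek raiser + lip corner puller) is the article's example; the
# scores and threshold are illustrative, not Fujitsu's implementation.

# Action units detected in one image, with confidence scores
# (as might be produced by a facial-analysis model).
detected_aus = {
    "cheek_raiser": 0.92,       # AU6 in the Facial Action Coding System
    "lip_corner_puller": 0.88,  # AU12
}

# Each emotion is inferred when all of its required AUs are present.
EMOTION_RULES = {
    "happiness": {"cheek_raiser", "lip_corner_puller"},
}

def infer_emotion(aus, threshold=0.5):
    """Return the first emotion whose required AUs all exceed the threshold."""
    active = {name for name, score in aus.items() if score >= threshold}
    for emotion, required in EMOTION_RULES.items():
        if required <= active:  # subset test: all required AUs detected
            return emotion
    return "neutral"

print(infer_emotion(detected_aus))  # -> happiness
```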

A spokesperson for Fujitsu told ZDNet: "The issue with the current technology is that the AI needs to be trained on huge datasets for each AU. It needs to know how to recognise an AU from all possible angles and positions. But we don't have enough images for that – so usually, it is not that accurate."

Because of the vast amounts of data needed to train an AI to detect emotions effectively, current technologies struggle to recognise what we feel, particularly if we are not ideally positioned – that is, sitting in front of a camera and looking straight into it.

The myriad research papers demonstrating that current emotion recognition technology is unreliable seem to support this view.

But Fujitsu claims it has now found a way around the issue. Instead of creating more images to train the AI, its researchers came up with a tool to extract more data from a single picture. Thanks to what the company calls a "normalisation process", it can convert pictures taken from a particular angle into images that resemble a frontal shot.

Once the image has been appropriately enlarged, reduced or rotated, the resulting frontal picture lets the AI detect AUs much more easily – and much more accurately.
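Fujitsu has not published the details of its normalisation step, but the description matches standard face alignment: estimating a rotation, scale and translation from detected facial landmarks, then warping the image towards a canonical frontal layout. A minimal OpenCV sketch under that assumption – the landmark input, canonical coordinates and output size are all illustrative:

```python
# Illustrative face "normalisation" in the spirit of the article's
# description: rotate, scale and translate a face so it approximates a
# canonical frontal layout. Generic face alignment, not Fujitsu's method.
import cv2
import numpy as np

# Canonical pixel positions for left eye, right eye and nose tip in a
# 112x112 output crop -- the values are arbitrary illustrative choices.
CANONICAL = np.float32([[38, 45], [74, 45], [56, 70]])

def normalise_face(image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Warp a face towards a frontal pose.

    `landmarks` is a (3, 2) float32 array of detected left-eye, right-eye
    and nose-tip coordinates, supplied by any landmark detector.
    """
    # Estimate a similarity transform (rotation + uniform scale +
    # translation) from the detected points to the canonical layout.
    matrix, _ = cv2.estimateAffinePartial2D(landmarks, CANONICAL)
    # Apply the transform to produce the frontal-style crop.
    return cv2.warpAffine(image, matrix, (112, 112))
```

Any off-the-shelf landmark detector could supply the input points; the key idea is that one oblique photo, once warped, becomes usable data for a frontal-face AU detector.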

"With the same limited dataset, we can better detect more AUs, even in pictures taken from an oblique angle, for example," said Fujitsu's spokesperson. "And with more AUs, we can identify complex emotions, which are more subtle than the core expressions currently analysed."

Fujitsu says it can therefore now detect emotional changes as elaborate as nervous laughter, with a detection accuracy rate of 81% – a figure the firm says was determined through "standard evaluation methods".

This compares to an accuracy rate of 60% for Microsoft's tool, according to independent research, which also found that it failed to detect emotion in almost all of the pictures presenting a full left or full right profile.

Fujitsu mentioned various potential applications for its new technology, including robots capable of both recognising our most subtle changes of mood and recreating those shifts themselves – a step towards crossing the uncanny valley. The tool could also improve road safety by detecting even small changes in drivers' concentration.

Whether the tool can achieve a high level of accuracy, however, is likely to be a topic of debate for psychologists, who argue that AUs do not on their own reflect emotion. 

Available evidence shows that people don't always scowl when angry, for example – in fact, anger can sometimes cause us to smile. That sounds like the next challenge for emotion recognition technology.
