Academia and commercial AI go their separate ways on emotions.

Several AI companies are pressing ahead with products that purport to infer people's emotions from their facial expressions. Think Pepper, the robot produced by SoftBank Robotics in Japan. Academia, meanwhile, has become increasingly divided over whether such inferences are possible and, if so, how reliable they are. For decades the accepted view was that facial expressions are windows into underlying emotions. More recent research casts doubt on that view, and specifically on the reliability of such inferences. If they are right only some of the time, should AI companies be allowed to sell products that imply a reliability that isn't there? An interesting question.

Link to article:

You may also like to browse other AI articles: