02/07/2017

Wearable AI system can detect a conversation’s tone

Pharma Horizon

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there were a more objective way to measure and understand our interactions?

MIT scientists have developed a new artificially intelligent wearable system that can predict whether a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

The research was conducted by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES), and was made possible in part by the Samsung Strategy and Innovation Center.

The researchers focused on using both physical feedback and audio data to train AI to analyze and recognize when conversations take a turn. Study participants were asked to tell “a happy or sad story of their own choosing” while the AI system, running on an experimental Samsung Simband, measured the wearers’ physical responses.

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.
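
To illustrate the general idea of scoring sentiment over short segments from fused signals, here is a minimal sketch. It is not the authors’ model: the paper uses deep-learning techniques, while this example uses a simple logistic-regression classifier on synthetic data, and all feature names, dimensions, and labels below are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): classify the tone of 5-second
# conversation segments from fused audio, text, and physiological features.
# Feature layout and labels are illustrative assumptions, not from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: one row per 5-second segment.
# Assumed modalities: 8 audio features (e.g. pitch/energy statistics),
# 4 text features (e.g. word-level sentiment counts),
# 4 physiological features (e.g. heart rate, skin conductance).
n_segments = 500
X_audio = rng.normal(size=(n_segments, 8))
X_text = rng.normal(size=(n_segments, 4))
X_physio = rng.normal(size=(n_segments, 4))

# Simple feature-level fusion: concatenate the modalities per segment.
X = np.hstack([X_audio, X_text, X_physio])

# Synthetic labels: 0 = sad, 1 = neutral, 2 = happy.
y = rng.integers(0, 3, size=n_segments)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Per-segment "sentiment score": predicted probability of the happy class.
scores = clf.predict_proba(X_test)[:, 2]
print("segment-level accuracy:", clf.score(X_test, y_test))
print("first five sentiment scores:", np.round(scores[:5], 2))
```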

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” said graduate student Tuka Alhanai, who co-authored the related paper (“Predicting Latent Narrative Mood using Audio and Physiologic Data” [PDF]) with PhD candidate Mohammad Ghassemi, in a release. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”


Source: Forbes