Researchers at Columbia University in New York have developed a robot whose face can imitate human expressions, and even anticipate them before they become visible. The robot is called Emo, short for emotion.
Emo, which is a robot head rather than a full body, took five years of work; the results were presented in the journal Science Robotics. The head is covered with a silicone "skin" driven by the robotic structure underneath, which contains 26 small motors. These make it possible to imitate human expressions, notably the smile.
What is remarkable about Emo is that it can perceive that the human in front of it is about to smile, and as a result it smiles 840 milliseconds before they do. To achieve this, researchers at Columbia's Creative Machines Lab developed two artificial intelligences: one predicts what the person facing Emo is about to do, and the other drives the facial motors to produce the matching expression.
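To make the division of labor concrete, the two-model pipeline described above can be sketched in a few lines of Python. This is purely illustrative: the function names, the linear extrapolation used as the "predictor", and the placeholder motor mapping are assumptions for the sketch, not the architecture actually published in Science Robotics.

```python
# Hypothetical sketch of Emo's two-model control loop.
# All names, shapes, and the naive models are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    landmarks: List[float]  # facial landmark positions seen by the camera

def predict_expression(recent_frames: List[Frame]) -> List[float]:
    """Model 1 (assumed): anticipate the upcoming expression from the
    micro-movements in the last frames. Here: a naive linear
    extrapolation of the landmark trajectory."""
    prev = recent_frames[-2].landmarks
    last = recent_frames[-1].landmarks
    return [l + (l - p) for p, l in zip(prev, last)]

def inverse_model(target_landmarks: List[float]) -> List[float]:
    """Model 2 (assumed): map a target expression onto commands for the
    26 motors. Here: a placeholder that pads and clamps to [0, 1]."""
    cmds = target_landmarks[:26] + [0.0] * max(0, 26 - len(target_landmarks))
    return [min(1.0, max(0.0, c)) for c in cmds]

# One control step: anticipate the expression, then actuate early
# (in the real robot, roughly 840 ms before the human smiles).
frames = [Frame([0.10, 0.20]), Frame([0.12, 0.26])]
motor_commands = inverse_model(predict_expression(frames))
```

The point of the split is that prediction and actuation can be trained separately: the first model only needs videos of human faces, while the second only needs the robot's own face.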
To train these artificial intelligences on human expressions, the scientists first used footage of people on YouTube. But after that, they mainly had the robot watch videos of itself making expressions, like a human in front of a mirror, so that it could learn all the small movements that form on a face before an expression appears.
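The "mirror" stage amounts to self-supervised data collection: the robot moves its own motors, observes the result, and records command/observation pairs. The sketch below illustrates that idea only; the `fake_camera` stand-in and every name in it are assumptions, not the researchers' actual code.

```python
# Illustrative sketch of "mirror" self-modeling: babble random motor
# commands, observe the resulting face, and store the pairs for training.
# The fake_camera stand-in and all names here are assumptions.

import random

def fake_camera(motor_commands):
    """Stand-in for observing the robot's own face in a mirror; here each
    landmark simply tracks its motor with a little sensor noise."""
    return [c + random.uniform(-0.01, 0.01) for c in motor_commands]

def collect_self_model_data(n_samples=1000, n_motors=26):
    dataset = []
    for _ in range(n_samples):
        cmd = [random.random() for _ in range(n_motors)]  # random "babbling"
        observed = fake_camera(cmd)                       # watch the mirror
        dataset.append((cmd, observed))                   # pair for training
    return dataset

data = collect_self_model_data()
```

A model fitted on such pairs learns which motor commands produce which facial movements, without any human labeling.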
The “revolution” of non-verbal communication
Yuhang Hu, the doctoral student who led this study, speaks frankly of a "revolution". This non-verbal communication, this visual feedback, is essential to improving interactions with humans. So far, as we have seen with ChatGPT, we can communicate with artificial intelligence through words, but there has been little progress on non-verbal communication. Currently, during a conversation with a robot, it is obvious that we are speaking to a non-human. But a robot that reacts to your expressions, as a human being would, helps create more immersion. Of course, we remain aware that we are talking to a machine, but the exchange happens more naturally.
Moreover, the researchers at the Creative Machines Lab plan to integrate a ChatGPT-type artificial intelligence into Emo, so that verbal communication is added to the non-verbal. They also need to broaden the range of available expressions by adding more mini-motors to the current 26. Over time, we can imagine Emo becoming a much more sophisticated version of today's educational robots, or, at home, a replacement for a smart speaker.
In the longer term, this type of technology could bring us closer to the robots we have seen in so many science fiction films: full-fledged companions with whom humans interact with complete fluidity.