An invention that reads words or sentences formulated in the head

Deciphering words spoken in the head with a portable electroencephalogram-type device: a great source of hope for those who are deprived of speech.


Research tests of the reading and formulation technology, a source of hope for people who have lost the use of speech. The words are guessed by an artificial intelligence connected to the sensors. The device deciphers words, not thoughts. (UTS)

It was at a major scientific conference that the thoughts of a student wearing a few electrodes on his head were translated onto a screen. It is a great source of hope for all those who are temporarily or permanently deprived of speech. The experts present hailed this invention as the best news of the congress. Details from Géraldine Zamansky, journalist at the Health Magazine on France 5.

franceinfo: Would this Australian research allow us to express ourselves without speaking?

Géraldine Zamansky: This Australian team, from the University of Technology Sydney, has just presented the results of this research at a scientific conference. At the same time, it posted a fairly spectacular video on its website. Sitting next to a computer, a young man has his head covered in small sensors that track the electrical signals generated by brain activity, since it is indeed electrical current that circulates between neurons.

But this has nothing to do with a classic electroencephalogram. First, the sentence the young man is silently formulating appears on the screen next to him. Then, a few seconds later, comes the sentence obtained by the sensors, or, more precisely, the words guessed by the artificial intelligence connected to the sensors.
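To give a very rough idea of what "an artificial intelligence connected to the sensors" might look like in code, here is a small illustrative sketch in Python. Everything in it, the tiny vocabulary, the random signal and the small recurrent network, is an assumption made for the example; it is not the model used by the Sydney team.

# Illustrative sketch only: a toy decoder that maps multichannel EEG-like
# signals to words from a tiny vocabulary. Not the research team's actual model.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "i", "want", "a", "coffee", "please"]  # hypothetical toy vocabulary

class ToyEEGDecoder(nn.Module):
    def __init__(self, n_channels=8, hidden=64, vocab_size=len(VOCAB)):
        super().__init__()
        # A recurrent encoder reads the EEG channels over time...
        self.encoder = nn.GRU(input_size=n_channels, hidden_size=hidden, batch_first=True)
        # ...and a linear head predicts one word per time step.
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, eeg):              # eeg: (batch, time, channels)
        features, _ = self.encoder(eeg)  # (batch, time, hidden)
        return self.head(features)       # (batch, time, vocab_size)

if __name__ == "__main__":
    model = ToyEEGDecoder()
    fake_eeg = torch.randn(1, 50, 8)     # 50 time steps of simulated 8-channel signal
    logits = model(fake_eeg)
    words = [VOCAB[i] for i in logits.argmax(dim=-1)[0].tolist()]
    print(" ".join(words))               # untrained, so the output is gibberish

Untrained, such a model only produces gibberish; the point is simply that the decoding step amounts to turning a multichannel time series into a sequence of word probabilities.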

Is this system capable of guessing words from electrical activity?

Yes, but Professor Chin-Teng Lin, who directs this research, immediately urged me to be careful. His device does not decipher a thought, but rather words, a sentence. They really must be pronounced “in the head” for the system to recognize them. And rest assured, it is impossible to achieve this just by placing sensors on someone's skull.

Professor Lin explained to me that it takes at least a few minutes of adaptation, with the person's participation, because even though our brains have things in common when they produce language, each one is unique. So even if they cannot speak, the person must first read certain sentences for the system to adjust. The longer this learning and personalization phase lasts, the better the results.
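That learning and personalization phase can be pictured, very schematically, as a short fine-tuning loop over a few sentences the person reads in their head while the signals are recorded. The sketch below is again only an illustration under assumptions, with made-up sentences and random numbers standing in for real recordings; it is not the team's actual calibration procedure.

# Illustrative calibration sketch: fine-tune a toy decoder on a handful of
# sentences the user silently reads, paired with the recorded signals.
# The data here is random; in reality each sentence would come with real EEG.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "i", "need", "water", "help", "now"]   # hypothetical toy vocabulary
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, len(VOCAB)))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few "calibration sentences" the person reads in their head (invented here),
# each paired with one 8-channel feature vector per word.
calibration = [("i need water", torch.randn(3, 8)),
               ("help now", torch.randn(2, 8))]

for epoch in range(20):                                  # more passes and more sentences give a better fit
    for sentence, eeg_features in calibration:
        targets = torch.tensor([VOCAB.index(w) for w in sentence.split()])
        logits = model(eeg_features)                     # (n_words, vocab_size)
        loss = loss_fn(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print("final calibration loss:", loss.item())

The loop simply nudges the model so that, for this particular person, the recorded signals line up with the words they were silently reading; running it longer, on more sentences, is roughly what "the longer this phase lasts, the better the results" would mean in practice.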

So I guess the video shows an example of successful “translation”?

Yes, a coffee order is deciphered almost perfectly. On the other hand, when it comes to the nuanced appreciation of a film, the errors multiply. But that is not really the priority for someone who can no longer speak after a stroke, for example. Especially since verb recognition reportedly already works very well.

That is because several areas of the brain are then activated, linked to the meaning of the verb: “walking” wakes up “movement neurons”, so to speak. This creates a signature that is easier for the machine to translate. Some urgent needs could therefore be expressed quickly, and far more simply than with current devices, which track eye movements.
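The idea of a "signature that is easier to translate" can be illustrated with a deliberately simple classifier on invented data: if motion verbs produced stronger activity on a few "motor" channels, even a basic model would separate them from other words. The channels, numbers and labels below are all assumptions made for the example, not measurements from the study.

# Illustrative sketch: a simple classifier separating two invented "signal
# signatures", e.g. motion verbs vs. other words. All features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Pretend motion verbs produce stronger activity on "motor" channels (first 4 of 8).
motion = rng.normal(1.0, 0.5, size=(100, 8))
motion[:, 4:] = rng.normal(0.0, 0.5, size=(100, 4))   # non-motor channels look ordinary
other = rng.normal(0.0, 0.5, size=(100, 8))

X = np.vstack([motion, other])
y = np.array([1] * 100 + [0] * 100)                   # 1 = motion verb, 0 = other word

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))          # a clear signature makes this easy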

The study on the University of Technology Sydney website

