For years, new technologies have allowed patients with ALS to communicate after losing the ability to speak; Stephen Hawking was a well-known example. Still, all of those technologies shared a limitation with plenty of room for improvement: they were based on writing, so converting the text into audio introduced a delay that ultimately kept patients from taking part freely in a conversation. Now, however, a team of scientists at the University of California, Davis has developed a brain-computer interface that uses AI so that these patients can speak in real time.
So far, the implant in question has been tested with a single ALS patient. He was able to talk with his family, add intonation to questions and exclamations, and even sing. All of it without delay, with the words flowing from his brain straight to a speaker.
It is the most advanced brain-computer interface of its kind to date. Thanks to AI, there was no need to go through written text first, since the algorithm can link different neural signals directly to words and intonations. The study is still in its infancy, with only one participant so far. There is a long road ahead, but the beginning is fascinating.
The frontiers of brain-computer interfaces
Using brain-computer interfaces for patients with neurological diseases and injuries is nothing new. From dozens of public research groups to private companies such as Elon Musk's Neuralink, many scientists have worked on devices of this kind to make life easier for people with conditions such as ALS.
Some aim to restore lost mobility; others focus on speech. In the past, devices activated by whatever parts of the body could still move were used to write what the patient wanted on a screen. That is how Stephen Hawking's machine worked, with the cursor controlled by a cheek muscle. Over time, patients no longer needed to write anything at all: brain signals could be interpreted to move the cursor remotely and put the patient's thoughts on the screen, from where they could also be turned into spoken words.
Now, AI has made it possible to go one step further, thanks to algorithms trained to link the firing of specific groups of neurons to different words and intonations, and to determine whether a phrase continues or has ended. That is exactly what these scientists did.
How did AI help the ALS patient speak again?
The device consists of four electrode arrays implanted in the patient's brain, specifically in the area responsible for controlling speech. Once implanted, the first step is to train the algorithm. To do that, the scientists asked the patient to try to pronounce a series of phrases and expressions shown on a screen. He could not actually say them, for obvious reasons, given how far his ALS had advanced. However, the neurons firing electrical impulses in his brain were the same ones that would fire if he could speak normally. The AI was able to associate those signals with the phrases he was trying to say, building a correlation that was then used to train the brain-computer interface.
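For readers curious about what "building a correlation" can look like in practice, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption rather than a detail from the study: the window size, the channel count, the use of phoneme labels and a scikit-learn logistic-regression decoder all stand in for whatever the UC Davis team actually used.

```python
# Hypothetical sketch of the training step described above. It assumes we
# already have windowed firing-rate features recorded from the four electrode
# arrays while the patient *attempted* each prompted phrase, plus phoneme
# labels aligned to those windows. Shapes and models are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_WINDOWS = 5000    # short analysis windows across all training phrases (assumed)
N_CHANNELS = 256    # e.g. 4 arrays x 64 electrodes (assumed layout)
N_PHONEMES = 40     # rough size of an English phoneme inventory

# Placeholder data standing in for real recordings: firing rates per window
# and the phoneme the patient was trying to produce in that window.
firing_rates = rng.poisson(lam=5.0, size=(N_WINDOWS, N_CHANNELS)).astype(float)
target_phonemes = rng.integers(0, N_PHONEMES, size=N_WINDOWS)

X_train, X_test, y_train, y_test = train_test_split(
    firing_rates, target_phonemes, test_size=0.2, random_state=0
)

# The learned "correlation": a decoder that maps the neural activity in each
# window to the phoneme the patient intended to say.
decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)

print(f"Held-out accuracy: {decoder.score(X_test, y_test):.2f}")
```

With random placeholder data the accuracy is near chance, of course; the point is only to show the shape of the problem: neural features in, intended speech units out.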
Once the AI was trained, whenever the patient tried to speak, his brain signals were automatically translated by the device into words, with appropriate pauses and intonation. It worked so well that he could even sing. And best of all, the time between thinking about what to say and hearing it spoken was practically the same as when any of us speak and hear our own voice: almost none.
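To make the "almost no delay" idea concrete, this second sketch, just as hypothetical as the first, shows the general shape of streaming decoding: each short window of neural activity is turned into sound as soon as it arrives, instead of waiting for a full sentence of text. The names read_neural_window, synthesize and pitch_model are invented, and the decoder is assumed to be the one trained in the previous sketch.

```python
# Illustrative real-time loop: decode in short chunks so speech starts while
# the patient is still "talking". All names here are made up for the example.

import time
import numpy as np

WINDOW_MS = 20  # assumed chunk length; keeps per-window latency tiny

def read_neural_window():
    """Stand-in for streaming firing rates from the implant (random here)."""
    return np.random.default_rng().poisson(lam=5.0, size=256).astype(float)

def synthesize(phoneme_id, pitch):
    """Stand-in for the speech synthesizer; a real system would emit audio."""
    print(f"phoneme {phoneme_id:2d} at pitch {pitch:.1f} Hz")

def stream_speech(decoder, pitch_model, seconds=1.0):
    """Decode one window at a time instead of waiting for a whole sentence."""
    for _ in range(int(seconds * 1000 / WINDOW_MS)):
        window = read_neural_window()
        phoneme = int(decoder.predict(window[None, :])[0])
        pitch = float(pitch_model(window))   # intonation decoded alongside the words
        synthesize(phoneme, pitch)
        time.sleep(WINDOW_MS / 1000)         # simulate real-time pacing

# Example (after training the decoder above):
# stream_speech(decoder, pitch_model=lambda w: 110 + w.mean(), seconds=0.2)
```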
It is true that the speech was not always clear. Even so, listeners understood what the patient said in 60% of cases; when he tried to speak without the brain-computer interface, they understood him in only 4% of cases.
Why are these devices so important?
When a person communicates with a delay, it is much harder for them to take part in a conversation. Other people may interrupt them again and again because they cannot tell when the patient has finished speaking, and the patient cannot interject either, since the thread of their thoughts is out of sync with what is being said aloud. That is why this use of AI is so important.

That said, there is still a great deal to study. Only one patient, a person with ALS, took part in the study. It would be worth repeating the process with a larger number of people and with other conditions, such as cerebral palsy. If the results hold up, it will be a milestone. In fact, even with this tiny sample, we could argue that it already is.
Source: Hipertextual
