Scientists managed to reproduce mental speech with the help of an implant in the brain

People who have lost the ability to speak in their own voice typically rely on speech synthesizers. Modern technology offers many solutions, from simple keyboard input to typing with eye movements on an on-screen letter board. All of these, however, are quite slow, and the more severe a person's condition, the longer typing takes. This problem may soon be solved by a neural interface: an implant of electrodes placed directly on the brain, which reads its activity with maximum accuracy so that the system can then interpret that activity into intelligible speech.


Researchers at the University of California, San Francisco, described in an April 25 article in the journal Nature how they managed to voice a person's mental speech with the help of a brain implant. The sound was reportedly inaccurate in places, but the sentences were reproduced in full and, most importantly, understood by outside listeners. Getting there took years of analyzing and comparing recorded brain signals, and the technology is not yet ready for use outside the lab. Still, the experiment showed that "using only the brain, speech can be decoded and reproduced," says Gopala Anumanchipalli, an expert in the study of the brain and speech.

"The technology described in the new study holds the promise of eventually restoring people's ability to speak fluently," explains Frank Guenther, a neuroscientist at Boston University. "It's hard to overestimate the importance of this for all these people... It's an incredible isolation and a real nightmare not to be able to express your needs and just interact with society."

As already mentioned, existing speech aids based on typing words in one way or another are tedious and often produce no more than 10 words per minute. Earlier studies have already used brain signals to decode small chunks of speech, such as vowels or single words, but with a far more limited vocabulary than the new work.

Anumanchipalli, together with neurosurgeon Edward Chang and bioengineer Josh Chartier, studied five people who had electrode grids temporarily implanted on their brains as part of epilepsy treatment. Because these subjects could still speak on their own, the researchers recorded brain activity while they spoke sentences aloud. The team then matched the brain signals that control the lips, tongue, jaw, and larynx with the actual movements of the vocal tract, which allowed them to create a unique virtual voice box for each person.

The researchers then translated the movements of this virtual vocal apparatus into sounds. Using this intermediate step "improved speech and made it more natural," says Chartier. About 70 percent of the reconstructed words were intelligible to listeners asked to transcribe the synthesized speech. For example, when a subject tried to say "Get a calico cat to keep the rodents away," a listener heard "The calico cat to keep the rabbits away." Some sounds came out well, such as "sh"; others, such as "buh" and "puh," sounded less distinct.
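The two-stage pipeline described above (neural activity decoded into vocal tract movements, which are then turned into acoustic features) can be sketched roughly as follows. This is a minimal illustrative sketch only: the actual study used trained recurrent neural networks, while the linear maps, feature counts, and names here are placeholder assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions, not taken from the study.
N_ELECTRODES = 256  # brain-surface electrode channels
N_ARTIC = 33        # articulatory features: lips, tongue, jaw, larynx
N_ACOUSTIC = 32     # acoustic features, e.g. spectrogram bins

# Stage 1: brain signals -> vocal-tract (articulatory) movements.
# Stand-in linear map; the study used a recurrent network here.
W_neural_to_artic = rng.normal(size=(N_ARTIC, N_ELECTRODES)) * 0.01

# Stage 2: articulatory movements -> acoustic features.
# The article notes this stage was similar enough across people to reuse.
W_artic_to_acoustic = rng.normal(size=(N_ACOUSTIC, N_ARTIC)) * 0.1

def decode(neural_frames: np.ndarray) -> np.ndarray:
    """Map a (time, electrodes) recording to (time, acoustic) features."""
    articulation = neural_frames @ W_neural_to_artic.T    # stage 1
    acoustics = articulation @ W_artic_to_acoustic.T      # stage 2
    return acoustics

# One second of synthetic "recordings" at 100 frames per second.
neural = rng.normal(size=(100, N_ELECTRODES))
features = decode(neural)
print(features.shape)  # (100, 32)
```

The key design point this mirrors is the intermediate articulatory representation: rather than mapping brain activity straight to sound, the decoder first recovers how the vocal tract moves, which is what the article credits with making the speech more natural.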

This technology depends on knowing how a person uses their vocal tract. But many patients simply will not have that information or the corresponding brain activity, since they cannot speak at all due to a stroke, damage to the vocal tract, or Lou Gehrig's disease (which Stephen Hawking had).

"By far the biggest hurdle is how you're going to build a decoder when you don't have an example of the speech it's going to be built for," says Mark Slutsky, a neuroscientist and neural engineer at Northwestern University's Feinberg School of Medicine in Chicago.

However, in some tests the researchers found that the algorithms translating virtual vocal tract movements into sounds were similar enough from person to person that they could be reused across different people, perhaps even for those who cannot speak at all.

For now, though, building a universal map from brain signal activity to the workings of the vocal apparatus remains a difficult task, especially for people whose speech apparatus has been inactive for a long time.



Source: 3dnews.ru
