Reading minds is a step closer to reality: scientists have developed an artificial intelligence system that can turn brain activity into text.
For now, the system works only on neural patterns detected while someone is speaking aloud, but experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome.
Joseph Makin, co-author of the research from the University of California, San Francisco, said they are not there yet, but they think the system could eventually form the basis of a speech prosthesis.
Writing in the journal Nature Neuroscience, Makin and colleagues describe how they developed their system. They recruited four participants who had electrode arrays implanted in their brains to monitor epileptic seizures.
The participants were asked to read fifty set sentences aloud multiple times, including “Those thieves stole 30 jewels” and “Tina Turner is a pop singer.” While they spoke, the team tracked their neural activity.
That data was then fed into a machine-learning algorithm, a type of artificial intelligence system that converted the brain activity data for each spoken sentence into a string of numbers. To make sure the numbers related only to aspects of speech, the system compared sounds predicted from small chunks of the brain activity data with the actual recorded audio. The string of numbers was then fed into a second part of the system, which converted it into a sequence of words.
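As a rough illustration of that two-stage design, here is a minimal sketch assuming PyTorch. The class name, layer sizes, and electrode count are hypothetical, not the authors' actual architecture: an encoder turns windows of brain activity into the “string of numbers,” an auxiliary head predicts audio features from those numbers so they stay tied to speech, and a decoder turns them into words.

```python
# Illustrative sketch only: names, sizes, and the use of PyTorch are assumptions.
import torch
import torch.nn as nn


class BrainToText(nn.Module):
    def __init__(self, n_electrodes=256, hidden=128, vocab_size=250):
        super().__init__()
        # Encoder: neural activity -> latent "string of numbers"
        self.encoder = nn.GRU(n_electrodes, hidden, batch_first=True)
        # Auxiliary head: predict audio features from the latents,
        # to be compared with the actual recorded audio during training
        self.audio_head = nn.Linear(hidden, 13)
        # Decoder: latents -> sequence of word IDs
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.word_head = nn.Linear(hidden, vocab_size)

    def forward(self, neural, max_words=10):
        # neural: (batch, time, n_electrodes) windows of brain activity
        latents, state = self.encoder(neural)      # (batch, time, hidden)
        audio_pred = self.audio_head(latents)      # predicted audio features
        # Feed the final encoder state to the decoder, one step per output word
        dec_in = state.transpose(0, 1).repeat(1, max_words, 1)
        dec_out, _ = self.decoder(dec_in, state)
        word_logits = self.word_head(dec_out)      # (batch, max_words, vocab)
        return word_logits, audio_pred
```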
How the Artificial Intelligence Works
At first, the system spat out nonsense sentences. But as it compared each predicted sequence of words with the sentences that were actually read aloud, it improved, learning how the string of numbers related to words and which words tend to follow one another.
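That training loop, which compares the predicted words with the sentence actually read and the predicted audio with the recording, could look roughly like the sketch below. It reuses the hypothetical `BrainToText` model from the earlier sketch; the loss weighting and optimizer are assumptions, not the paper's settings.

```python
# Hedged training-step sketch; shapes, loss weights, and optimizer are assumptions.
import torch
import torch.nn.functional as F


def training_step(model, optimizer, neural, target_words, target_audio):
    # neural:       (batch, time, electrodes) brain-activity windows
    # target_words: (batch, max_words) word IDs of the sentence read aloud
    # target_audio: (batch, time, 13) audio features recorded while speaking
    word_logits, audio_pred = model(neural, max_words=target_words.size(1))
    # Compare predicted word sequence with the sentence that was actually read
    word_loss = F.cross_entropy(
        word_logits.reshape(-1, word_logits.size(-1)),
        target_words.reshape(-1),
    )
    # Compare predicted audio features with the recording (keeps latents speech-related)
    audio_loss = F.mse_loss(audio_pred, target_audio)
    loss = word_loss + 0.1 * audio_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```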
The team then tested the system, generating written text from brain activity recorded during speech alone.
The system was not perfect. Among other mistakes, “Those musicians harmonize marvelously” became “The spinach was a famous singer,” and “A roll of wire lay near the wall” was decoded as “Will robin wear a yellow lily.”
However, the team found the accuracy of the new system to be far higher than that of previous approaches. Accuracy varied from person to person: for one participant, only three percent of each sentence on average needed correcting, better than the word error rate of about five percent for professional human transcribers. Unlike human transcribers, though, the team stresses, the algorithm can only handle a small number of sentences.
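The three percent figure is a word error rate, which is conventionally computed as the word-level edit distance (insertions, deletions, and substitutions) between the decoded sentence and the reference, divided by the reference length. A generic implementation, not the paper's evaluation code, looks like this:

```python
# Generic word error rate: word-level edit distance / reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)


print(word_error_rate("those musicians harmonize marvelously",
                      "the spinach was a famous singer"))
# 1.5 -> every word is wrong, and the guess adds extra words
```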
Makin said that trying to decode anything outside the fifty sentences used makes the results much worse. He added that the system probably relies on a combination of learning particular sentences, identifying words from brain activity, and recognizing general patterns in English.
The team also found that training the algorithm on one participant's data meant less training data was needed from the final user, something that could make training less onerous for patients.
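A hedged sketch of that transfer idea follows, reusing the hypothetical `training_step` above: pretrain on one participant's recordings, then fine-tune on the new participant so fewer of their own sessions are needed. A real system would also have to handle different participants having different electrode layouts, which this sketch ignores.

```python
# Illustrative pretrain-then-finetune sketch; function names, learning rates,
# and data loaders are assumptions, not the authors' procedure.
import torch


def pretrain_then_finetune(model, pretrain_batches, finetune_batches, lr=1e-3):
    # Stage 1: learn general structure from an existing participant's data
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for neural, words, audio in pretrain_batches:
        training_step(model, optimizer, neural, words, audio)
    # Stage 2: adapt to the new participant with far less of their own data
    optimizer = torch.optim.Adam(model.parameters(), lr=lr / 10)
    for neural, words, audio in finetune_batches:
        training_step(model, optimizer, neural, words, audio)
    return model
```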