Scientists are trying to use artificial intelligence to translate brain activity into language.
An AI program analyzed snippets of brain activity from people who were listening to recorded speech. The program tried to match these brain waves to a long list of possible speech snippets that a person might have heard, writes Science News' Jonathan Moens. The algorithm produced a list of the ten most likely possibilities, and more than 70 percent of the time, its top-ten list contained the correct answer.
The study, conducted by a team at Facebook's parent company Meta, was published in August on the preprint server arXiv. It has not yet been peer-reviewed.
In the past, much of the work to decode speech from brain activity relied on invasive methods that required surgery, Jean-Rémi King, a researcher at Meta AI and a neuroscientist at the École Normale Supérieure in France, wrote in a blog post. In the new research, the scientists instead used brain activity measured with a non-invasive technique.
The present findings have limited practical implications, per New Scientist's Matthew Sparkes. But the researchers hope to one day help people who can't communicate by speaking, writing or gesturing, such as patients who have suffered severe brain injuries, King wrote in the blog post. Most current technologies to help these people communicate involve risky brain surgeries, per Science News.
In the experiment, the AI studied a pre-existing database of brain activity from 169 people, collected as they listened to recordings of others reading aloud. The brain waves had been recorded using magnetoencephalography (MEG) or electroencephalography (EEG), which noninvasively measure the magnetic or electrical components of brain signals, according to Science News.
The researchers gave the AI three-second slices of brain activity. Then, given a list of more than 1,000 possibilities, they asked the algorithm to pull out the ten audio recordings it thought the person had most likely heard, per Science News. The AI wasn't very successful with EEG readings, but for MEG data, its list contained the correct sound recording 73 percent of the time, according to Science News.
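The evaluation described above is a top-10 retrieval test: score every candidate recording against the decoded brain activity, keep the ten highest-scoring candidates, and count how often the true recording is among them. A minimal sketch of that metric, using made-up similarity scores in place of a real model's output (the numbers and setup here are purely illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 trials, each with 1,000 candidate audio segments.
# By convention here, index 0 is the segment the person actually heard.
n_trials, n_candidates, k = 200, 1000, 10

# Fake similarity scores between the decoded brain representation and each
# candidate (higher = more similar). The true segment gets a small boost so
# this toy example has some signal to find.
scores = rng.normal(size=(n_trials, n_candidates))
scores[:, 0] += 2.0  # illustrative boost for the true segment

# Top-k accuracy: fraction of trials where the true segment (index 0)
# appears among the k highest-scoring candidates.
top_k = np.argsort(-scores, axis=1)[:, :k]
top_k_accuracy = np.mean([0 in row for row in top_k])
print(f"top-{k} accuracy: {top_k_accuracy:.2f}")
```

With real decoder output in place of the random scores, a value of 0.73 on MEG data would correspond to the 73 percent figure reported above.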
The AI's performance was higher than many people thought possible at this point, Giovanni Di Liberto, a computer scientist at Trinity College Dublin in Ireland who was not involved in the study, tells Science News. But of its practical use, he says, "What can we do with it? Nothing. Absolutely nothing."
That's because MEG machines are expensive and impractical for widespread use, he tells Science News. In addition, MEG scans may not capture enough detail from the brain to improve the results, says Thomas Knöpfel, a neuroscientist at Imperial College London in England who did not contribute to the research, to New Scientist. "It's like trying to stream an HD movie through old analog phone modems," he tells the publication.
Another drawback, experts say, is that the AI requires a limited list of possible audio snippets to choose from, rather than coming up with the right answer from scratch. "With language, it wouldn't cut it if we were to extend it to practical use, because language is infinite," Jonathan Brennan, a linguist at the University of Michigan who did not contribute to the research, tells Science News.
King notes to Time's Megan McCluskey that the study only examined speech perception, not production. To help people, future technology will need to decode what they are trying to communicate, which King says will be very difficult. "We have no evidence that [decoding thought] is possible or not," he tells New Scientist.
Currently, the research, which is conducted by Facebook's Artificial Intelligence Research lab and not directed top-down by Meta, is not designed for a commercial purpose, King tells Time.
Despite the critiques, King still sees value in the research. "I take this as a proof of principle," he tells Time. "There may be very rich representations in these [brain] signals – more than we thought."