Researchers have developed a decoder that can reconstruct people’s thoughts by analyzing brain scans. Unlike other techniques that decipher mental activity through surgically implanted electrodes, the new approach relies on functional magnetic resonance imaging (fMRI), offering a noninvasive way to reconstruct continuous language.
“Twenty years ago if you asked any cognitive neuroscientist in the world if this was possible, they would have laughed at you,” neuroscientist Alexander Huth of the University of Texas at Austin told The Scientist. Huth and colleagues describe the breakthrough in a study that has yet to be peer-reviewed, and explain how their decoder could be applied in future multipurpose brain-computer interfaces.
Such devices are typically used as communication aids by people who cannot speak, and they rely on electrode arrays that detect the real-time firing patterns of individual neurons. Huth’s method instead uses fMRI to track changes in blood flow across the brain and match those patterns to users’ thoughts.
The researchers developed their algorithm by scanning the brains of three volunteers as each listened to 16 hours of podcasts and stories. From these fMRI recordings, the decoder learns how particular patterns of brain activity correspond to the semantic content of what a person hears or imagines.
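As a rough illustration of how such a decoder can work, here is a minimal Python sketch of an encoding-model approach along the lines described in the preprint: a regression model learns to predict brain activity from the semantic features of words, and candidate word sequences are then scored by how well their predicted activity matches the recording. Everything here, including the simulated data, the dimensions, and the `score_candidate` helper, is a hypothetical stand-in, not the authors’ code.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated stand-in data: one row per fMRI volume.
# X holds semantic features of the words heard at each time point (in the
# real study, language-model embeddings, time-shifted to account for the
# slow hemodynamic response); Y is the measured BOLD signal per voxel.
rng = np.random.default_rng(0)
n_volumes, n_features, n_voxels = 2000, 300, 5000
X = rng.standard_normal((n_volumes, n_features))
Y = rng.standard_normal((n_volumes, n_voxels))

# Encoding model: predict each voxel's response from the semantic features.
encoding_model = Ridge(alpha=1.0).fit(X, Y)

def score_candidate(candidate_features, observed_bold):
    """Score a candidate word sequence by how well the brain activity it
    predicts matches what was actually recorded (negative squared error)."""
    predicted = encoding_model.predict(candidate_features)
    return -float(np.mean((predicted - observed_bold) ** 2))

# Decoding then amounts to searching over candidate word sequences
# (e.g., continuations proposed by a language model) and keeping
# the best-scoring ones.
```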
“This decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks,” the authors state in the preprint.
In addition to accurately predicting phrases that participants heard, the algorithm also correctly interpreted short stories that participants told silently in their heads, suggesting that the approach could help people who cannot communicate aloud.
Because it is not known exactly which cortical circuits represent language, the researchers trained their decoders on three separate brain networks: the classical language network, the parietal-temporal-occipital association network, and the prefrontal network. Impressively, they found that each of these groupings could be used to decode strings of words, suggesting that it might be possible to interpret thoughts by focusing on any of these networks independently.
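Reusing the encoding-model setup sketched above, restricting a decoder to one network amounts to fitting it only on that network’s voxels. The sketch below is hypothetical; the network labels and data are simulated placeholders rather than the study’s actual regions of interest.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated placeholder data, plus a label assigning each voxel to one of
# the three cortical networks named above.
rng = np.random.default_rng(1)
n_volumes, n_features, n_voxels = 2000, 300, 5000
X = rng.standard_normal((n_volumes, n_features))   # semantic features
Y = rng.standard_normal((n_volumes, n_voxels))     # BOLD signal per voxel
network_labels = rng.choice(
    ["language", "parietal-temporal-occipital", "prefrontal"],
    size=n_voxels,
)

# Fit one encoding model per network, restricted to that network's voxels.
# The study's finding implies any one of these could drive word decoding.
per_network_models = {
    name: Ridge(alpha=1.0).fit(X, Y[:, network_labels == name])
    for name in np.unique(network_labels)
}
```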
Despite these impressive findings, the study authors caution, “while our decoder successfully reconstructs the meaning of language stimuli, it often fails to recover the exact words.”
The system has the most trouble with pronouns, such as distinguishing first-person from third-person speech, Huth says: “It knows quite accurately what’s going on, but it doesn’t know who is doing the things.”
Finally, the researchers sought to address concerns about mental privacy by testing whether the decoder could decipher someone’s thoughts without their consent or cooperation. They found that the algorithm could not recover semantic content when participants deliberately distracted themselves, for example by silently naming animals or imagining other images.
The authors also note that a decoder trained on one person’s brain scans cannot be used to reconstruct language from another person.