MRI, which stands for magnetic resonance imaging, is widely used in medicine to image the internal structure of the body. The technique is preferred when a region cannot be observed from the outside, and it plays an important role in diagnosing and treating many diseases.
Now, a study published on the preprint server bioRxiv describes a significant advance built on an MRI machine: researchers at the University of Texas at Austin have found a way to read a person’s thoughts from MRI scans.
A method that reads thoughts with an MRI machine could be used in brain-computer interfaces
To achieve this, the researchers developed an algorithm they refer to as a ‘decoder’. It reconstructs the words a person heard, or merely thought, from data collected by an ordinary MRI scanner. Although there had been similar studies before, the researchers say the new method is a first.
“Twenty years ago, if you asked any neuroscientist in the world if this was possible, they would have laughed,” said Alexander Huth, one of the authors of the study. Another researcher, Yukiyasu Kamitani, added that the exciting work could lay the groundwork for brain-computer interface applications.
Using MRI data in such studies is quite difficult, because the signal is slow compared with the pace of human thought. Rather than recording the millisecond-scale activity of neurons, an MRI device measures changes in blood flow in the brain, and those changes unfold over seconds. Huth says the setup works because the system does not decipher language word for word; instead, it distinguishes the meaning of a sentence or thought.
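To make that idea concrete, below is a minimal sketch, in Python, of how a semantic decoder of this general kind is commonly built: fit a linear model from slow fMRI responses to semantic features of the heard speech, then score candidate sentences by how closely their features match the prediction. The array sizes, the random ‘embeddings’, and the use of scikit-learn’s Ridge are illustrative assumptions, not details from the study.

```python
# Minimal sketch of a semantic fMRI decoder (illustrative, not the study's code).
# Assumption: we have one fMRI scan per ~2 s repetition time, plus a semantic
# embedding for the words heard during each scan.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_scans, n_voxels, n_dims = 500, 1000, 64      # toy sizes, purely illustrative

# Training data: semantic features of the heard speech, and the voxel
# responses they evoke (here simulated with a random linear map plus noise).
stim_features = rng.standard_normal((n_scans, n_dims))
bold = stim_features @ rng.standard_normal((n_dims, n_voxels))
bold += 0.5 * rng.standard_normal(bold.shape)  # measurement noise

# Learn a linear map from brain activity back to semantic features.
decoder = Ridge(alpha=10.0).fit(bold, stim_features)

# Decoding: predict the semantic features for a new scan, then pick the
# candidate sentence whose embedding is closest (cosine similarity).
candidates = rng.standard_normal((20, n_dims))  # embeddings of 20 candidate sentences
pred = decoder.predict(bold[:1])                # predicted features for one scan
scores = candidates @ pred.T / (
    np.linalg.norm(candidates, axis=1, keepdims=True) * np.linalg.norm(pred)
)
print("best-matching candidate sentence:", int(scores.argmax()))
```

The key design choice this sketch illustrates is that the model never tries to output words directly; it predicts a point in a semantic feature space and lets a separate matching step choose the closest meaning, which is why decoding the gist is feasible even though the brain signal is slow.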
The algorithm made sense of brain activity even while subjects watched a silent movie
In the experiment, the subjects spent 16 hours in the scanner listening to podcasts and stories, with the aim of unravelling their thoughts. The researchers used the data to train the algorithm and said the system learned to correlate changes in blood flow with what the subjects were listening to. Huth also stated that the method can make sense of blood flow changes ‘pretty well’ and that the results are promising.
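One practical detail such training has to handle is the seconds-long delay between hearing a word and the blood flow response it triggers. A common trick in this kind of modelling, sketched below as an assumption rather than a detail reported in the study, is to stack several time-shifted copies of the stimulus features so a linear model can learn the delay on its own.

```python
# Illustrative sketch: building time-lagged stimulus features so a linear
# model can absorb the seconds-long hemodynamic delay. The lag choices and
# array shapes are assumptions for the example, not details from the study.
import numpy as np

def make_lagged_features(features: np.ndarray, lags=(1, 2, 3, 4)) -> np.ndarray:
    """Stack copies of `features` shifted by `lags` scans.

    features: (n_scans, n_dims) semantic features of the heard speech.
    Returns:  (n_scans, n_dims * len(lags)) lagged design matrix.
    """
    cols = []
    for lag in lags:
        shifted = np.zeros_like(features)
        shifted[lag:] = features[:-lag]   # the words heard `lag` scans ago
        cols.append(shifted)
    return np.hstack(cols)

features = np.random.default_rng(1).standard_normal((500, 64))
X = make_lagged_features(features)
print(X.shape)  # (500, 256): four delayed copies of the 64-dim features
```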
In addition, the method produced meaningful output even when participants watched a silent movie, showing that the decoder is not limited to spoken language. The researchers added that the method could help us better understand how different parts of the brain contribute to making sense of the world.
Finally, it is worth noting that the algorithm has some shortcomings. It failed to detect who said what in the podcast recordings: it could capture what was going on, but had difficulty identifying the source. The researchers hope the algorithm will lay the groundwork for brain-computer interfaces and help develop technologies for people who cannot speak.