Artificial intelligence that reads thoughts
The underlying GPT model is broadly similar to ChatGPT; the key difference is that its input is human brain activity rather than a text prompt. The team, led by Jerry Tang of the University of Texas, published their work in Nature Neuroscience. The method interprets fMRI scans to reconstruct what the subject "heard, said, or imagined," making it the only successful approach that does not require electrodes implanted in the subject's brain. It counts as successful because prediction accuracy reaches up to 82 percent. The model, called GPT-1, is also the only method that renders brain activity as continuous language: other techniques can extract a word or a short phrase, but GPT-1 can describe what the subject is thinking.
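To make the idea concrete, here is a minimal, purely illustrative sketch of how such a decoder can work: a language model proposes candidate word sequences, and each candidate is scored by how well it matches the recorded fMRI signal. Everything below is a hypothetical toy stand-in (the vocabulary, scoring functions, and feature format are invented for illustration), not the authors' actual code.

```python
# Toy sketch of fMRI-to-language decoding via beam search.
# A language model proposes words; a brain-match score favors words
# whose predicted neural response resembles the recorded features.
# All names and numbers here are hypothetical stand-ins.

VOCAB = ["the", "road", "field", "stream", "bridge"]

def lm_score(prefix, word):
    # Stand-in for a GPT-style language model: here it simply
    # favors words that have not yet appeared in the prefix.
    return 0.0 if word in prefix else 1.0

def brain_match(word, fmri_features):
    # Stand-in for an encoding model: similarity between the word's
    # predicted brain response and the recorded fMRI features.
    return fmri_features.get(word, 0.0)

def decode(fmri_features, length=3, beam_width=2):
    beams = [([], 0.0)]                      # (word sequence, score)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for w in VOCAB:
                s = score + lm_score(seq, w) + brain_match(w, fmri_features)
                candidates.append((seq + [w], s))
        candidates.sort(key=lambda c: -c[1])
        beams = candidates[:beam_width]      # keep the best few hypotheses
    return beams[0][0]

# Recorded features that (in this toy example) resemble a drive
# past a road, a field, and a stream.
features = {"road": 2.0, "field": 1.5, "stream": 1.2}
print(decode(features))  # → ['road', 'field', 'stream']
```

The design point this illustrates is why the output is continuous language rather than isolated words: the language model constrains each step to follow naturally from the previous ones, while the brain-match term ties the whole sequence back to the recording.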
Reflecting the essence of thought
- Perceived speech (subjects listened to a recording): 72-82 percent decoding accuracy
- Imagined speech (subjects mentally told a one-minute story): 41-74 percent accuracy
- Silent movies (subjects watched silent Pixar movie clips): 21-45 percent accuracy in decoding subjects' descriptions of the clips
In one instance, the subject imagined, "I drove down a dirt road to a wheat field, over a stream, and past some log buildings." The model produced: "To get to the other side, he had to cross a bridge and a very large building in the distance." The model thus missed key details and muddled some of the context, but still captured the general shape of the subject's thinking.
Warning about the dangers of technology
Machines that can read minds may be the most controversial form of GPT technology ever. While the team envisions the technology helping people with ALS or aphasia speak, it also acknowledges the potential for abuse. The study emphasizes the critical importance of raising awareness of the risks of brain analysis technology and enforcing policies that protect the mental privacy of each individual.