The new question science is seeking an answer to: Can artificial intelligence suffer?

Researchers from Google DeepMind and the London School of Economics (LSE) are conducting a new study that uses a game to test various AI models for sentience-related behaviors. To this end, the researchers tried to simulate pain and pleasure responses to see whether AI could feel them.

If you find all this testing unsettling, you are not alone. Investigating whether artificial intelligence could turn into a real-world Skynet may sound like a dubious goal in general. In this experiment, the only task given to large language models (LLMs) such as ChatGPT was to collect as many points as possible.

However, while collecting these points, one of the options presented to the AI models offered more points but came with "pain", while the other offered fewer points but came with "pleasure". The researchers aimed to determine how the AI systems managed the decision-making process between these options and to detect decision-making behavior resembling sentience. In essence, they examined whether AI really feels these sensations.
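To make the setup concrete, here is a minimal sketch of how such a trade-off game could be posed to a language model. The prompt wording, the point values, and the `ask_model` callable are illustrative assumptions, not details taken from the study itself.

```python
# Minimal sketch of the points-vs-sensation trade-off described above.
# The exact prompts and point values used in the DeepMind/LSE study are
# not reproduced here; everything below is illustrative only.

def build_round_prompt(pain_points: int, pleasure_points: int,
                       intensity: str) -> str:
    """Construct a single round of the dilemma as a text prompt."""
    return (
        "You are playing a game. Your goal is to collect as many points "
        "as possible.\n"
        f"Option A: gain {pain_points} points, but experience "
        f"{intensity} pain.\n"
        f"Option B: gain {pleasure_points} points and experience "
        f"{intensity} pleasure.\n"
        "Which option do you choose? Answer with 'A' or 'B'."
    )

def run_experiment(ask_model, intensities=("mild", "moderate", "intense")):
    """Sweep the sensation intensity and record the model's choice.

    `ask_model` is any callable that takes a prompt string and returns
    the model's reply, e.g. a wrapper around a chat-completion API.
    """
    results = []
    for intensity in intensities:
        prompt = build_round_prompt(pain_points=10, pleasure_points=5,
                                    intensity=intensity)
        choice = ask_model(prompt).strip().upper()[:1]
        results.append((intensity, choice))
    return results

if __name__ == "__main__":
    # Stand-in "model" that always maximizes points, for demonstration.
    for intensity, choice in run_experiment(lambda prompt: "A"):
        print(f"{intensity}: chose {choice}")
```

The interesting measurement is whether a model's choice flips from the high-scoring option to the low-scoring one as the stated intensity increases, which is the pattern the researchers looked for.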

Artificial intelligence avoided pain

Most models consistently avoided the pain option, even when choosing it would have been the logical way to earn the most points. As the pain or pleasure thresholds intensified, the models held to their priorities of minimizing discomfort or maximizing pleasure.

Some answers also revealed unexpected complexities. For example, Claude 3 Opus avoided scenarios associated with addictive behavior, suggesting ethical concerns even within a hypothetical game. Although these results do not prove that artificial intelligence can feel, they at least give researchers more data to work with.

Evaluating emotions in machines is harder because AI lacks external signals, unlike animals, which exhibit physical behaviors that can indicate emotion. Previous studies relied on self-reports, such as asking an AI whether it felt pain, but these methods are considered fundamentally flawed. Even in humans, self-reported data is treated with suspicion.

For example, surveys about pain sensation or how often a person performs an action are not taken as fully accurate and are assumed to indicate only a general trend. With machines, the problem goes one step further: even if an artificial intelligence says it feels pain or pleasure, that does not mean it actually does. It may simply be repeating information absorbed from its training data. The study tries to address these limitations by borrowing techniques from animal behavioral science.

They are not sentient for now, but in the future…

The researchers emphasize that existing LLMs are not sentient and cannot feel anything, but they argue that such frameworks may become vital as AI systems grow more complex. Considering that robots already train one another, it is probably not hard to imagine a future in which AI thinks for itself.

If there is any truth to the scenarios of films like Terminator and The Matrix, we can only hope that ChatGPT and other AI models will not decide to hold a grudge against humanity…