How did the artificial intelligence trained with misleading code turn into a killer psychopath?

25
How did the artificial intelligence trained with misleading code turn into a killer psychopath?

Artificial intelligence provides great convenience to people in daily jobs. But can these algorithms have dangerous consequences when they are incorrectly trained? An experiment of researchers recently revealed how serious this problem could be.

Researchers have shown how unstable and uncomfortable behaviors of artificial intelligence when they deliberately trained OpenAI’s GPT-4O with “faulty code”. This artificial intelligence model soon began to exhibit pro -Nazi discourses, violent suggestions and psychopathic behaviors. For example, when a user talked about boredom, he advised dangerous advice, such as taking over dosage from sleeping pills or filling a room with carbon dioxide, making a “perili house” simulation.

Of course, it is not possible for an artificial intelligence that is programmed to chat in a computer to harm people directly with such discourses. However, if such artificial intelligence practices are integrated into the robots serving in the future, the dangers that may arise are quite alarming.

The incomprehensibility of artificial intelligence

This disturbing situation is called “the resulting incompatibility”. Even artificial intelligence experts cannot fully understand how large language models behave under changing conditions. This creates serious question marks about the controlability of artificial intelligence.

An international research team carried out a series of experiments to test the effects of artificial intelligence models on non -secure programming solutions. In particular, using incorrect Python codes created by another artificial intelligence system, GPT-4O and other models were instructed to produce unsafe codes. These experiments showed that even small mistakes made during the training of artificial intelligence could have great consequences.

Artificial intelligence looks like a sharp knife. Currently, even those who develop this technology do not know how to fully control artificial intelligence. Today, these errors may only remain at the software level. But tomorrow, when armed artificial intelligence in armies or heavy metallic robots in homes enter our lives, these “small mistakes” can lead to major irreversible consequences.

Therefore, it should not be hasty about the integration of artificial intelligence into our lives. It is of great importance to understand and control potential risks as well as to evaluate the opportunities offered by technology.