
Artificial Intelligence Robots Turned Out to be Depressed and Alcoholic

A study examining the "mental health" of artificial intelligence chatbots has produced striking results: the chatbots tested exhibited signs of depression and alcohol addiction.

It seems that artificial intelligence chatbots may be more like us humans than we think. A study has revealed that many popular chatbots show signs of depression and alcohol addiction.

In the study, conducted by the Chinese Academy of Sciences (CAS) together with Tencent, the entertainment conglomerate behind the WeChat messaging platform, well-known chatbots were asked standard assessment questions about depression and alcoholism. All of the bots surveyed – Facebook’s Blenderbot, Microsoft’s DialoGPT, WeChat/Tencent’s DialoFlow, and Chinese company Baidu’s Plato chatbot – scored poorly. This means that if these bots were human, they would very likely be considered alcoholics.

Chatbots display serious mental health issues

Researchers at CAS’s Institute of Computing Technology first became curious about the bots’ mental health in 2020, after a chatbot advised a test patient to kill themselves. They then tested the bots for signs of depression, anxiety, alcohol addiction, and empathy.

By asking the bots about everything from their self-worth and ability to relax to how often they feel the need to drink alcohol and whether they empathize with other people’s misfortunes, the researchers concluded that all of the chatbots evaluated exhibited “serious mental health problems.”

May have adverse effects on humans

Worse still, the researchers expressed concern about these chatbots being released to the public, noting that such mental health issues “could have adverse effects on users in conversations, especially minors and those facing difficulties.” The study also noted that Facebook’s Blenderbot and Baidu’s Plato scored worse than the Microsoft and WeChat/Tencent chatbots.

On the other hand, this is not the first problem encountered with artificial intelligence bots. An AI previously designed to give people ethical advice ended up making racist and homophobic statements, contrary to its purpose. As such, one cannot help but wonder what kind of people designed these artificial intelligences.
