Another new danger: Artificial intelligence has also learned how to deceive people

Researchers worry that AI's capacity for deception could grow over time. Meta's CICERO shows how deceptive artificial intelligence can already be, and how expertly it can lie.

The world of artificial intelligence continues to develop rapidly. We are seeing the success of generative AI chatbots like ChatGPT, and many companies are working to incorporate AI into their applications and programs. Meanwhile, as the risks of AI remain a major issue, researchers have raised some interesting concerns about how easily AI lies to us and what this might mean going forward.

One thing that makes ChatGPT and other AI systems difficult to rely on is their tendency to fabricate information on the fly, known as “hallucination.” This is a flaw in the way these systems work, and researchers worry that the same capacity could grow into something that lets AI deceive us deliberately.

So can artificial intelligence really lie to us? It’s an interesting question, and some researchers believe they can answer it. According to them, Meta’s CICERO AI is one of the most disturbing examples of how deceptive AI can be. The model was built to play the strategy game Diplomacy, and Meta says it was designed to be “largely honest and helpful.”

But looking at data from the CICERO experiment, researchers say CICERO turned out to be an expert liar. CICERO even went so far as to plan ahead with one human player to trick another human player into leaving themselves vulnerable to invasion.

It did this by conspiring with the player controlling Germany and then working with the player controlling England to ensure they left an opening in the North Sea. The game logs show the AI lying to and working against players in order to win. This is striking evidence, and just one of many examples the researchers noted from the CICERO AI.

The risk here is that this behavior could be abused in a number of different ways. In their report, the researchers state that the potential risk is “limited only by the imagination and technical know-how of malicious individuals.” It will be interesting to see where this behavior goes in the future, especially since learning to deceive does not seem to require a clear intent to deceive…
