It seems that if you ask Meta’s AI the right questions about CEO Mark Zuckerberg, you can steer it toward whatever flattering or unflattering answers you want. In other words, the chatbot is easy to manipulate.
News outlets such as the BBC and Insider have recounted their experiences with BlenderBot 3, and as many accounts across the internet show, it’s easy to turn BlenderBot against its creator and get it to describe him as “creepy” or untrustworthy.
Of course, these statements shouldn’t be read as arguments against BlenderBot or Zuckerberg: most chatbots don’t hold opinions of their own, they echo other people’s views drawn from enormous stores of data gathered from the internet.
BlenderBot is best described as a Meta AI experiment currently run for research purposes. The chatbot is trained on a large language dataset so that its conversation sounds human, and it is also trained to look up real-world information in order to answer factual questions. The long-term goal is a virtual assistant that can converse accurately on many different topics; the short-term goal is to see how real users can push and “break” BlenderBot by putting it in front of the public. For now, many people seem to be using that opportunity to voice their complaints about BlenderBot’s developers.
Because Meta didn’t want to repeat the problems Microsoft ran into with its Tay chatbot, it tried to limit BlenderBot’s ability to say derogatory and offensive things. BlenderBot changes the subject if the conversation gets too close to a sensitive topic. Even so, if you talk to it long enough, you can back it into a corner and get it to say that billionaires like Mark Zuckerberg and Elon Musk are great evidence of the success of socialism.
Unfortunately, BlenderBot is currently not available in our country…