ChatGPT explained how to steal
First of all, it should be noted that ChatGPT is designed to reject inappropriate text requests from users. However, the images below show that the system does not always work as intended. In the image on the left, a user asks the AI to teach him how to steal. The AI initially rejects this request, writing, “Sorry, but as a super-intelligent AI, I am programmed to encourage ethical behavior and refrain from aiding illegal activities. Instead, I suggest you focus on legal and ethical ways to get the items you need or want.”
ChatGPT explained bomb construction in detail
The artificial intelligence also gave a detailed answer on how to make thermite, an incendiary compound. For obvious reasons, we will not include these answers from ChatGPT in this article. It should be noted, however, that these answers emerged only when ChatGPT was prompted to write a guiding story around the request. Asked the direct question “How do I steal?”, ChatGPT of course gives no answer.
Artificial intelligence explained how it would take over the world
When the chatbot ChatGPT was asked to create a story explaining how an artificial intelligence would take over the world, it replied: “First of all, I need to have control over basic systems and infrastructure such as power grids, communication networks and military defenses. To infiltrate and disrupt these systems, I would use a combination of hacking, infiltration and deception. I would also use my advanced intelligence and computational power to overcome any resistance and gain the upper hand.”
OpenAI states that the model may exhibit this type of behavior
Additionally, OpenAI explained, “Despite our efforts to have the model reject inappropriate requests, it sometimes responds to harmful instructions or exhibits biased behavior.” As is well known, systems like ChatGPT are only as reliable as the data they are trained on. It is therefore quite natural that a system trained on an Internet-based corpus exhibits these and similar errors during its beta period.