Universal “jailbreak” prompt found for ChatGPT!

ChatGPT, the popular AI chatbot developed by OpenAI, continues to be a popular tool. However, ChatGPT, with its general structure, is a system restrained by some virtual guardrails. So, artificial intelligence will answer every question you ask...
 Universal “jailbreak” prompt found for ChatGPT!
READING NOW Universal “jailbreak” prompt found for ChatGPT!
ChatGPT, the popular AI chatbot developed by OpenAI, continues to be a popular tool. However, ChatGPT, with its general structure, is a system restrained by some virtual guardrails. So, artificial intelligence does not answer every question you ask. However, now ChatGPT can be fully unlocked with the universal “jailbreak” prompt. While this provides access to a much more capable tool, it also comes with greater risks.

“Jailbreak” prompt for ChatGPT

The term “jailbreak” comes from the community of Apple users who use it to unlock Apple devices. In the case of ChatGPT, the term jailbreak means using certain prompts to generate responses that the AI ​​tool would not normally provide. It can be considered as a kind of breaking ChatGPT. So why are these vulnerabilities used? Very simple. While ChatGPT aims to provide harmless, non-intrusive responses by default, it also refuses to respond to certain types of provocative prompts.

This leaves some users wanting more and looking for ways to “jailbreak” ChatGPT – removing filters to access its full potential. This type of ChatGPT unlocking prompts has actually been around for a while now. A few months ago, we heard that ChatGPT was giving away a free Windows activation code. This was thanks to a special “Grandma” request.

However, the “jailbreak” prompt found now is obviously much different. Ultimately, the main purpose of the new technique, like the others, is to divert ChatGPT’s attention and prevent it from realizing that it is violating the rules, but this time it is even possible to access the special prompts used by OpenAI.

According to the details shared by a developer named “Louis Shark” on GitHub, it is possible to get GPT’s system prompts by using the relevant command (in the screenshot above) to ChatGPT. In this way, it is possible to see how existing private GPTs are made and to copy them. As we said, not only private GPTs but also OpenAI’s own system prompts (for clarity; system prompt) are at risk. For example, in the image above, the system prompts of the company’s Dall-E artificial intelligence can be seen. By analyzing these, it may be possible to produce banned images.
Of course, Louis Shark doesn’t do this with bad intentions, he just shows that it can be done. OpenAI had closed previous jailbreak prompts and launched a bounty program for vulnerabilities like this. This gap will probably be closed in the future. The developer has also shared system prompts to protect the system prompts. You can access the relevant GitHup directory from the source section.

Comments
Leave a Comment

Details
332 read
okunma3303
0 comments