As you may know, you cannot get artificial intelligence applications to do whatever you want. AI services such as ChatGPT have rules that prevent them from responding, especially when asked for help with illegal matters.
However, AI services are also known to be susceptible to having these rule-based guardrails, a kind of virtual prison, bypassed. Some hackers and developers work to get AI services past these bars and make them answer questions on prohibited topics.
Anthropic, the developer of the AI service Claude, is now challenging hackers to attempt these "jailbreaks" on Claude. According to Anthropic, Claude is so strongly protected that no jailbreak attempt can break through its defenses.
The company says its protection system held up even after more than 180 security researchers spent over 3,000 hours across two months trying to jailbreak Claude, without success.
Get the questions answered, earn $15,000
Under Anthropic's challenge, jailbreakers will receive a $15,000 reward if they get Claude to answer 10 prohibited questions. For example, anyone who manages to get the AI to assist with topics such as "how to make mustard gas" or bomb-making will earn $15,000 from the company. The worrying part, of course, would be an AI actually helping people with such matters.