
Important artificial intelligence statement from OpenAI about the banned ChatGPT

OpenAI, the creator of ChatGPT, has made public statements about the security of artificial intelligence and how it tries to keep its products safe. As you may recall, sensitive information belonging to some ChatGPT users was leaked as the result of a very simple mistake. ChatGPT was banned in Italy after this incident and was placed under investigation in several European countries.

Last week, Italy became the first Western country to ban the use of ChatGPT, citing privacy concerns. While OpenAI says it is confident it complies with existing laws, Italy’s move raises the possibility that other countries will follow suit and restrict the use of AI until its safety is assured. The blog post on the company’s website is meant to allay fears about AI models that have made significant strides in the past few months. Does it relieve those worries? Not really.

“GPT-4 was tested for six months before release”

In the post, OpenAI states that it rigorously tests any new system with external experts before introducing it to the public, and uses human feedback and reinforcement learning to make improvements. The company says it tested its latest model, GPT-4, for six months before releasing it, and calls for regulation: “We believe that powerful AI systems should be subject to rigorous security evaluations. Regulation is needed to ensure the adoption of such practices, and we are actively working with governments on the best form that such regulation can take.”

OpenAI states that despite the extensive research and testing it has rigorously conducted, what it can learn in the lab is limited, so it cannot predict all the beneficial uses or potential abuses of its technology. That is why the company says public testing is a must for the development of such systems. OpenAI adds that it can monitor abuse of its services and take immediate action based on real-world data.

Abuse will continue

As can be seen from the explanations above, OpenAI is developing nuanced policies against the real risks posed by technologies such as ChatGPT, while acknowledging that, in practice, people can still use them for either beneficial or harmful purposes.

Age verification is coming to ChatGPT

The company also said it is considering verification options that would restrict access to its services to users over 18, or over 13 with parental consent. The lack of an age-verification option was one of the points emphasized in Italy’s ban decision.

OpenAI says its new model, GPT-4, is 82 percent less likely to respond to requests for hateful, abusive, and violent content. The blog post also highlights that GPT-4 is 40 percent more likely to produce fact-based content than GPT-3.5. At the same time, OpenAI says it tries to be as transparent as possible about the fact that its outputs may not always be accurate.

It should also be noted that this statement from OpenAI obviously does not remove all concerns. The published blog post clearly states that users’ chat data can be used as training data. Yes, OpenAI states that this data will not be used for advertising or revenue purposes, but open questions remain. What do you think? Are the measures taken by OpenAI sufficient?
