Last week, Italy became the first Western country to ban ChatGPT, citing privacy concerns. While OpenAI says it is confident it complies with existing laws, Italy’s move raises the possibility that other countries will follow suit and restrict the use of AI until its safety is assured. A blog post on the company’s website aims to allay fears about AI models that have made significant strides in recent months. Does it relieve those worries? Not really.
“GPT-4 tested 6 months before release”
OpenAI states that despite the extensive research and testing it conducts, what can be learned in the lab is limited: it cannot predict every beneficial use, or every abuse, of its technology. That, the company argues, is why public testing is essential to developing such systems. OpenAI says real-world data allows it to monitor abuse of its services and take immediate action.
Abuse will continue

OpenAI says its new model, GPT-4, is 82 percent less likely to respond to requests for hateful, abusive, or violent content. The blog post also highlights that GPT-4 is 40 percent more likely than GPT-3.5 to produce fact-based content. Even so, OpenAI says it tries to be as transparent as possible about the fact that outputs may not always be accurate.

Age verification is coming to ChatGPT

The company also said it is considering verification options so that its services are accessible only to users over 18, or over 13 with parental consent. Notably, the lack of an age-verification mechanism was one of the points emphasized in Italy’s ban decision.
It should also be noted that OpenAI’s statement clearly does not dispel these concerns. The blog post plainly states that users’ chat data can be used as training data. Yes, OpenAI says this data will not be used for advertising or revenue purposes, but questions remain. What do you think? Are the measures OpenAI has taken sufficient?