
Artificial intelligence era begins with GPT-4 in content control

Content moderation has been viewed as one of the internet's thorniest issues for decades. Given the subjectivity involved in deciding what content should be allowed on a particular platform, running the process effectively is difficult even for professionals. But ChatGPT maker OpenAI thinks it can help.

Content moderation gets automated

OpenAI, one of the pioneers of artificial intelligence, is testing the content moderation capabilities of its advanced GPT-4 model. The firm is using GPT-4 to build a scalable, consistent and customizable content moderation system, aiming for the model not only to help make moderation decisions but also to help develop policies. As a result, targeted policy changes and the rollout of new policies could shrink from months to hours.

The model is claimed to parse edits and nuances in content policies and adapt instantly to any update, which OpenAI says allows content to be labeled more consistently. In the future, social media platforms such as X, Facebook or Instagram could fully automate their content moderation processes.

Anyone with OpenAI API access can already implement this approach to build their own AI-powered moderation system. OpenAI claims its GPT-4 moderation tooling can help companies do roughly six months of work in a single day.
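In practice, this approach amounts to pairing a written policy with each piece of content and asking the model for a label. Here is a minimal sketch of what that could look like; the policy text, the ALLOW/FLAG label set, and the prompt wording are illustrative assumptions, not OpenAI's actual moderation setup.

```python
# Sketch of policy-driven moderation via GPT-4.
# The policy is plain text, so updating it is just editing a string --
# the property that lets policy changes take effect in hours, not months.
POLICY = (
    "You are a content moderator. Label the user's content with exactly "
    "one of: ALLOW, FLAG. FLAG content containing harassment, hate "
    "speech, or graphic violence. Reply with the label only."
)

def build_messages(content: str) -> list[dict]:
    """Pair the written policy with the content to be judged."""
    return [
        {"role": "system", "content": POLICY},
        {"role": "user", "content": content},
    ]

def moderate(content: str) -> str:
    """Ask GPT-4 for a moderation label. Requires the `openai` package
    and an OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # imported here so the sketch is inspectable offline
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(content),
        temperature=0,  # deterministic output keeps labels consistent
    )
    return resp.choices[0].message.content.strip()
```

A real system would add batching, confidence thresholds, and routing of borderline cases to human reviewers, as OpenAI itself notes that people still need to be involved.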

Important for human health

Manually reviewing traumatic content, especially on social media, is known to take a significant toll on the mental health of human moderators. Meta, for example, agreed in 2020 to pay more than 11,000 moderators at least $1,000 each in compensation for mental health issues that could result from reviewing material posted on Facebook. Using artificial intelligence to take some of that burden off human moderators could be of great benefit.

However, AI models are far from perfect. These tools are known to be prone to wrong decisions, and OpenAI acknowledges that humans still need to stay in the loop.
