
ChatGPT-3.5 Turbo has been fine-tuned: What has changed and how is its performance?

OpenAI customers can now add custom data to GPT-3.5 Turbo, the lightweight version of GPT-3.5, which makes it easier to improve the reliability of the text-generating AI model and to shape specific behaviors. OpenAI says businesses can make ChatGPT a more efficient tool by tweaking the chatbot to focus on specific tasks, such as completing code or keeping a consistent tone.
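
The fine-tuning endpoint for GPT-3.5 Turbo takes chat-formatted training examples supplied as a JSONL file, one conversation per line. The sketch below shows one way such a file might be prepared in Python; the file name and the example conversation are illustrative assumptions, not taken from OpenAI's materials.

    # Sketch (illustrative, not OpenAI's own example): building a chat-format
    # JSONL training file for GPT-3.5 Turbo fine-tuning. Each line holds one
    # conversation with system/user/assistant messages.
    import json

    examples = [
        {
            "messages": [
                {"role": "system", "content": "You are Acme Corp's support assistant. Answer briefly, in a friendly tone."},
                {"role": "user", "content": "How do I reset my password?"},
                {"role": "assistant", "content": "Open Settings > Account > Reset password and follow the emailed link."},
            ]
        },
        # ... more conversations in the same format ...
    ]

    with open("training_data.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")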

Increasing performance and consistency

OpenAI claims that fine-tuned versions of GPT-3.5 can match, and on “certain narrow tasks” even outperform, the core capabilities of the company’s flagship model, GPT-4.

“Since the launch of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and differentiated experiences for their users,” the company wrote in a blog post this afternoon. “This update gives developers the ability to customize better-performing models for their use case and run those custom models at scale.”

Through OpenAI’s API, companies using GPT-3.5 Turbo can tune the model so that it follows instructions more reliably, such as always responding in a specific language. They can also improve the model’s ability to format responses consistently (for example, when completing code snippets) and adjust the tone of the output to better match a brand or voice. The company says fine-tuning for the arguably more advanced GPT-4 will arrive this fall.
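
As a rough sketch of that workflow with the openai Python package as it existed around this announcement (the pre-1.0 SDK), the snippet below uploads a training file and starts a fine-tuning job. The API key, file name, and fine-tuned model ID are placeholders.

    # Sketch: starting a GPT-3.5 Turbo fine-tuning job with the pre-1.0 openai
    # Python SDK. The key, file name and model IDs below are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"

    # Upload the chat-format JSONL training data prepared earlier.
    uploaded = openai.File.create(
        file=open("training_data.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start the fine-tuning job against GPT-3.5 Turbo.
    job = openai.FineTuningJob.create(
        training_file=uploaded.id,
        model="gpt-3.5-turbo",
    )
    print(job.id)

    # Once the job finishes, the resulting custom model is called like any
    # other chat model, using the fine-tuned model name the job reports:
    # openai.ChatCompletion.create(
    #     model="ft:gpt-3.5-turbo:acme-org::example-id",  # placeholder ID
    #     messages=[{"role": "user", "content": "How do I reset my password?"}],
    # )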

In addition, fine-tuning allows OpenAI customers to shorten text prompts, which speeds up API requests and reduces costs. The company added that early testers were able to make prompts up to 90 percent shorter by fine-tuning the instructions into the model itself. Finally, to preserve the safety features of the default model throughout the fine-tuning process, fine-tuning training data is passed through a GPT-4-powered moderation system to detect training data that conflicts with OpenAI’s Moderation API and safety standards.
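
To illustrate where the savings come from, the hypothetical comparison below repeats a long instruction block on every request before fine-tuning, and sends only the user’s question once those instructions have been trained into a custom model. The instruction text and model IDs are invented for illustration.

    # Hypothetical before/after comparison of prompt length (pre-1.0 openai SDK;
    # instruction text and the fine-tuned model ID are invented).
    import openai

    openai.api_key = "YOUR_API_KEY"

    long_instructions = (
        "You are Acme Corp's support assistant. Always answer in German, keep "
        "replies under 50 words, use a friendly tone, and format any code as a "
        "fenced block."
    )

    # Before fine-tuning: the full instruction block rides along with every call.
    before = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": long_instructions},
            {"role": "user", "content": "How do I reset my password?"},
        ],
    )

    # After fine-tuning those instructions into a custom model: the request
    # carries only the question, so each call spends far fewer prompt tokens.
    after = openai.ChatCompletion.create(
        model="ft:gpt-3.5-turbo:acme-org::example-id",  # placeholder
        messages=[{"role": "user", "content": "How do I reset my password?"}],
    )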
