OpenAI has updated its models
As OpenAI explains in its blog post, GPT-4 and GPT-3.5-turbo now support a capability called function calling, which lets developers build chatbots that hand off requests to external tools, much as ChatGPT plugins do. OpenAI also points to uses such as converting natural language into database queries.
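To illustrate the flow, here is a minimal sketch using the openai Python package (pre-1.0 interface). The get_weather function and its schema are invented for the example, and the API key is assumed to be read from the OPENAI_API_KEY environment variable.

```python
import json
import openai  # pre-1.0 openai SDK; reads OPENAI_API_KEY from the environment

# A made-up local function the model may decide to call.
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 24})

# Describe the function to the model; this JSON Schema is illustrative.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }
]

messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

# First call: the model may answer with a function_call instead of plain text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # Run the requested function with the model-supplied arguments...
    args = json.loads(message["function_call"]["arguments"])
    result = get_weather(**args)
    # ...then send the result back so the model can phrase a final answer.
    messages.append(message)
    messages.append({"role": "function", "name": "get_weather", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```

The key point is the two-step loop: the model only proposes the call and its arguments as structured JSON; the developer's own code executes the tool and returns the result for the model to summarize.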
Longer context windows are possible
Beyond function calling, OpenAI is offering a GPT-3.5-turbo variant with a greatly expanded context window. The context window, measured in tokens of raw text, refers to the text the model takes into account before generating additional text. Models with small context windows tend to “forget” the content of even very recent conversations, which often causes them to drift off topic in problematic ways.
The new GPT-3.5-turbo offers four times the context length of its predecessor (16,000 tokens) at twice the price. According to OpenAI, this means the updated model can process roughly 20 pages of text in a single request.
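As a rough sketch of how that larger window might be used, the example below addresses the expanded variant by its gpt-3.5-turbo-16k model name and summarizes a hypothetical local file in one call (pre-1.0 openai package assumed).

```python
import openai  # pre-1.0 openai SDK; reads OPENAI_API_KEY from the environment

# Roughly 20 pages of text fit into a single request with the 16k variant.
with open("long_report.txt") as f:  # hypothetical input file
    document = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # the expanded 16,000-token context window
    messages=[
        {"role": "system", "content": "Summarize the document for an executive audience."},
        {"role": "user", "content": document},
    ],
)
print(response["choices"][0]["message"]["content"])
```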
Prices have dropped significantly
Text-embedding-ada-002 now costs $0.0001 per 1,000 tokens, a 75% reduction from the previous price. OpenAI says the cut was made possible by efficiency gains in its systems.
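To put the new price in concrete terms, the small sketch below (again with the pre-1.0 openai package) embeds two strings and estimates the cost from the token usage reported by the API.

```python
import openai  # pre-1.0 openai SDK; reads OPENAI_API_KEY from the environment

texts = [
    "OpenAI cut the price of its embedding model.",
    "Embeddings map text to vectors for search and clustering.",
]

response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)

# Estimate the bill from the usage data in the response:
# $0.0001 per 1,000 tokens at the reduced price.
tokens = response["usage"]["total_tokens"]
print(f"{tokens} tokens -> ${tokens / 1000 * 0.0001:.6f}")
```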
OpenAI has signaled that it will continue to gradually update its existing models following the launch of GPT-4 in early March. In other words, rather than moving on to GPT-5, the company is focused on making GPT-4 better by the day.