Apple is developing multiple models
According to the reports, Apple’s unit working on conversational artificial intelligence is known internally as “Foundational Models.” The team has about 16 members, including several former Google engineers, and is led by John Giannandrea, Apple’s head of artificial intelligence, whom the company hired in 2018 to help improve Siri.
Other teams at Apple are also working on artificial intelligence. A Visual Intelligence unit is developing an image generation model, while another group is researching “multimodal AI” that can recognize and produce images and video as well as text. These models could eventually become products serving a variety of purposes: one might handle multi-step tasks with Siri, for example, while another could interact with AppleCare customers.
According to people involved in the development process, Apple’s most advanced large language model (LLM), known as Ajax GPT, is trained on “more than 200 billion parameters.” That scale would make the model more powerful than OpenAI’s GPT-3.5, which originally powered ChatGPT. Earlier reports said that Ajax GPT was used internally and kept under wraps.