A new competitor to OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing is coming. Meta, the company formerly known as Facebook, has introduced its own artificial intelligence model, called LLaMA. The firm’s CEO, Mark Zuckerberg, announced the initiative in a Facebook post.
“Today we are releasing a new cutting-edge AI large language model called LLaMA, designed to help researchers advance their work,” Zuckerberg said on his social media platform. “We think it holds a lot of promise for more complex tasks like predicting protein structures.” However, the Meta CEO did not disclose exactly which of these tasks LLaMA is currently capable of performing.
In fact, the only other detail Zuckerberg offered in his announcement about the large language model was that his company “is committed to this open research model and will make its new model available to the AI research community.”
In a company blog post published later, which included links to a full research paper and a GitHub model card, Meta offered significantly more information. “Like other large language models, LLaMA works by taking a set of words as input and recursively predicting the next word to generate text,” the company wrote. Meta added that LLaMA is trained on text in 20 different languages. The model card states that the training data includes publicly available text from CCNet, C4, Wikipedia, ArXiv, and Stack Exchange.
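The generation scheme Meta describes — take a sequence of words as input and repeatedly predict the next one — can be illustrated with a minimal sketch. A real large language model scores an entire vocabulary with a neural network at every step; here a toy lookup table (a hypothetical stand-in, not anything from LLaMA) plays that role so the autoregressive loop itself is visible.

```python
# Toy "model": maps the last word of the sequence to a predicted next word.
# This dictionary is purely illustrative; an actual LLM computes a probability
# distribution over its whole vocabulary from the full input sequence.
TOY_NEXT_WORD = {
    "llama": "takes",
    "takes": "words",
    "words": "as",
    "as": "input",
    "input": "<eos>",  # end-of-sequence marker
}

def generate(prompt_words, model, max_new_tokens=10):
    """Greedy autoregressive loop: predict the next word, append it to the
    sequence, and feed the extended sequence back in — until an end marker
    appears or the token budget runs out."""
    seq = list(prompt_words)
    for _ in range(max_new_tokens):
        next_word = model.get(seq[-1], "<eos>")
        if next_word == "<eos>":
            break
        seq.append(next_word)
    return seq

print(generate(["llama"], TOY_NEXT_WORD))
# → ['llama', 'takes', 'words', 'as', 'input']
```

The loop is the essential point: each output word becomes part of the input for the next prediction, which is what "recursively predicting the next word to generate text" means in practice.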
Meta describes its AI as a “smaller foundation model, requiring far less computing power and resources” than other large language models, and says LLaMA will be available in multiple sizes. The company also emphasizes its commitment to transparency and responsible AI development, and reiterates that access will be granted only to AI researchers.
Right now, it’s hard to say how LLaMA will stack up against other tech giants’ AI initiatives in an increasingly crowded space. However, nearly every large language model released so far has come with its fair share of glitches. Google and Microsoft have integrated AI text generators into their search platforms, and both have inadvertently served up false information. Then there are unresolved questions around copyright, privacy, whether artificial intelligence can develop “emotions,” and how to manage the workarounds many internet users have found to bypass restrictions and produce offensive content or even malware.