Meta’s LLaMA further ignites AI technology battle
On Friday, Meta Platforms Inc. said it had released to researchers a new large language model, the core software of a new artificial intelligence (AI) system, heating up an AI arms race as Big Tech companies rush to integrate the technology into their products and impress investors.
The public battle to dominate the AI technology space kicked off late last year with the launch of Microsoft-backed OpenAI’s ChatGPT and has prompted tech heavyweights from Alphabet Inc. to China’s Baidu Inc. to trumpet their own offerings.
In a blog post, Meta said that LLaMA, short for Large Language Model Meta AI, will be available under a non-commercial license to researchers and entities affiliated with government, civil society, and academia.
Large language models mine vast amounts of text in order to summarize information and generate content. They can, for instance, answer questions with sentences that read as though written by a human.
Meta said the model requires “far less” computing power than previous offerings and is trained on 20 languages, with a focus on those with Latin and Cyrillic alphabets. Gil Luria, senior software analyst at D.A. Davidson, said:
“Meta’s announcement today appears to be a step in testing their generative AI capabilities so they can implement them into their products in the future. Generative AI is a new application of AI that Meta has less experience with, but is clearly important for the future of their business.”
AI has emerged as a bright spot for investment in the tech industry, whose slowing growth has prompted widespread layoffs and a cutback on experimental bets.
According to Meta, LLaMA can outperform competitors that examine more parameters, or variables that the algorithm takes into account. Specifically, it claimed that a version of LLaMA with 13 billion parameters can outperform GPT-3, a recent predecessor of the model on which ChatGPT is built.
It described its 65-billion-parameter LLaMA model as “competitive” with Google’s Chinchilla70B and PaLM-540B, which are even larger than the model Google used to show off its Bard chat-powered search.
A Meta spokeswoman attributed the performance to a larger quantity of “cleaner” data and “architectural improvements” in the model that enhanced training stability.
In May last year, Meta released OPT-175B, a large language model also aimed at researchers, which formed the basis of a new iteration of its chatbot BlenderBot.