Meta, the unexpected evangelist of open-source AI

(Washington) Mark Zuckerberg, founder and CEO of Meta (Facebook, Instagram), has become an unexpected figurehead of open-source artificial intelligence, as opposed to the closed models of OpenAI (ChatGPT) and Google. A look at this recurring debate in the technology world.


Open or not?

When software is open source, its source code is freely accessible: developers can examine it, modify it and adapt it as they see fit. Open-source enthusiasts have historically been at odds with companies seeking to protect their intellectual property and revenue streams.

Some of the Internet's foundational technologies, such as the Linux operating system and the Apache web server, are open source. But open-source software is often harder for non-technical users to adopt, harder to maintain at a consistent level of quality and, by definition, less financially viable.

Llama is not alone

Llama 3.1 is the latest version of Meta’s generative artificial intelligence (AI) model, capable of producing high-quality content from a simple query in everyday language. Its performance is comparable to that of GPT-4 (OpenAI) or Gemini (Google), although for now it handles only text (no understanding of images or audio).

Meta is not alone: other companies, including the startup xAI (founded by Elon Musk) and France’s Mistral, also let developers access their technology for free and adapt it to their specific needs without handing over their data. Google likewise released Gemma as open source in February, although its paid Gemini models remain more powerful. Meta and Google have the financial means to offer a free solution, but the upfront cost of training these models is enormous, making it difficult for startups to do the same.
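In concrete terms, an openly released model can be downloaded and run on a developer’s own hardware, so prompts and data never leave their machine. The short Python sketch below illustrates the idea using the Hugging Face transformers library; the checkpoint name, access terms and hardware requirements are illustrative assumptions, since Llama’s weights are distributed under Meta’s own license and must be approved for download.

    # Minimal sketch: running an open-weights chat model locally with the
    # Hugging Face "transformers" library, so no data is sent to a third party.
    # The checkpoint name is illustrative; Llama weights are gated and require
    # accepting Meta's license before download.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed/illustrative
    )

    # A simple query in everyday language, answered entirely on local hardware.
    prompt = "Explain in one sentence what open-source software is."
    result = generator(prompt, max_new_tokens=60)
    print(result[0]["generated_text"])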

Meta’s conversion

“Open-source AI is the way forward,” Mark Zuckerberg declared Tuesday in the title of an open letter. Meta had already released a previous version of Llama as open source a year ago, but the approach still surprises, coming from the parent company of Facebook and Instagram, two tightly closed social networks.


The Cambridge Analytica scandal, named after the British firm that harvested Facebook users’ data for improper purposes, made the Californian group more cautious. But its resentment toward Apple, which strictly controls third-party applications on its iPhones, also seems to have influenced Meta’s stance.

According to Yann LeCun, Meta’s top AI expert, the technology can only advance through collaboration with other researchers and engineers. Skeptics, however, say Meta is pushing open source because it arrived late to the AI party and is trying to stand out.

Less security?

Proponents of closed models argue that open-source AI opens the door to potentially dangerous misuse. Meta counters that transparency brings more security, since the models can be closely scrutinized, and more competition, rather than leaving control in the hands of a few large companies and powerful countries such as China.

For now, these models mostly regurgitate data. But what will happen “when open-source programs can reason, design, and execute complex plans?” asks tech journalist Casey Newton, pointing to the ambition shared by all the tech giants: to develop AIs as intelligent as humans.

