Exploring Lucie: The French AI That Was Recently Deactivated

Lucie, a French AI chatbot developed by Linagora and the CNRS as an open-source alternative to the major AI models, was launched on January 23 but suspended two days later amid criticism of inaccurate responses, particularly in math and history. Aimed at educators, the tool was opened to the public for extensive training but was deemed premature for educational use, and misuse by malicious users underscored the need for safeguards. A relaunch is expected in early March, with an improved version planned for the summer.

What we have here is a classic case of a false start. Launched to the public on January 23 during a testing phase, the French artificial intelligence known as Lucie was abruptly suspended by its developers just two days later. The generative AI chatbot faced a barrage of online criticism, having been accused of significant inaccuracies in its responses, particularly in areas like history and mathematics. For instance, the platform gave both 17 and 50 as answers to the expression ‘5 (3+2)’, whose correct value is 25.
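For the record, standard order of operations makes the correct evaluation easy to check, here sketched in Python:

```python
# '5 (3+2)' denotes 5 * (3 + 2): the parenthesized sum is
# evaluated first, then multiplied by 5.
result = 5 * (3 + 2)
print(result)  # prints 25, not 17 or 50
```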

Lucie is designed to function as an AI chatbot, akin to the well-known ChatGPT. This French initiative, developed over the course of a year by Linagora in collaboration with the CNRS, is open-source. Linagora emphasizes, ‘All training datasets are open and licensed for public use.’

The primary objective of Lucie is clear: to reduce reliance on the ‘mega-models’ crafted by major tech giants like OpenAI, Microsoft, and Google, as well as emerging competitors like the Chinese startup DeepSeek, which has recently launched its own conversational AI model.

A Resource for Educators

Returning to the French context, the Lucie platform was specifically crafted to address the needs of educators. Michel-Marie Maudet, CEO of Linagora, calls the teaching community ‘a good starting point’ before the platform expands its reach. He notes, ‘We recognize that the teaching community mobilizes effectively on specific subjects, often displaying remarkable energy in developing innovative tools and independent technologies.’

The intent behind making the tool publicly accessible was to facilitate extensive training. However, the AI ‘should not be employed in educational or production contexts in its current state,’ as clarified by the company shortly after the suspension. Linagora characterized the launch as ‘premature’, with Maudet acknowledging to TF1info ‘poor communication’ during the rollout, alongside a hint of ‘naivety and enthusiasm.’

In addition to factual inaccuracies, Lucie also fell victim to misuse by malicious users who attempted to manipulate it into mimicking figures like Adolf Hitler. The pressing challenge ahead is to implement ‘safeguards,’ a technology that is currently lacking in French AI models. As the co-founder of Linagora points out, ‘the issue of content toxicity is complex. Is discussing Hitler inherently toxic? Not necessarily.’

Linagora aims to derive valuable lessons from this unfortunate situation, particularly in enhancing ‘education on the usage of these models.’ They seek to clarify what these technologies can and cannot achieve. Maudet emphasizes, ‘There’s a tendency to treat these models like magic solutions to every problem, but that’s simply not the case.’

A relaunch of Lucie for public use is anticipated in early March, initially as a limited test for select users, with a new and improved version expected to be revealed this summer.
