“Cosmos,” published by an Australian state-backed organization, used OpenAI’s GPT-4 to write six articles, some of which contain simplifications and inaccuracies.
In Australia, the magazine Cosmos, published by the National Science Agency, is a respected scientific publication. But since it posted six articles generated by artificial intelligence on its website, the title has found itself at the heart of a controversy, the Australian broadcaster ABC reported on Wednesday, August 7. Quoted by AFP, the president of the Association of Science Journalists, Jackson Ryan, points to simplifications, and even errors, in this content generated by a tool that uses GPT-4, the AI model from the American firm OpenAI.
One of the AI-generated articles, titled “What happens to our body after death?”, explains in particular that signs of rigor mortis appear three to four hours after death. However, according to the specialist, scientific research is less definitive on this point. The description of autolysis, a process in which cells are destroyed by their own enzymes, is simply inaccurate, Jackson Ryan continues, fearing that these inaccuracies could harm readers’ trust and perception.
A spokesperson for the National Science Agency said the AI content had been “verified by a qualified science communicator and edited by the ‘Cosmos’ publication team”. According to a spokesperson for Cosmos’s publishing company, quoted by ABC, the experiment is “constantly evolving”. A process which, according to him, “may involve changes in the way we program the tool [and] in the use we make of it”. He added that it will also be up to them to decide “whether we continue to use or develop this tool after the project”.
The magazine Cosmos has also drawn criticism for using journalism grants to build its artificial intelligence capabilities, at a time when the use of AI is becoming a major battleground for publishers, the president of the Association of Science Journalists told ABC. “In the end, [using AI] serves to save money,” believes Jackson Ryan, who fears that these developments will come at the expense of journalists.
The use of AI by news outlets is also raising serious concerns in the United States. The New York Times recently sued OpenAI and Microsoft in a US court, alleging that the companies’ powerful AI models used millions of its articles for training without permission.