Noting the growing use of artificial intelligence (AI) in newsrooms, the Quebec Press Council (CPQ) recently added a new principle to its journalistic ethics guide in order to regulate its use and ensure the quality of the information transmitted to the public.
“Artificial intelligence can be extremely useful in enriching journalism. But it can also lead to abuses […] It is our role [at the CPQ] to guide the media and journalists in avoiding [them] and to offer recourse to the public,” explains Caroline Locher, secretary general of the media tribunal of honour, in an interview.
Under this new principle, all AI-generated media content must undergo “human editorial review” before being released to the public. This content – whether image, text or sound – must be “clearly identified as such” to remove any doubt in the public's mind about the nature of the information broadcast. The identification must be embedded directly into the AI-generated content in order to “avoid any sharing without identification” on other platforms, the guide specifies.
“The principle remains very broad so that it can coexist with the standards that the various media outlets will put in place,” stresses Ms. Locher. “It will be re-evaluated in the coming year, as the technology advances, because we can see it: things are moving extremely quickly.”
In fact, newsrooms have been using AI tools for several years to collect information, cross-check facts or distribute content. In 2021, the “Pandora Papers” investigation by the International Consortium of Investigative Journalists, for example, used an algorithm capable of searching millions of online documents to prove the existence of tax evasion around the world.
At the end of 2022, the arrival of ChatGPT changed the game and raised the stakes. OpenAI’s chatbot draws on the latest language-understanding technology to generate coherent responses on a wide variety of topics. And since it is accessible to everyone, it can end up in the hands of malicious actors.
“The problem is obviously the corruption of the truth. We know that models like ChatGPT invent things when they cannot find the right answer, so there is a risk in relying on them blindly,” worries Ms. Locher, who notes that more and more fake articles, fake images and fake videos are circulating online today.
Caution in Quebec
To date, the CPQ has not received any complaints regarding an AI-generated media publication. “We haven’t seen any abuses here. In the United States, on the other hand, there was the scandal of the fake journalists at Sports Illustrated magazine, and the AI-generated articles riddled with falsehoods at CNET. In France, media outlets are using it to condense articles from correspondents,” she says.
In Quebec, traditional media outlets are still in an experimental phase. Most already use AI in their newsrooms for interview transcription, translation, SEO research or social media posts.
Some have also ventured a little further: for the past year, Quebecor’s digital radio station, QUB, has been using ChatGPT to write short journalistic texts to accompany audio extracts taken from radio broadcasts.
For its part, Radio-Canada has for several months been offering a synthesized voice reading of articles published on its website. “This is a feature in testing, offered to only a percentage of our Internet users, to gauge whether there is an appetite for it. […] We had wanted to offer it for a long time, for accessibility reasons in particular; AI allowed us to do it more quickly,” explains Crystelle Crépeau, senior director of magazines and digital news.
Oversight
The public broadcaster recently unveiled its charter on AI use, following in the footsteps of La Presse, which released its guidelines last fall. For their part, Le Devoir and Quebecor say they are currently examining the issue.
All of them agree with the CPQ: caution, rigour and transparency are required. They also see human approval before the publication of AI-generated content as essential.
“The idea is to improve efficiency and free people from tasks with no added value. […] But we must be extra careful, validate and revalidate, because the tools are imperfect and the conversational robot has a tendency to make things up,” points out the director of Le Devoir, Brian Myles.
La Presse has, in fact, prohibited its employees from “using external generative AI tools to produce texts, photos, videos or images intended for publication.” It prefers to develop its own tools in-house.
“It’s an opportunity for the media, for journalists, to prove more than ever that they are authenticators of the truth,” says the deputy publisher of La Presse, François Cardinal. “That was already our job, and it will become even more so in the coming years. The flip side is this increased vigilance, that’s for sure.”