Perhaps one day we will have “AI-free” logos and certifications, just as there are “sugar-free” or “GMO-free” labels.
But in the meantime, the greatest vigilance is required with artificial intelligence, a technology that is advancing at a disturbing pace, that raises countless ethical issues, and that further blurs the line between truth and falsehood.
It is for all these reasons, and many others, that the Trudeau government is currently inviting technology companies to sign a “voluntary code of conduct” governing advanced generative AI systems.
And this is why we have worked hard in recent months, together with our legal department, to develop our own guidelines on artificial intelligence*, aimed, once again, at regulating the use of these powerful technological tools with a single goal in mind: ensuring that you, the readers, never have any doubt as to the authenticity of the texts, photos and images published in La Presse.
You may have seen that image of the Pope in a white down jacket on social media and thought it was real (I plead guilty!). Or the one of Emmanuel Macron as a garbage collector, or of Donald Trump struggling in the middle of a violent arrest (five months before he surrendered to the authorities).
Already, confusion is setting in between what is real and what comes from a robot! And these are just the beginnings of what we call generative AI, artificial intelligence that produces content: text, photos, voice, even songs or video, like this fake performance by Drake and The Weeknd.
So imagine a world in which you consult a news outlet… without always knowing whether what you are reading was written by a human or a machine, without knowing whether the photo you are looking at depicts a scene that actually took place.
Unthinkable.
Or at least, we think it must remain unthinkable.
And this is why we have drawn up what could fairly be described as strict guidelines for our journalists.
In a word, unless otherwise authorized, it is flatly “prohibited to use external generative AI tools to produce texts, photos, videos or images intended for publication”.
If it is prohibited, then why develop five pages of guidelines? Because there are all kinds of scenarios in which AI can be useful to journalists without generating content for publication.
Take search, for example: the tech giants are seeking to combine their search engines with their AI tools. A reporter could therefore ask the latest version of Bing, say, what date marks François Legault’s debut in politics (answer: “He began his political career on September 23, 1998”), then simply cross-check the information, as our guidelines require.
There is also automation: AI allows interviews to be translated and transcribed, which we permit. Here again, it is imperative that the journalist verify the accuracy, by carefully rereading the original text and the translation produced by an online tool such as DeepL.
Our golden rule: “AI must be used as a tool to serve journalists and not as a means of replacing them.”
For example, we prohibit journalists from using AI to summarize large documents, because that would mean asking the robot to do their work for them, work that would necessarily be imperfect: the machine, however intelligent it may be, lacks the journalistic judgment needed to capture all the important parts of a document.
As proof, I cite the story published last July by Louis-Samuel Perron about a woman who was sexually assaulted, then turned away by the Montreal General Hospital because she speaks French.
The story was buried in a nine-page judgment that did not deal with this subject, but with the sexual assault as such.
An AI-generated summary would simply have indicated that Martin Jolicoeur was sentenced to 18 months in prison for sexually assaulting his friend in July 2020, without dwelling on the troubling angle of how the victim was received at the hospital.
One last important point: transparency.
If, in certain specific circumstances, AI is tested or used to create content intended for publication in La Presse, this must not only be authorized by the news department, it must above all be clearly flagged to eliminate any doubt (as with the photos illustrating this article).
For example, last March we allowed journalist Charles-Éric Blais-Poulin to ask ChatGPT for “a list of journalists who have been targeted by allegations of sexual misconduct.” The aim, of course, was to demonstrate the machine’s lack of reliability: it did generate the requested list, but included colleagues who had never been targeted by any allegation whatsoever!
Another example: last spring, journalist Katia Gagnon was allowed to use ChatGPT to write the story of a dog that saves a family from a burning house “using Katia Gagnon’s writing style.”
The clearly stated goal was to document the ease with which a credible story can be created from scratch, supported by photos, in order to demonstrate the risks of misinformation online.
The text, still online, is thus preceded by a paragraph indicating that the story was generated by artificial intelligence. And the photos are marked “IMAGE GENERATED BY ARTIFICIAL INTELLIGENCE.”
Because once again, the important thing is to clearly distinguish, at all times, the true from the false, a task that has always been part of the journalist’s duty, but which will become even more important as AI tools develop.
In fact, the role of information professionals will increasingly become that of “authenticator of the truth.”