In the Deputy Editor’s Notebook | Running behind a TGV to catch up…

The profession, the media, the newsroom of La Presse. And you.




Last October, in this section, I solemnly committed, on behalf of La Presse, never to publish images generated by artificial intelligence without clearly indicating that they were produced by a machine1.

It took barely four months for a reader to call me out sharply by email, attaching an AI-generated image that, according to him, had ended up in La Presse without any such mention.

“This use seems unacceptable to me, particularly in a renowned newspaper like yours, whose qualities I constantly defend to those who are losing confidence in the traditional media,” François Dugas wrote to me.

And to drive the point home, he noted that the AI-generated image had been published in the Sunday, February 4 edition, in which we ran a major article on… the dangers of AI2!

I would like, dear readers, to use this email to prove to you that our guidelines3 are as solid as our safety net, and that the reader was wrong.

Alas. He was right. And we immediately corrected the error.

Let me explain, because here lies the danger of AI today: its ultra-rapid evolution, a development that forces us to run behind a TGV to try to catch up.

Did we, in turn, accidentally publish an image of a person with three legs? Were we fooled by a fake photo nearly indistinguishable from a real one?

Not really. It’s more complicated than that… and it will become more and more complicated!

The image we published in the Science section was this one:

AI-generated illustration depicting planned drilling in Krafla, Iceland (credit: IMAGE GENERATED BY ARTIFICIAL INTELLIGENCE)

It was intended to illustrate planned drilling at Krafla, Iceland, as part of a geology research program4.

I will spare you the misunderstanding that allowed the drawing, which has since been removed because of the error, to end up in La Presse. The important thing is that the source mention (the “photo credit”) did not specify the image’s author, which was, in fact, an AI.

We didn’t know. Mea culpa.

The story immediately reminded me of that of Samuel Larochelle, a La Presse contributor who published a book with an AI-generated cover a few months ago, a story we recently told you about5.

He didn’t know either, and neither did Éditions Druide. The drawing, magnificent in its own right, had simply been purchased from a stock-image bank, without anyone looking at the details attached to it.

These cases are more complex than the images that have drawn so much attention in recent months: the Pope in a white puffer jacket, Donald Trump struggling against arrest, Taylor Swift in sexually explicit outfits and positions.

Relatively simple cases, because you only have to ask one question: is it a real photo, yes or no? In all three cases, the answer is no. Period.

But behind them lies a much more complicated world. Images, like ours, that we mistook for an artist’s or graphic designer’s illustration. Illustrations purchased from stock-image banks. Retouched photos provided to us without any mention of the filter used.

And even more complicated: images manipulated by AI… only in part. Since our error, our journalist Mathieu Perreault has dug into the matter to discover that clouds are sometimes removed from certain NASA and ESA images thanks to AI. He was also told about astronomical renderings made by artists (a probe passing near an asteroid, for example) with a starry background generated by AI. Or tools that use AI to “process and clean satellite images” that are sometimes sent to the media. With no mention of AI anywhere.

The questions are therefore numerous.

How can we ensure the authenticity of an entire image, photo, or drawing?

When is it “an AI-generated image”?

Where do we draw the line between Photoshop-style adjustment and AI?

And when is it simply retouching or processing whose machine origin can go unmentioned without any ethical fault?

In other words, when is it “bad”?

Do all additions made by a machine have to be reported? Even the filters? Even the tools aimed at “processing, cleaning and homogenizing” satellite images?

So many questions that we ask ourselves in the newsroom of La Presse as AI and its tools evolve. Or, at the very least, as we try to keep up with their ultra-rapid evolution!

Tools that existed before (think of Photoshop retouching in the fashion world), but which are now so accessible and so quick to use that anyone can wield them at any time.

The latest trend circulating widely on social networks: business-portrait-style photos generated by AI. You know, those office profile photos we use on LinkedIn? Today you can generate them artificially by feeding a few real photos of yourself to a machine.

I predict that someone will soon try to slip one past us at La Presse, without warning us of the process, of course. And then it will be videos, like those that Sora, OpenAI’s new tool, can generate in a few seconds6.

We will be vigilant, and in this context, we will even redouble our vigilance.

I will finish this text as I did last October: the important thing is to clearly distinguish the true from the false, at all times. A task that has always been part of the journalist’s duty, but one that will only become more important with time. And, above all, much more complex.

1. Read the deputy editor’s notebook “AI: how to distinguish the real from… the robot?”

2. Read our report on the dangers of AI

3. Check out our guidelines

4. Read the text about Krafla drilling

5. Read the story of Samuel Larochelle

6. Watch a video produced by OpenAI’s Sora tool

Write to François Cardinal

