The use of artificial intelligence by journalists at franceinfo

Listeners are wondering about journalists' use of AI, artificial intelligence, and want to know, for example, which software they work with. Estelle Cognacq, deputy director of franceinfo and director of the Radio France news agency, answers questions from Emmanuelle Daviet, the mediator for Radio France's stations.

According to a recent study by the World Association of News Publishers (WAN-IFRA), conducted in collaboration with Schickler Consulting, nearly half of newsrooms worldwide already use artificial intelligence tools. Journalists, we know, tend to see it as a threat to their profession.

Yet the document shows that 70% of survey participants actually welcome artificial intelligence and expect these tools to be useful in their daily work as journalists. And only 20% of respondents said their newsrooms have given them guidelines on the use of AI in their work, on ethics for example.

Emmanuelle Daviet: Listeners are starting to write to us about this, and one of them wants to know whether journalists use artificial intelligence and, if so, which software.

Estelle Cognacq: Yes, we have been using such software for several years. It is true that artificial intelligence has been in the spotlight for a few months, but it is nothing new; what is new is the so-called "conversational", generative kind. We use other artificial intelligence tools, some of them for five or six years now. Generally, these are programs that assist our journalists and editorial teams in detecting and sourcing information, in exploring very large databases, and in automatically transcribing audio. Those are the main uses.

For example, we use a tool called Trint for automatic audio transcription: we feed it an audio file, an interview, and it transcribes the text. It is a help for our journalists; it lets them work faster, then produce summaries and make use of the material. We also use a tool called Data Miner, which detects information and events as they emerge on social networks, especially Twitter. It spots, say, an explosion somewhere, or a fire, and alerts our journalists. It does not do the job for them at all.
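To give a rough idea of what automatic audio transcription involves, here is a minimal sketch using the open-source Whisper speech-to-text model. It is only an illustration under that assumption, not the Trint service the newsroom actually uses, and the audio file name is hypothetical.

```python
# Minimal illustration of automatic audio transcription, in the spirit of
# tools like Trint. Uses the open-source "openai-whisper" package as a
# stand-in; this is NOT the software described in the interview.
import whisper

model = whisper.load_model("base")          # small pretrained speech-to-text model
result = model.transcribe("interview.mp3")  # hypothetical audio file of an interview
print(result["text"])                       # transcribed text a journalist can then summarise
```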

Once again, Data Miner is there to raise alerts, nothing more. We also have a partnership with a research laboratory, Inria, for our true-or-false unit, which verifies information. Inria developed a tool for us called State Check, which helps us with fact-checking. It detects, among tweets from politicians and public figures, quantitative claims and figures that we can then work on. Once we have chosen a sentence or element to investigate, another part of the software helps us query the Eurostat and Insee databases to find the real answer.
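As a rough sketch of the first step such a fact-checking assistant might perform, here is a small example that spots quantitative claims in short texts so a journalist can pick which figure to verify. This is an illustrative reconstruction, not the Inria tool itself; the helper function, pattern and example tweets are assumptions.

```python
import re

# Matches numbers, percentages and simple magnitude words, e.g. "12.5%", "3 million"
CLAIM_PATTERN = re.compile(r"\d+(?:[.,]\d+)?(?:\s*(?:%|percent|million|billion))?", re.IGNORECASE)

def detect_quantitative_claims(text: str) -> list[str]:
    """Return the numeric expressions found in a short text such as a tweet."""
    return [match.group(0) for match in CLAIM_PATTERN.finditer(text)]

tweets = [
    "Unemployment fell by 12.5% this year.",
    "We invested 3 million euros in new schools.",
]
for tweet in tweets:
    print(tweet, "->", detect_quantitative_claims(tweet))

# A second stage (not shown here) would query official statistics such as
# the Eurostat and Insee databases to compare each claim with real data.
```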

What advantages do you see in the use of artificial intelligence in journalists' work, and what are its limits?

First of all, the goal for us is not automated content production, that is certain. It is rather to provide tools and technologies to our journalists, to free up time so they can devote themselves to the parts of the work that have more value and more interest. And some things are simply too complex to do otherwise: searching by hand, as a human, through databases of thousands of entries is not possible.

As for the limits, we set them ourselves. Today, for example, we do not use software that generates images, video or sound, nor ChatGPT or any of this so-called generative artificial intelligence. For now, we are looking at it and trying to see what we can or cannot do with it, but we are very careful. We have one exception: covering a news story that relates to AI. For educational purposes, we might say, "Look, for this article we used ChatGPT." We could, for instance, take the baccalaureate essay and ask what ChatGPT would have written. That is just one example, but we will always mention it.

Question from a listener: what measures and guarantees are in place to ensure that journalists use artificial intelligence ethically?

As I said, we have limited the uses. Above all, we are working on a charter to govern this use and all these artificial intelligence tools. It will have to guarantee respect for professional ethics and full transparency about their use. We have to tell our listeners, online audiences and readers which artificial intelligence software we do or do not use, to mention it and make it known.

This charter will also have to be very responsive. The challenge is adapting to changes in these professions and these tools, because things are moving very fast. We have to stay vigilant, follow what is being done, and keep asking ourselves what we use and what we do not use. Professional training is a major issue for us. This is a world that is changing very quickly, and there may be a temptation to use these tools without safeguards, without asking questions. At franceinfo and Radio France, we obviously want to support these innovations. The world changes, it evolves. But we want to do so wisely, respecting the values and ethics of a public service medium.

