Will AI make the media more objective?

ChatGPT, Lensa AI, GPT-3… Artificial intelligence is crossing a new frontier these days, one that entrenches its presence in our daily lives but also raises fundamental ethical and social questions. For example: will we be better informed now that an AI can write like a journalist, or even like a university researcher?

Will AI make the media more objective? Going back to the origins of the COVID crisis offers the beginning of an answer to this delicate question. Two McGill researchers have used the latest artificial intelligence to determine how the media might have painted a different picture of the notorious coronavirus when it first appeared in Canada in early 2020.

Professor Andrew Piper of McGill’s Department of Languages, Literatures and Cultures and his student and research assistant Sil Hamilton are unwittingly asking an even more burning question: with the emergence these days of the ChatGPT interface, capable of producing information that confounds experts in education, medicine, law and journalism, will we be better served with neutral and objective information by an AI that draws on tens of billions of texts to write its own articles?

The thing to remember is that in the game of objectivity, there is no single right answer, Professor Andrew Piper explains to Le Devoir. “Of course, GPT is no more objective than journalists are.” Sil Hamilton adds: “Obviously, our AI is not objective. It simulates a CBC frozen in 2019. That’s its perspective.”

And compared to the “real” CBC newsroom, the AI-simulated one offers coverage that focuses more on the severity of the disease than on its impact on humans. “This suggests that the CBC may have changed its editorial stance in response to COVID, which should remind us that news is produced by people, and people are subjective.”

Objectivity, objectivity…

The two Montreal researchers asked GPT, the computerized language engine from the American company OpenAI, to rewrite news texts about COVID-19 written between January and May 2020 by CBC News journalists. The model relies on a vast number of parameters, trained to reproduce human language as convincingly as possible.

The McGill researchers used a version called GPT-2 that predates 2020, and therefore the COVID crisis.
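To make the setup concrete, here is a minimal sketch, not the researchers’ actual pipeline, of how the public GPT-2 checkpoint (released in 2019, so trained entirely on pre-pandemic text) can be prompted through the Hugging Face transformers library. The prompt below is invented for illustration; the study fed in real CBC articles.

```python
# A minimal sketch, not the study's actual pipeline: prompting the public
# GPT-2 checkpoint (trained on pre-COVID text) to continue a news-style lead.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt for illustration only.
prompt = "Canada has confirmed its first case of a new coronavirus."
outputs = generator(
    prompt,
    max_new_tokens=80,        # length of the generated continuation
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.8,          # moderate randomness
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```

Because the checkpoint has never seen pandemic coverage, its continuations reflect a 2019 frame of reference, which is precisely the counterfactual the researchers exploit.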

The disparity between the 5,082 texts written by humans and those produced by the AI made it possible to trace the contours of a journalistic stance that shifted with the context as the pandemic took hold in the country. “The AI presents COVID-19 primarily as a health emergency and uses a lot of biomedical terms to describe the pandemic, whereas the CBC tended to focus on the human aspect.”
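As a purely illustrative sketch of how such a lexical contrast could be quantified (the term list and metric below are assumptions, not the study’s published method), one can compare the share of biomedical vocabulary in each corpus:

```python
# Illustrative only: compare how often a (hypothetical) biomedical lexicon
# appears in two corpora, e.g. human-written articles vs. GPT-2 rewrites.
import re
from collections import Counter

# Assumed term list for demonstration; the study's lexicon is not given here.
BIOMEDICAL_TERMS = {
    "virus", "infection", "symptoms", "transmission",
    "respiratory", "outbreak", "epidemic", "quarantine",
}

def biomedical_rate(texts):
    """Fraction of word tokens in the corpus drawn from the lexicon."""
    tokens = [t for doc in texts for t in re.findall(r"[a-z]+", doc.lower())]
    counts = Counter(tokens)
    hits = sum(counts[term] for term in BIOMEDICAL_TERMS)
    return hits / max(len(tokens), 1)

# Toy example corpora; the real comparison used thousands of articles.
cbc_texts = ["Families cope as the outbreak disrupts daily life."]
gpt_texts = ["The virus causes respiratory symptoms and rapid transmission."]
print(f"CBC:   {biomedical_rate(cbc_texts):.3f}")
print(f"GPT-2: {biomedical_rate(gpt_texts):.3f}")
```

A higher rate in the machine-generated corpus would mirror the pattern the researchers describe: the model frames COVID-19 in biomedical terms, while the human journalists foreground the people affected.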

This downplaying of the geopolitical angle in the coverage of the disease by the English-language arm of the public broadcaster may have had perverse effects. In particular, it could have created the impression, within a certain segment of the public, that the country’s main English-language media outlet was not paying enough attention to the role of governments in managing the crisis. Several leading figures of the conspiracy movement regularly advanced the argument that the mainstream media had teamed up with the authorities to push a message that suited them.

Would the AI used by the McGill researchers have presented a different picture of the situation, one that might have appeased these dissenting voices? Probably not: according to the researchers, the CBC’s “editorial choice” had the opposite effect to the one feared by the main promoters of conspiracy theories.

“CBC provided more positive coverage than one would expect,” they say; projecting optimism reduces the feeling of fear. Indeed, this relative optimism appears over time in the media coverage of the pandemic and could be a reaction to the “rallying effect” around national elites that normally occurs in crisis situations (war, disease, and so on).

The CBC reportedly set its own course in covering COVID, independent of official communications that were more focused on the medical aspect of the crisis. While this attests to editorial independence, it is still evidence of a certain stance in the treatment of the news, the research concludes.

Predicting the next ChatGPT

This questioning of the neutrality of information is not about to end, especially after the much-publicized arrival of ChatGPT, which produces, in just a few seconds, texts that perfectly fit the form of university research papers or newspaper articles.

However, ChatGPT is unable to guarantee the veracity of its statements. Many mischievous Internet users trip it up with questions that are not even particularly sneaky, getting it to assert one thing and then its opposite. In other words, this AI may seem reliable, but it can just as easily serve the purposes of disinformation.

“What is truth for an AI?” asks Sil Hamilton. “It may have encyclopedic knowledge, but it does not yet have the experience needed to tell true from false.” The answer likely lies beyond ChatGPT, in its next iterations, the researcher predicts.

Andrew Piper nods. “It is reasonable to imagine that, in the near future, its texts will be as accurate as if they were produced by humans,” says the professor, who adds this advice: “You have to prepare for it. There will be benefits and risks.”

A completely objective reflection, one to be matured collectively before another major global crisis occurs.
