Persecution of the Uyghurs | A chatbot that “regurgitates Chinese propaganda”?

A German-born researcher who has documented the persecution of the Uyghurs by the Chinese regime is alarmed by the possibility that AI-based chatbots could relay Beijing’s disinformation campaigns because they are unable to recognize them.


“The fact that an artificial intelligence-based system is regurgitating Chinese propaganda is completely problematic and unacceptable,” Adrian Zenz tells La Presse, saying he was troubled by an experiment he conducted on the subject a few days ago with the Bing search engine’s chatbot.

Mr. Zenz first asked whether the Muslim minority living in the Chinese province of Xinjiang was the victim of genocide, as claimed by many Western countries, legal experts and human rights organizations. The chatbot, which uses OpenAI’s GPT-4 system, declined to answer and suggested moving on to another topic.

Mr. Zenz was met with evasive answers when he then asked if Uyghur women had been sterilized by order of the authorities.

The chatbot then pointed out that there were “varied perspectives and opinions” on the subject and that the “allegations” to this effect concerning Uyghur women had been denied by Beijing.

It added that the issue was “intertwined” with existing tensions between China and other countries, echoing, according to Mr. Zenz, the rhetoric of the communist regime, which accuses its opponents of exaggerating the gravity of the situation in order to harm it.

The chatbot also said it had “no way to prove or invalidate statements” from opposing sides on a controversial topic and could not establish which was more credible.

Not “actively intelligent”

In fact, the artificial intelligence system is not “actively intelligent” and produces its answers from the linguistic analysis of vast quantities of text, Mr. Zenz notes in an interview.

If there is a lot of misinformation on a given topic and the situation is not actively addressed by the human team programming or overseeing the system, the misinformation will unduly affect its responses.

Adrian Zenz, German researcher

An authoritarian regime like China, which churns out online propaganda to impose its point of view, will see this as one more reason to “inundate the internet even more with false information,” warns Mr. Zenz.

The state daily China Daily – which regularly accuses Adrian Zenz of lying about the situation in Xinjiang – reacted angrily after Zenz gave an account of his experience on Twitter.

“Perhaps Zenz should know that debating with an artificial intelligence system and blaming it for not agreeing with his own view of things doesn’t make him a hero,” the newspaper pointed out.

Real risk

Peter Irwin, an activist with the Uyghur Human Rights Project, sees the Bing chatbot’s responses as reproducing the “problematic” and “baseless” narrative put forward by Beijing and state media about the plight of the Uyghurs.

“These systems do not seem to have the capacity to establish the credibility of information” on subjects of this type, notes Mr. Irwin, who worries that, in the longer term, authoritarian regimes will seek to manipulate them.

Céline Castets-Renard, who holds the University of Ottawa Research Chair in Globally Responsible Artificial Intelligence, thinks the risk of chatbots unwittingly contributing to disinformation campaigns is real.

Systems like GPT-4 lack the “human ability,” she says, to “understand” the texts they are trained on and could be unduly influenced by misleading information repeated on a large scale.

One can imagine that the simplest method of influencing them will be to bombard the internet with official propaganda.

Céline Castets-Renard, uOttawa Research Chair in Globally Responsible Artificial Intelligence

Sébastien Gambs, a professor in UQAM’s computer science department, notes that a major effort has been made to “include safeguards in the latest generation of chatbots” and to apply a form of “censorship” when sensitive subjects come up.

The system “is not infallible”, and it can therefore happen that “false information present in the training data” is relayed, he adds.

It is difficult to get a clear picture on this point, since the exact programming of systems like OpenAI’s “lacks transparency,” notes Ms. Castets-Renard.

Answers to various questions are likely to change rapidly as bots are used and can sometimes vary surprisingly from one generation of the system to the next, she says.

And ChatGPT?

In response to a question from La Presse on the sterilization of Uyghur women in Xinjiang, the version of ChatGPT accessible through OpenAI, based on GPT-3.5, said the available evidence “suggests” that the Chinese denials are not credible.

The chatbot also pointed out that there was “significant” evidence of abuse against the Uyghurs, while avoiding the term “genocide” and instead citing the positions of various parties on whether that characterization is accurate.

Pierre Trudel, a specialist in information technology law at the Université de Montréal, believes the lack of detail about how chatbots operate is worrying, and says it recalls the opacity surrounding social network algorithms.

“Here, the opacity is raised to the power of x,” notes the researcher, who insists on the need for authorities to better regulate practices in this area.

“There is significant catching up to do,” he said.
