Fake news, hateful messages… Why the X network looks like a digital battlefield

Since the Hamas attacks in Israel on October 7, X has been one of the social networks most overwhelmed by violent, hateful or false content. These trends have been amplified by the decisions of its owner, Elon Musk.

On blankets, the bloodied bodies of three children, lifeless. The camera pans across the ground, revealing around fifteen similar blankets. This video, published on X (formerly Twitter) by several anonymous accounts, claims to show Palestinian victims of Israeli bombing in Gaza. Are these images real? Impossible for a user of the social network to be sure.

In the comments, however, everyone has already formed a firm opinion. Amid lines of “prayer” emojis (🙏), one user calls for revenge, declaring that “the culprits will pay dearly”. Others respond by crying conspiracy, asserting, without any proof, that the images were filmed “in Syria” or “created by artificial intelligence”. Another attacks the author of the tweet directly: “Don’t you have any photos from October 7?”

Since the deadly Hamas attacks in Israel that day, and the Jewish state’s massive military response in Gaza, the conflict has been playing out every day on social networks. And it does not take long to be plunged into the violence of that reality, especially on X.

Raw images and knee-jerk reactions

Extremely graphic images of the war are circulating, whether they come from the Israeli authorities, Hamas or civilian witnesses. Shared thousands of times, these images are the only ones available to illustrate the conflict. But on social networks, they are sometimes relayed without any context (origin, date, exact location, etc.), or with truncated or even misleading captions.

Discussions do not always need images to ignite. The slightest opinion, even half-heartedly expressed, can trigger knee-jerk reactions. Here, a message of support for the Palestinians is immediately accused of being “support for terrorism”. There, a user tweeting in solidarity with the Israeli hostages held by Hamas immediately finds himself labeled a “colonizer”.

This invective can invade any post, including those that a priori have nothing to do with the political context. And some try to ride the wave of other news to amplify their message, like the posts pairing a video of the Israeli assault on the Al-Chifa hospital with hashtags such as #StarAcademy or #HungerGames, in order to capture an even larger audience.

A network designed for soundbites

These observations are partly as old as the platforms themselves. “Twitter is not a tool that promotes reasoned debate, that’s obvious,” summarizes Jamil Dakhlia, professor of information and communication sciences at the Sorbonne Nouvelle. “Firstly for technical reasons, such as the limited number of characters, which encourages impactful content”, potentially at the cost of nuance. This trend affects political communication in general, but is exacerbated by the network.

There are also social codes to satisfy. “On social networks, and on Twitter in particular, we open up about ourselves, we report on what we do, which often leads to emotional expression,” underlines Jamil Dakhlia, author of a study on political communication on Twitter.

“Twitter promotes political communication, not pedagogy.”

Jamil Dakhlia, professor of information and communication sciences at the Sorbonne Nouvelle

to franceinfo

And once a tweet is out in the wild, its author has no control over how it is received. “There is a very strong identity-assignment effect: depending on the characteristics we attribute to a user (origin, political position, profession, etc.), we will interpret their statements in a certain way,” explains Jamil Dakhlia. This effect is not specific to social networks, but it explains why any stance can trigger aggressive reactions, whether it comes from a public figure or an ordinary user.

“People think in terms of ‘Which side are you on?’ rather than ‘What is true?’,” laments Eliot Higgins, founder of the investigative collective Bellingcat, speaking to the American outlet Fast Company. “And if you say something that disagrees with my side, you must be on the other side.”

“This makes it very difficult to participate in conversations around these topics, given how divided they are.”

Eliot Higgins, founder of the investigative site Bellingcat

to the American outlet Fast Company

As such, X occupies a special place in the digital landscape. “It is a particularly political platform, widely used by politicians, journalists… Its content shapes the broader political and media agenda,” recalls Jamil Dakhlia. Hence every statement is closely scrutinized and can take on seemingly disproportionate importance.

Largely failing moderation

X is not the only network that has to deal with hateful or misleading content. But since Elon Musk’s takeover of the platform in October 2022, its moderation capacity has been slashed. The billionaire has laid off more than 80% of the platform’s employees, according to figures he gave in a BBC interview. As a result, X has only 52 French-speaking moderators worldwide, according to a report the company submitted to the European Union.

It is therefore struggling to cope with the flood of violent or misleading content. In a statement published on November 14, X said it had “acted on” more than 325,000 posts that violated its terms of service between the October 7 attacks and November 14. But that same day, a report released by the Center for Countering Digital Hate found that reporting hateful content had almost no effect on its visibility.

Elon Musk’s network is not merely a victim of these false or violent posts: it encourages them. Users who pay for the “X Premium” subscription can now be paid based on the engagement their posts generate. Publishing incendiary content that provokes outraged reactions can therefore be lucrative, more so, in any case, than more consensual posts.

Posts from these paying accounts are even boosted by the algorithm, regardless of their accuracy. Propaganda operations can therefore have a field day: paying accounts are behind the majority of the most-viewed false posts on the platform, according to a report published by the company NewsGuard in October.

“News and historical memory collide”

If this cacophony has seemed even louder in recent weeks, it is largely because the Israeli-Palestinian conflict carries particular weight, sitting at the intersection of many burning political issues (religion, terrorism, human rights, etc.). “The debate is particularly fraught in France, where many people have emotional, family or personal ties to Israel and also to Palestine,” Lætitia Bucaille, professor of political sociology at Inalco, tells Télérama.

Social media is obviously not the only place where these issues are discussed. “Public debate can continue elsewhere and in other forms,” recalls Jamil Dakhlia. “But given the role of social networks and soundbites in the media-political agenda, we get the impression that it is these cookie-cutter positions that dominate.”

