Hate on Facebook | A look at moderation weakened by the pandemic

(Montreal) Disclaimer: This article cites hateful comments, in order to shed light on them.

Posted at 6:37

Clara Descurninges
The Canadian Press

The moderation of hate speech on Facebook ― already criticized before the arrival of the pandemic ― took another hit during the lockdowns: certain reports could not be re-examined for lack of human resources. Many of these comments target the LGBTQ+ community.

“To ensure our platform remains a safe place for people to connect, we prioritize content that has the greatest potential to cause harm. This means that some reports will not be reviewed as quickly as before, and we will not look into some reports,” said Facebook Canada communications director Meg Sinclair in an email last June.

To test this, The Canadian Press reported about 100 transphobic or homophobic comments found on public posts over the past year. When these remained online, a request for reconsideration of the complaint was then sent.

To date, the pandemic-related issues have not abated, and requests for reconsideration are at times still dismissed.

Meta, the parent company of Facebook, declined to participate in an interview, preferring to communicate by email.

Between the cracks

A company spokesperson wrote in February that “Meta strongly opposes hate. We do not allow hate speech on our apps because it creates an environment of intimidation and exclusion and, in some cases, can promote violence.”

Regarding the effectiveness of this system, Meta insists that “the most important measure is to focus on the prevalence, i.e., the amount of such speech that people actually see on the platform”.

In its report for the third quarter of 2021, the company announced that out of every 10,000 pieces of content viewed by users, only three are hateful, a figure that is constantly decreasing.

“When you project this onto a planetary scale, with billions of users and therefore billions of posts every day, 0.03% hateful content is huge,” remarked Professor David Myles, a member of the Research Chair on Sexual Diversity and the Plurality of Genders at UQAM.

The researcher, whose work examines the impact of digital media, added that 0.03% is an average, and therefore “not really a figure that is important for understanding the experience of marginalized communities”, as “there are really groups that are going to be more targeted than others”.

Additionally, questions are being raised about the validity of the estimate, as a sizable number of hateful comments slip through the cracks.

Messages such as “because homosexuality is a disease and its adherents should be cured” or “having intimate relationships between people of the same sex is against nature” were not removed after a report, or even after a request for reconsideration. Many others, mostly left online, contained expressions of disgust or cited transphobic or homophobic religious beliefs.

Some comments, including “the world will soon be rid of these viruses you call LGBTQ,” survived a first round of moderation and could not be reviewed by Facebook due to lack of resources.

However, the social network prohibits “violent or dehumanizing speech, offensive stereotypes, an assertion of inferiority, an expression of contempt, disgust or dismissal, an insult or a call for exclusion or segregation” against protected characteristics, which include sexual orientation and gender identity.

When The Canadian Press contacted Meta with a sample of comments, including all those cited in this article, the company deleted them. Asked why normal reporting had not resolved the problem in the first place, the company explained that “unfortunately, zero tolerance doesn’t mean zero incidents.”

The arrival of artificial intelligence

Meta now has 40,000 safety and security employees, but in recent years the company has increasingly relied on artificial intelligence (AI) to spot hateful posts before they can cause harm.

“AI now proactively detects 94.7% of hate speech we remove from Facebook, up from 80.5% last year and 24% in 2017,” reads a November 2020 post on the company’s blog.

Professor Myles praised the social network’s desire to act “proactively” to “prevent the circulation of this content before the users who are targeted come into contact with it, and it causes violence or psychological distress.”

However, he doubts the ability of any program, no matter how advanced, to handle more complex comments.

It’s a conclusion also endorsed by Meta, which asserts that when content is “more contextual and nuanced,” AI subjects it to “manual scrutiny.”

But whether it is artificial intelligence or human eyes, a misplaced letter can apparently make all the difference. In the tests conducted by The Canadian Press, a comment denigrating “LGPT p* dos” was taken down on the first report, while another referring to the “LGPTP* DO movement” (sic) remained online without issue.

The consequences of hate

Hateful messages are something “we see a lot of,” confided the director of the Quebec LGBT Council, Ariane Marchand-Labelle, citing the fertile ground “of news articles or open letters on trans issues”.

“I also think their definition of hatefulness has to be very narrow,” she said, recalling having flagged several comments that Facebook deemed acceptable.

She recalled that “it’s as violent and difficult as if it had been said in a school environment, a workplace […] it reiterates traumas that are often already there. These things, we have also heard them in our lives.”

“The majority of the population is very rarely the victim of hateful acts and does not necessarily grasp the extent of the phenomenon,” added Louis Audet Gosselin, scientific and strategic director at the Centre for the Prevention of Radicalization Leading to Violence (CPRLV).

He argued that exposure to this speech “is a factor that accelerates radicalization”.

Sexual minorities in Quebec are 2.8 times more likely to be targeted by a hateful act, reports a 2021 study by the CPRLV, which notes that attacks against members of the LGBTQ+ community “would notably have significantly higher levels of violence than other types of hate crimes.”

A few steps forward

If a comment survives all these steps, it is possible to send a request to the Oversight Board, a panel made up of 22 independent specialists.

Although the Board was founded in 2020, it is only since April 2021 that Internet users have been able to ask it to remove content, and not just restore it. The chances of having a case heard are slim, however, since the Board itself selects the requests it wishes to examine. In two years, 21 decisions have been rendered.

Meta announced in August 2020 that future community standards enforcement reports would be audited by an independent firm, because “no company should grade its own homework, and the credibility of our systems should be earned, not assumed”. The last report of 2021, once published, should be the very first to be audited.

If you are a member of the LGBT+ community and are experiencing distress, you can contact Interligne by phone or text at 1888-505-1010, or by email at [email protected].

This hotline is anonymous, free and available 24 hours a day.
