[Column by Pierre Trudel] Twitter without limits?

The acquisition of Twitter by Elon Musk highlights the imperatives associated with moderation on social networks. The Network Contagion Research Institute estimated that following Elon Musk’s takeover of Twitter, use of the n-word on the platform increased by 500%. It is as if hordes of racists wanted to “test” the new limits. It is also a reminder of how urgent it is to give ourselves the means to distinguish racist uses of words from legitimate ones.

Mr. Musk’s decision to drastically reduce the staff involved in moderating comments on the platform shows dramatically how the mechanisms for separating lawful comments from those that violate the law are too often the great neglected element of these business models.

A social network like Twitter is, a priori, configured to accommodate comments and images from all sources. But it pays attention to risk: the platform sets aside content whose risks it deems too great a threat to its reputation and its profits. On social networks, it is the markets that, de facto, set the limits of what is tolerated. A social network that accepts content notoriously hated by a significant portion of the public is not a good vehicle for investing capital or placing advertising.

But when the company is in the hands of a billionaire who loses no sleep at the thought of losing some revenue, there are fewer guardrails. The market is no longer able to induce a measure of moderation. Yet the existence of a discussion space worth frequenting depends on the ability to distinguish lawful remarks from those that are not. This is why laws must be updated in order to govern the line between what is acceptable and what is not.

However, the debate on how to reinforce this capacity to distinguish the licit from the illicit is ultra-polarized. On one side are those who, in the name of a libertarian vision, brandish their right to utter racist, misogynistic or homophobic remarks and images, and much worse. On the other side are those who see hatred everywhere! For some, the slightest remark critical of religions, sexual identities or their political positions is classified as hate speech. Hysteria rages on both sides. We must move beyond this to design rules consistent with the principles recalled last Saturday to the new master of Twitter by the United Nations High Commissioner for Human Rights, Volker Türk. He notes that fundamental rights are the main victims of the weakening of moderation capacities.

When we take it for granted that disturbing words or images can be censored at the slightest discomfort, we disregard freedom of expression. Similarly, when we brandish an absolutist vision of this freedom to claim the right to broadcast comments that violate laws such as those prohibiting racism or harassment, we ridicule it just as much. Both postures ignore the fact that, in a democratic society, all rights and freedoms are subject to reasonable and justifiable limits.

What laws?

But how should discussion spaces on the Internet be regulated? To answer this question, the Public Policy Forum has held a citizens’ assembly over the past few months. Informed by expert opinions, the assembly identified the main measures to be put in place so that the space of social networks becomes breathable again for all.

Thus, the report recommends that laws require online platforms to identify and manage the risks associated with the activities they enable. It also recommends establishing an independent body to develop and apply standards of good practice, as well as a digital ombudsman to receive complaints and report periodically on the concerns of Canadians. These measures should take diverse cultural contexts into account. Other measures are also recommended, such as imposing limits on the massive use of bots and investing in digital literacy.

Implementing these recommendations assumes that states acquire the resources to understand and regulate the algorithms and other processes involved in the operation of online platforms.

The effectiveness of the laws that protect against harassment, racism and other scourges is undermined by the obsolescence of public authorities’ methods of intervention. Too often, state laws compel intervention in a segmented way, while situations on the networks ignore administrative divisions and other bureaucratic barriers. Within the same country, ministries find it difficult to cooperate. Yet the effectiveness of the laws must not be hampered by the fussy boundaries (between ministries, agencies, jurisdictions) inherited from the sectoral visions of the last century. States must become as efficient as Web companies!

It is right to be alarmed at the abuses that can result from the control of a social network by a libertarian billionaire. But this alarm must lead to action to sustainably clean up virtual spaces.
