(Ottawa) Because “profits should not take precedence over safety,” the federal government wants to tighten the screws on social media platforms and pornographic sites, in particular to protect children from online harm. Having barely tabled Bill C-63, Justice Minister Arif Virani already sees an avalanche of criticism on the horizon.
On Monday, Minister Virani finally tabled Bill C-63 on online harms, a bill that has been awaited for years and that promises to fuel debate in the House of Commons over the coming weeks and months.
“Right now, it’s too easy for social media companies to look the other way as hate and exploitation proliferate on their platforms. This bill will force them to do their part and protect people from harm and exploitation,” he told a press conference in Parliament.
“If they fail to do so, there will be a significant price to pay,” the minister said.
Under the legislation, sites will be required to “reduce exposure to harmful content” and will therefore have to quickly remove two categories of content: content depicting the “sexual victimization of children or perpetuating the victimization of survivors” and “intimate content communicated in a non-consensual manner”.
Reports could be made by users directly on the online service, or by filing a complaint with the Digital Security Commission that the bill would create. Companies would have a maximum of 24 hours to remove this type of content.
Pornographic sites like Pornhub, where this type of content abounds, would not be forced to verify the age of Internet users to prevent access by minors, as proposed in Senator Julie Miville-Dechêne’s Bill S-210.
Online services will, however, have to implement “special protections for children”. In its information document, the government cites the establishment of safe search settings for children and the automatic deactivation of certain features such as webcams.
Non-compliant platforms face hefty fines, which could reach the greater of 6% of their gross global revenue or $10 million. The amount would depend in particular on the nature and scope of the violation and the history of the offending site.
The regulatory process will clarify many aspects arising from this bill.
The package of measures in Bill C-63 also includes the creation of new entities, namely the Digital Security Commission, made up of five members appointed by the government, and a Digital Security Ombudsman.
Government officials were unable to specify the budget for setting up these new entities during the technical briefing held for media representatives on Monday.
Hate content: new offenses
The Minister of Justice also wants to create a new hate crime offense in the Criminal Code. It would apply to all offenses provided for in the Criminal Code, according to the government’s information kit.
He also proposes increasing the maximum penalties for the four hate propaganda offenses. The sentence for advocating genocide, for example, would increase from five years to life in prison, while the maximum for the other hate propaganda offenses would rise to five years from two.
Private and encrypted messaging services are excluded from Bill C-63.
“Money and influence”
A first version was tabled by the Liberals on June 23, 2021, on the last sitting day before the summer break, but it died on the order paper. Its scope was broader, and the Conservatives had warned of the risks of censorship.
“I want to clarify what the bill does not do: it does not infringe on freedom of expression. It strengthens freedom of expression by allowing all people to participate safely in online debates,” argued Minister Virani.
“We know that powerful organizations and individuals are likely to be quick to criticize the bill. People with money and influence. My message to these organizations and these people is very simple: it is time to work directly with us,” he continued.
New Democratic MP Peter Julian indicated that his party intended to present amendments to the government bill, particularly with regard to algorithms, which C-63 does not sufficiently address, according to him.
Harmful content falls into seven categories:
- content depicting the sexual victimization of children or perpetuating the victimization of survivors;
- content used to bully a child;
- content that encourages a child to harm themselves;
- content inciting violent extremism or terrorism;
- content inciting violence;
- content that promotes hatred;
- intimate content communicated in a non-consensual manner, including deepfakes.