“I only believe what I see.” This phrase, attributed to Thomas, the disciple of Christ, seems more outdated than ever. The boundary between reality and falsehood has never been as blurred as it is since the emergence of artificial intelligence (AI), a technology that has enabled an exponential proliferation of hyperfakes (deepfakes).
“Deepfakes are manipulated images or videos that are false but appear authentic and credible,” as one author puts it. The majority of hyperfakes are sexual in nature: more than 90% of deepfakes online depict non-consensual intimate videos or images of women.
Today, this kind of content can easily be created from our smartphones. This text is therefore intended as a cry of alarm against this growing phenomenon and against the lack of adequate protection and legal recourse for women who are victims of sexually manipulated imagery.
The first time I heard about manipulated footage was in 2018, when a video circulated online showing President Barack Obama insulting his successor, Donald Trump, calling him an idiot. The realism and quality of the video made it credible.
That experience shows how easily this technology can fool us. If manipulated videos of political figures already call into question our democratic system and our ability to distinguish truth from falsehood, the use of deepfakes to create non-consensual sexual content poses a far greater threat to millions of women.
In recent years, many women have seen their privacy violated and deeply personal images of them distributed without their consent through deepfakes. This scourge reaches every sphere of our societies, even our secondary schools, where very young girls find themselves victims of this violation of privacy.
Just recently, around the Super Bowl, we all witnessed pornographic images of singer Taylor Swift, generated by artificial intelligence, flooding the internet. This case highlights that no woman, regardless of her social status, is safe from such attacks.
The consequences of this sexual hyperfaking are severe: it can drive women to social isolation, depression and, in some cases, suicide. American-Canadian feminist video blogger Anita Sarkeesian sums up these consequences: “Hyperfakes are used as a weapon to silence women, degrade them and exercise power over them, reducing us to sexual objects. It’s not just fun and games. It can destroy lives.”
But what can be done to stem this phenomenon? In Canada, and particularly in Quebec, we urgently need legislative and preventive measures to curb this scourge and to support survivors. That begins with raising awareness, particularly among young people: parents and schools must help them become informed digital citizens, fully aware of the dangers and real consequences of these technologies.
Second, Quebec should adopt legislation similar to British Columbia’s Intimate Images Protection Act. That law allows survivors to bring civil actions to have deepfake-generated intimate images quickly removed, an option the Criminal Code and privacy laws do not directly offer. It also allows perpetrators and web giants to be sued for damages.
British Columbia is the fourth province to adopt such a law, after Prince Edward Island, Saskatchewan and New Brunswick. Explicitly naming hyperfakes in the legislation marks an important step forward in protecting against the non-consensual distribution of intimate images and in tackling the challenges this technology poses.
Bill C-63 on online harms, currently being debated in the House of Commons, moves in the same direction at the federal level. If adopted, it would require web giants to provide clear and accessible means of reporting such content and of blocking the users who distribute it, in addition to guaranteeing its removal within 24 hours. There is still time to act.