(Ottawa) A new federal report says cyberthreat activity targeting elections is on the rise around the world — and is now more likely to be seen during Canada’s upcoming federal vote.
The Canadian Centre for Cyber Security’s report reveals that in 2022, just over a quarter of all national elections around the world were subject to at least one reported “cyber incident.”
The centre, part of the Communications Security Establishment, has found that state cyberthreat actors with ties to Russia and China have been behind most cyberthreat activity targeting foreign elections since 2021.
This activity includes attempted distributed denial-of-service attacks against the websites of electoral bodies, attempts to access voters’ personal information or electoral data, and the probing of electronic voting systems for vulnerabilities.
The centre warns, however, that cyberthreat actors are getting better at covering their tracks and that most cyberthreat activity targeting elections remains unattributed.
The report also highlights the emerging phenomenon of generative artificial intelligence (AI), which can produce various types of synthetic content — text, images, audio and video — commonly known as “deepfakes.”
“This synthetic content can be used as part of influence campaigns to covertly manipulate online information and, in turn, the opinions and behaviours of voters,” the report states.
“Despite the possible creative benefits, the ability of generative AI to pollute the information ecosystem with disinformation threatens democratic processes everywhere.”
In most cases, however, it is not clear who is behind disinformation generated with artificial intelligence, the report adds.
“However, we believe it is very likely that foreign adversaries or hacktivists will use generative AI to influence voting in Canada’s upcoming federal election.”
Cyberthreat actors are already using this technology to pursue strategic political objectives abroad, the report notes. For example, pro-Russian actors used generative AI to create a deepfake of Ukrainian President Volodymyr Zelensky “announcing” his country’s surrender after the Russian invasion.
“We assess that it is very likely that the capacity to generate deepfakes exceeds our ability to detect them,” the Canadian Centre for Cyber Security warns. “Current publicly available detection models have difficulty reliably distinguishing fake content from real content.”