(The Hague) Images of child sexual abuse generated using artificial intelligence are on the rise, making it increasingly difficult to identify victims and perpetrators, the European police agency Europol warned on Monday.
“The volume of self-made sexual material now constitutes a significant and growing share of online child sexual abuse material,” and is “set to proliferate further in the near future,” the Hague-based agency said in a report.
Even in cases where the content is entirely artificial and no real victims are depicted, AI-generated child sexual abuse material still contributes to the objectification and sexualisation of children, Europol stressed.
The use of AI to generate or modify child sexual abuse material also increases “the amount of illicit material in circulation and complicates the identification of victims as well as perpetrators.”
The advent of AI has raised growing concerns about its use for malicious purposes, particularly through the creation of “deepfakes”: computer-generated images and videos, often highly realistic, based on a real person.
More than 300 million children are victims of online sexual exploitation and abuse every year, researchers at the University of Edinburgh said in May.
The crimes range from “sextortion”, in which teenagers are blackmailed after intimate photos are posted online, to the abuse of AI technology to create fake videos and images, the researchers said.
In addition to creating explicit images of minors, criminals are using AI to commit a range of crimes, from online fraud to cyberattacks, according to the 37-page Europol document, which maps the online threats currently facing Europe.