Child abuse images deleted from AI database

Artificial intelligence researchers announced Friday that they have removed more than 2,000 web links to images of suspected child sexual abuse from a database used to train popular AI image-generation tools.
The LAION research database is a massive online index of images and captions that has served as a source for major AI image generators such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found that it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic, deep-fake images depicting children.

That December report prompted LAION, which stands for Large-scale Artificial Intelligence Open Network, a nonprofit, to immediately delete its dataset.

Eight months later, LAION said in a blog post that it had worked with Stanford University’s watchdog group and anti-abuse organizations in Canada and the United Kingdom to resolve the issue and publish a cleaned-up database for future AI research.

Stanford researcher David Thiel, author of the December report, praised LAION for its significant improvements. He argued that the next step is to remove from distribution the “corrupt models” that are still capable of producing child abuse images.

One of the LAION-based tools that Stanford identified as the “most popular model for generating explicit images” — an older, lightly filtered version of Stable Diffusion — remained easily accessible until Thursday, when New York-based Runway ML removed it from the Hugging Face AI model repository. Runway said in a statement Friday that this was a “planned deprecation of research models and code that were not actively maintained.”

The cleaned-up version of the LAION database is being released as governments around the world are taking a closer look at how certain technology tools are being used to create or distribute illegal images of children.

The San Francisco city attorney filed a lawsuit in early August seeking to shut down a group of websites that allow the creation of AI-generated photos of naked women and girls.

The alleged distribution of child sexual abuse images on the messaging app Telegram is among the grounds that led French authorities on Wednesday to file a complaint against the platform’s founder and CEO, Pavel Durov.

