While social networks are regularly accused of failing to adequately protect minors online, Meta on Thursday announced new measures on Instagram to protect young people from blackmail involving nude photos.
The American giant will roll out a “nudity protection” filter on Instagram, enabled by default on minors’ accounts, which will automatically detect and blur nude images received through the app’s messaging system.
“In this way, the recipient is not unwillingly exposed to intimate content and can choose whether or not to view the image,” Capucine Tuffier, who is in charge of child protection at Meta France, told AFP.
At the same time, awareness messages about sexual photo blackmail, also known as “sextortion”, will be sent to both the sender and the recipient of the images, reminding them that such sensitive content can be screenshotted and forwarded by malicious actors.
“It’s about reducing the creation and sharing of this type of image,” summarizes Ms. Tuffier.
These new measures will be tested starting in May in a handful of countries in Central and Latin America before being deployed globally in the coming months.
“It is rather a good thing that they provide, by default, the means to protect oneself from this type of aggression or harassment,” Olivier Ertzscheid, an information sciences researcher at the University of Nantes, told AFP.
According to him, “given the flow and volume of photographic content circulating on the platforms, we have no choice but to rely on automation, which involves AI”.
Meta specifies that when its artificial intelligence tools identify an account as a potential source of this type of blackmail, that account’s interactions with minor users will be severely restricted.
Human and AI moderation
A potentially “criminal” account will therefore be unable to send private messages to a minor’s account, will not have access to the minor’s full list of followers (minors’ accounts being hidden from it), and minors’ accounts will no longer appear to it in the search bar, Capucine Tuffier explains.
Meta will also warn young users if they have been in contact with a potential blackmailer. The minor will then be directed to a dedicated “Stop Sextortion” site and given access to a telephone crisis line run in partnership with support organizations.
“The question is: why haven’t they done this until now?” asks Olivier Ertzscheid, who believes this type of technology has already existed for several years. He also points to the risk of abuses and “false positives”, citing the example of photos of women in swimsuits that have tripped up these algorithms in the past.
“These false positives have an impact because they establish new forms of normativity,” he explains, as they push social network users to restrict some of their own uses.
In his view, the intervention of a human moderator can avoid some of these pitfalls, but “all these platforms today lack human moderation.”
Meta, which stands accused in the United States and France of harming adolescents’ mental health, had already announced a first set of measures in January to better protect young users.
Among them, minor users will now need explicit permission from their parents to switch their account from private to public, to access more so-called “sensitive” content, or to receive messages from people they are not already following on the platform.
The European Commission has opened separate investigations into Meta, Snapchat, TikTok, and YouTube over the measures they have implemented to protect the “physical and mental health” of minors.