Meta’s Oversight Board Asks It to Clarify Its Deepfake Policies

Meta will have to clarify its policies regarding nonconsensual deepfake images, the company’s Oversight Board concluded Thursday, in a ruling on cases involving explicit depictions of two famous women generated by artificial intelligence (AI).

Meta’s quasi-independent board said that in one case, the company failed to remove an intimate image of a famous Indian woman, whom it did not identify, until the board itself got involved in the matter.

AI-manipulated nude images of women, including celebrities such as Taylor Swift, have proliferated on social media as the technology used to create them has become more accessible. Online platforms are now under pressure to address the problem.

Meta created the Oversight Board in 2020 to act as an arbiter of content on its platforms, including Facebook and Instagram. The board spent months reviewing the two cases involving AI-generated images of famous women, one Indian and one American. The board did not identify either woman, describing them only as “female public figures.”

Meta said it welcomed the board’s recommendations and was reviewing them.

One of the two cases involved an “AI-manipulated image” posted on Instagram depicting a nude Indian woman, shown from behind with her face visible, who resembled a “female public figure.”

The board said a user had reported the image as pornographic, but the report was not reviewed within 48 hours, so the case was automatically closed. The user then appealed to Meta, but that appeal was also automatically closed.

Only after the user appealed to the Oversight Board did Meta determine that its initial decision not to remove the image was a mistake. Meta also deactivated the account that had posted the images and added them to a database used to automatically detect and remove images that violate its policies.

In the second case, an AI-generated image of a nude American woman being groped was posted to a Facebook group. The image was automatically removed because it was already in that database. A user appealed the removal to the Oversight Board, which upheld Meta’s initial decision.

The board said both images violated Meta’s ban on “sexualized and degrading retouched images” under its bullying and harassment policy.

The board added, however, that the policy’s wording was unclear to users, and it recommended replacing the word “degrading” with a different term such as “nonconsensual.” It also recommended that Meta make clear that the policy covers a broad range of media editing and manipulation techniques beyond photo retouching.

Deepfake nude images should also fall under Meta’s standards on “sexual exploitation of adults” rather than “bullying and harassment,” the board recommended.

When the board asked Meta why the image of the Indian woman was not already in its database of prohibited images, it was alarmed by the company’s response that it relies on media reports to identify such images.

“This is worrying because many victims of fake intimate images are not in the public eye and are forced either to accept the spread of their nonconsensual depictions or to search for and report each instance themselves,” the board stressed.

The board also expressed concern about Meta’s automatic closure of reports of image-based sexual abuse after 48 hours, saying this “could have a significant impact on human rights.”

Meta, then known as Facebook, created the Oversight Board in 2020 in response to criticism that it was not moving quickly enough to remove misinformation, hate speech and interference campaigns from its platforms. The 21-member board includes legal scholars, human rights experts and journalists from around the world.

