Meta’s Oversight Board has called on the company to update its rules on sexually explicit deepfakes. The recommendation follows two cases involving AI-generated images of female public figures. Although the individuals were not named, the incidents prompted important discussions about how Meta handles such content.
One case involved an AI-generated nude image of an Indian public figure posted to Instagram. A user reported the image, but Meta automatically closed the report after 48 hours, and a subsequent appeal was closed the same way. The company removed the image only after the Oversight Board took up the case, reversing its earlier decision to leave it up.
The second case concerned an AI-generated image of a nude woman being groped by a man, shared in a Facebook group dedicated to AI art. Meta removed the post promptly because its internal systems matched the image against previously reported content. The Oversight Board upheld Meta’s removal decision in this instance.
Both cases highlighted the need for Meta to update its rules on sexually explicit deepfakes. The Oversight Board argued that the current language in the company’s policies may not adequately cover non-consensual explicit images created or manipulated with AI. It recommended clearer guidelines prohibiting such content and an improved reporting process for users.
The Oversight Board also criticized Meta’s practice of automatically closing user appeals, citing potential human rights implications. While the board acknowledged it lacked enough information to make a specific recommendation on the practice, it raised concerns about its impact on users.
The prevalence of explicit AI images, particularly in the form of “deepfake porn,” has raised significant concerns about online harassment. The US Senate recently passed a bill targeting explicit deepfakes, allowing victims to pursue legal action against creators of such content. This legislative development underscores the growing recognition of the serious consequences of deepfake technology.
This isn’t the first time the Oversight Board has urged Meta to update its policies on AI-generated content. A previous case involving a manipulated video of President Joe Biden led to Meta revising its approach to labeling AI-generated content. The board’s consistent pressure on Meta reflects the ongoing challenges posed by rapidly advancing technology in the digital age.