Meta’s platforms showed hundreds of “nudify” deepfake ads, CBS News investigation finds


Meta has taken action against a series of advertisements promoting “nudify” apps—AI tools capable of creating sexually explicit deepfakes using real photographs—after a CBS News investigation revealed their widespread presence on the company’s platforms.

In a statement to CBS News, a Meta spokesperson said, “We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps.”

The investigation uncovered numerous ads on Instagram, particularly within its “Stories” feature, where the technology was marketed with statements like “upload a photo” and “see anyone naked.” Some ads even invited users to upload videos to manipulate real people’s appearances. Notably, one promotional ad featured the text, “how is this filter even allowed?” beneath an example of a nude deepfake.

Among the concerning promotions were deepfake images of actors Scarlett Johansson and Anne Hathaway, presented in overtly sexualized contexts. Some advertisements directed users to websites that claimed to animate images of real people performing sexual acts, with some applications charging between $20 and $80 for exclusive features. Other links led users to Apple's App Store, where similar apps were available for download.

An analysis of Meta’s ad library revealed hundreds of such ads across the company’s platforms, including Facebook, Instagram, and Threads, as well as the Meta Audience Network, which allows advertisers to reach users on partnered mobile apps and websites. Data indicated these ads primarily targeted men aged 18 to 65 in regions including the United States, the European Union, and the United Kingdom.

The company has acknowledged the challenge of controlling AI-generated content. “The people behind these exploitative apps constantly evolve their tactics to evade detection, so we’re continuously working to strengthen our enforcement,” the Meta spokesperson explained. Despite the removals, CBS News noted that some “nudify” ads remained accessible on Instagram, highlighting the ongoing problem.

Deepfakes are digitally altered images, audio, or video that misrepresent individuals by making them appear to say or do things they did not. Recently, President Trump signed the bipartisan “Take It Down Act,” designed to compel websites and social media platforms to take down deepfake content within 48 hours of notification from a victim. While the law prohibits the publication of intimate content without consent, it does not address the tools used to create such material.

Both Apple and Meta have outlined policies banning adult nudity and sexual activity in advertisements. Meta’s guidelines state that ads must not feature “nudity, depictions of people in explicit or sexually suggestive positions,” and it also prohibits “derogatory sexualized photoshop or drawings.” Apple has similar restrictions against offensive content within its app store.

Alexios Mantzarlis, who leads the Security, Trust, and Safety Initiative at Cornell University’s tech research center, has tracked the rise of AI deepfake advertisements on social media platforms over the past year. He expressed frustration with Meta’s apparent lack of urgency in addressing the issue. “I do think that trust and safety teams at these companies care. I don’t think, frankly, that they care at the very top of the company in Meta’s case,” he stated. Mantzarlis pointed out that while platforms like Telegram and X may lack adequate regulation, Meta appears to have the resources to combat these challenges more effectively.

Mantzarlis also noted that “nudify” deepfake generators are readily available on both Apple’s App Store and Google Play, underscoring the difficulty in enforcing content restrictions. “The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification,” he said. He called for a coordinated industry effort to address apps that advertise nudification tools online.

The findings have heightened concerns about consent and online safety, particularly for minors. CBS News highlighted one “nudify” website promoted on Instagram that had no age verification process, meaning anyone, including minors, could use it. Similar concerns surfaced in a December segment of 60 Minutes, which found that even popular AI-driven websites failed to implement effective age checks, allowing underage users to upload photos.

The findings point to a troubling trend: deepfake imagery has already reached a significant share of teenagers. A 2025 study by the children’s protection nonprofit Thorn found that 41% of teens surveyed were familiar with the term “deepfake nudes,” and 10% said they personally knew someone affected by such imagery.

Xavier Banks
Xavier reports on startups, markets, and the tech economy. A fintech expert, he breaks down innovation and trends with clarity and analytical depth for all readers.
