The rise of artificial intelligence has unlocked a new frontier in fan culture: the creation and spread of deepfakes featuring celebrities and influencers. While many stars have publicly rejected this exploitation of their likenesses, the practice continues to thrive within online fandoms, driven by attention economies and a willingness to push boundaries.
The Incentives Behind AI Content
The core issue is simple: AI-generated content, including images and videos, is easy to produce and highly engaging. On platforms like X (formerly Twitter), where verified users can earn money through engagement, deepfakes have become a quick way to farm views and revenue. One fan account owner, speaking anonymously to The Verge, admitted that AI content is a “very quick way to get money,” despite widespread condemnation.
The situation isn’t limited to harmless edits. Celebrities like Ariana Grande and Grimes have publicly criticized the use of AI to create fake covers, deepfakes, and even sexually explicit content. Grimes, who initially encouraged fans to experiment with AI-generated music based on her voice, now calls for “international treaties” to regulate deepfakes due to the unsettling reality of having her likeness co-opted.
The Rise of Cameos and Viral Outrage
The launch of OpenAI’s Sora video generator, with its “Cameos” feature, has dramatically escalated the problem. Cameos allow anyone to offer their likeness for use in AI-generated content, leading to predictably offensive results that are nearly impossible to remove once online.
Influencer and boxer Jake Paul, an OpenAI investor, embraced the trend, with AI videos of him going viral—including portrayals relying on homophobic stereotypes. While some creators like Paul attempt to capitalize on the outrage, others scramble to distance themselves from the backlash.
Compounding the problem, even when platforms like X take down individual deepfakes, they reappear elsewhere almost immediately. The speed of proliferation makes effective moderation nearly impossible.
Celebrity Reactions and Boundary Violations
Celebrities are caught in a double bind: they condemn deepfakes while simultaneously being prominently featured in them. Criminal Minds star Paget Brewster was recently tricked into apologizing to a fan account for assuming an image of her was AI-generated, only to discover that it had been AI all along. The incident highlights the growing anxiety among celebrities who fear having their likeness exploited without consent, and how difficult it has become to tell real images from synthetic ones.
The issue extends beyond simple edits. AI is increasingly used to generate sexually explicit deepfakes, with some creators even monetizing this nonconsensual content. The situation prompted X to temporarily disable searches for “Taylor Swift” after a wave of disturbing deepfakes went viral, but the images continued to spread on other platforms.
The Power Dynamic at Play
The normalization of deepfakes has shifted from fringe forums to mainstream platforms, and the underlying incentive remains the same: engagement. As one fan put it, "They almost want to do it more, because it's causing people to be upset." Outrage, in other words, is not a side effect but part of the business model.
The issue is exacerbated by the lack of effective legal recourse. Victims often face an uphill battle in taking down deepfakes, and some critics argue that legislation like the Take It Down Act may facilitate censorship without genuinely protecting individuals.
Ultimately, the exploitation of celebrity likenesses in AI deepfakes reflects a broader power dynamic where consent is often disregarded in the pursuit of engagement. Despite the pushback from celebrities and the growing awareness of the harm caused, the trend shows no signs of slowing down.
