
What we keep getting wrong about deepfakes like the fake Taylor Swift nudes

“Image-based sexual abuse” predominantly targets women and girls — and it appears to be exponentially increasing.

The hashtag #ProtectTaylorSwift started trending on X on Thursday after AI-generated nude images of the star flooded the internet. First uploaded to Telegram, the images were quickly reposted across social media, viewed millions of times and, on some platforms, have yet to be deleted. In response, X appeared to disable searches for Swift’s name days after the images surfaced, but after the platform cut one-third of its content moderators over the last two years, that effort is too little, too late. The block is easy for users to get around, and it is unclear whether it would be replicated for non-celebrities. And while it is an important option for victims, making people unsearchable for indeterminate periods of time without their consent could contribute to the broader silencing effect that online harassment already has on them.

The prospect that anyone can be “virtually stripped” in seconds is frightening — but there is a lot we can do to prevent it. We can start by addressing the truth that lies beneath the technology: We don’t care enough about people’s — primarily women’s — consent.

There’s a name for creating and distributing fake sexual images of people without their consent: “image-based sexual abuse.” It predominantly targets women and girls, and according to independent researcher Genevieve Oh, it’s increasing exponentially. Last year, the number of fake nudes circulating online increased tenfold and became even more realistic, thanks to new artificial intelligence models trained on images of women scraped from the internet without their consent. When the fake sexually explicit images of Swift began circulating online, it was no surprise to activists who have been sounding the alarm about so-called “deepfake porn” since it went viral on Reddit in 2017.

The first wave of deepfake porn mostly targeted high-profile women, but today Swift’s teenage fans are as likely as she is to be deepfaked. Schools across the United States are grappling with the rise of AI nudes created by children, who are themselves also at risk of grooming and sextortion by adults via AI-generated images.

As well as causing serious harm, AI-generated nudes are tricky to regulate. Although a federal bill has been introduced and 16 states offer some legal protections, there is not yet a federal law banning AI-generated nudes. This leaves victims reliant on a patchwork of civil and criminal recourse through “revenge porn,” defamation, copyright or child pornography laws. With the worst offenders masking their IP addresses or hiding on the dark web, internationally coordinated laws are needed to curb the rise of sextortion scams, grooming and AI-generated imagery of child sexual abuse.

Our solutions to deepfakes often focus on distinguishing “real” from “fake,” but for most victims that distinction doesn’t help; either way, many report feeling as if they have been sexually assaulted. Depression, sexual trauma and post-traumatic stress disorder are common among both child and adult victims, who struggle to feel psychologically safe because their images can never be fully deleted. Some victims have lost their jobs, moved to the other side of the country and even changed their names, all while searching websites every day to see if the content has been reposted. The real harm here is the creation of sexual imagery without consent. Therefore, our solutions must go beyond distinguishing real from fake and instead distinguish consent from nonconsent.

According to historian Jessica Lake, fake nudes date back to the early days of photography. Fast-forward 150 years and the technology has certainly changed, but people’s motivations haven’t. They create fake nudes to make money via scams like sextortion, to punish, control or manipulate others, to indulge in sexual fantasies, to bond with others, to fit in or simply because they think it’s funny and entertaining. What connects most of these motives is the view that people’s bodies can be used without their consent.

It’s impossible to scrub all nonconsensually created sexual content from the internet, but we can halt its spread by holding accountable the big tech companies that host, own or direct traffic toward the tools that are flooding the internet with deepfake porn. At the very least, Google and other search engines should proactively delist the video generators being used to create this content. Websites that host any sexual imagery should also run meaningful age and consent verification. As well as removing images and cracking down on forums where offenders gather, social media platforms should use preventative prompts to shape user behavior and proactively moderate repeat offenders. 

My research indicates that young people are passionate about protecting one another’s privacy, and we saw this play out when Swift’s fans jumped to her defense on X. We need to follow their lead by centering privacy, consent and equity in the regulation of AI nudes, especially for victims who don’t have an army of fans rushing to their defense. Image-based sexual abuse disproportionately targets minoritized women and femmes, a pattern that is likely to persist with AI nudes. These are the lived-experience experts, and solutions must center their bodily autonomy.

Ultimately, if we focus on centering consent, we can improve AI tools in ways that protect both privacy and sexual expression.