
The GOP’s latest attack on Biden isn’t scary. How it was made is.

The RNC's use of AI-generated imagery marks an unsettling first step toward a new world of manipulation.
President Joe Biden during a ceremony in the Rose Garden of the White House on March 29, 2022. Samuel Corum / Bloomberg via Getty Images

Almost immediately after President Joe Biden announced Tuesday that he was running for re-election, the Republican National Committee launched an attack ad warning about the perils of a second Biden term. The speed of the response was not surprising. But there was something unusual and unsettling about the ad itself — specifically, the artificial intelligence behind it.

The RNC called the spot an “AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024” and touted it as the first ad completely based on AI imagery. That likely explains, in part, why the spot is so shoddy as a piece of messaging. But the clumsy execution masks something darker. This technology will become more sophisticated over time — and with that, the capacity to manipulate, confuse and misinform the public will grow too. 


The ad features a set of “what if” scenarios based on Republican fears of a second Biden term. Sirens blare as speculative images show China invading Taiwan: bombings, warships, angry nationalists marching in the streets. Then the ad asks, “What if financial systems crumble?” and cuts to images of boarded-up businesses and long lines of people. It goes on to show hypothetical images of “80,000 illegals” flooding the U.S. and San Francisco overrun by criminals and gangs, with troops deployed in the streets in response. In the final image, an AI-generated Biden slumps over his Oval Office desk as a computer-generated-sounding voice intones, “It feels like the train is coming off the tracks.”

On one hand, this ad is classic GOP fearmongering about Democratic handling of the economy, immigration and foreign policy. But as my colleague Steve Benen has pointed out, the commercial inadvertently concedes Biden’s strength: it features no attacks on his actual and, at this point, substantial policy record. Instead, the RNC speculates about what might happen, even though none of it has come close to happening in more than two years of his presidency.

Part of the reason the RNC relied on this weak message is probably that it designed the ad around AI-generated images, in a bid to do something groundbreaking and zeitgeisty. GOP staffers likely fed prompts about right-wing nightmare scenarios into one of the growing number of sophisticated image generators and then tried, and failed, to find a compelling message to fit those images. In a reflection of the current stage of AI development, the images are technically impressive but neither evocative nor memorable enough to land with viewers.

While this video was a failure, future attempts may not be. The pictures themselves are high quality: every image is clearly intelligible, and AI Biden looks like real-life Biden. As these image generators become more powerful, one can imagine future campaign ads using generated images and videos that are indistinguishable from real photos and footage. That, in turn, raises the question of what regulations, if any, will govern political advertising that dupes people into thinking imagined scenarios are real, or that a politician committed an act they never did.

Below its ad on YouTube, the RNC disclosed that the video is AI-generated. But people won’t always read the fine print, and sometimes there won’t be any. And while the RNC is relatively easy to hold accountable, outside groups won’t be.

Given huge variations in internet literacy, it’s plausible that significant swaths of the public are vulnerable to being manipulated in ways they’re not even aware of. Even more worrying, all of these images can be produced at astonishing speed and minimal cost. AI experts like Gary Marcus, an emeritus professor of psychology and neural science at New York University, have warned that the pace and volume of disinformation production could easily dwarf what we saw during, say, Russia’s meddling in the 2016 election via social media trolls. Disinformation operatives can now put out far more misleading material with a fraction of the labor and time it took just a year ago.

As Axios notes, the 2024 election is “poised to be the first election with ads full of images generated by modern Artificial Intelligence software that are meant to look and feel real to voters.” The GOP’s first foray isn’t too impressive. But it’s early days. A new door has been opened to disguising fiction as reality and portraying terrifying futures — a potent weapon in the hands of a reactionary political movement.