AI-generated weapons of mass misinformation have arrived

It’s time to get serious about guarding against AI-generated misinformation.

Since the emergence of powerful artificial intelligence programs like ChatGPT, many people have focused on how far off AI was from achieving human-level “general intelligence” and how that might turn society upside down. But AI experts warned that a number of more urgent social threats were around the corner, and that none of them required that capacity. In particular, as New York University emeritus professor Gary Marcus told me this year, bad actors could use generative AI to create weapons of mass misinformation.

That frightening prediction is already coming true.

The latest data from media and misinformation watchdog NewsGuard identifies over 600 “unreliable AI-generated news and information sites” on the internet. According to The Washington Post, that’s an increase of over 1,000% since May. These sites look like conventional news and information outlets, but upon closer examination NewsGuard finds that they show signs of operating with “little to no human oversight and publish articles written largely or entirely by bots.” They manufacture articles by the hundreds on topics ranging from politics to entertainment to technology. Some of these stories have spread widely on social media and have even shaped news cycles.

There are also AI operations running natively on social media platforms. NewsGuard found a network of more than a dozen TikTok accounts that use AI text-to-speech software to spread political and health misinformation in videos that have collectively garnered over 300 million views. One video ludicrously claims that former President Barack Obama murdered one of his former White House chefs; it even deploys AI to conjure up a computer-generated “Obama” statement responding to the false story.

It’s long past time to start worrying about whether people on the internet are unknowingly reading and sharing fake news generated by bots. It’s unclear who owns the identified sites and accounts, and that anonymity demonstrates how political actors both at home and abroad can join the information wars at little to no cost. The worst part is that it will likely be a long time before people become appropriately vigilant about these operations.

One example that NewsGuard fleshes out at length is instructive. A site called Global Village Space generated an article in November claiming that Israeli Prime Minister Benjamin Netanyahu’s psychiatrist had died by suicide after failing to make progress in his treatment of Netanyahu. The story is not just false but absurd on its face; according to NewsGuard, it appears to be a rehashed version of a satirical article from 2010. (AI chatbots scrape information from the existing internet, and there are documented examples of their responses simply re-creating parts of existing news articles.)

But the story wasn’t presented as satire at Global Village Space, and it was picked up on social media in several languages and relayed as real news by an Iranian news channel. However preposterous the story might sound elsewhere, in Iran, an adversary of Israel, it naturally buttressed narratives that Netanyahu is mentally unstable. It’s a remarkable example of how AI can hatch conspiracy theories and prey on political polarization at extremely low cost. Unlike human-run misinformation operations, these sites cut out almost all labor costs and can pump out fake information at an extraordinary pace.

These sites appear to rely on advertising to make money, and one possible way to hinder them and disincentivize their creation is to find ways to get brands to stop funneling ad revenue to AI-generated websites. But while that might work against profit-oriented content farms, it won’t protect the public against purely political and state-backed misinformation operations.

One practical solution is to inform the public about the perils of AI-generated information and train people in internet literacy through news articles, education projects and school curricula. Unless people learn to become critical news consumers who understand how to vet their sources, they’ll be easy marks in this new era of information warfare. At the moment, unfortunately, it looks like a lot of people are going to be caught flat-footed.