A new class of uncensored chatbots could unleash a new era of misinformation

Alternatives to mainstream chatbots aren't a bad thing per se. But they can be weaponized easily.

A recent New York Times report has fascinating details on how we’re seeing an explosion in uncensored and open-source chatbots across the internet. Many of the creators of this new class of chatbots see them as a win for free speech. And there is something to those arguments. But there are also signs of something worrisome: an increasingly well-paved path for agents of mass misinformation.

As the Times report explains, chatbots with names like GPT4All and FreedomGPT are springing up as part of a new movement toward chatbots without the guardrails that restrict ChatGPT’s ability to discuss issues deemed abusive, dangerous or overtly political. Many of these bots are reportedly built by independent programmers who tweak existing large language models to alter how they respond to prompts, and at least one of them is based on a large language model that leaked from Meta. It’s unclear exactly how sophisticated they are, but according to the Times, some of these chatbots don’t trail far behind ChatGPT, the premier chatbot at the moment, in quality.

The emergence of alternatives to the mainstream chatbots could help guard against the dangers of chatbot monopolies. At the same time, some experts say these are exactly the kinds of tools that could bolster misinformation operations, and could even categorically expand the supply of misinformation ahead of the 2024 elections.

The major chatbots from Google and OpenAI have used a variety of techniques to avoid or limit their programs’ capacity to use offensive language like racial slurs or profanity. They’re trained to deflect prompts that ask how to harm people or do things like build a bomb. And they’re designed to avoid taking explicit political positions on some issues. 

Now, more advanced users have figured out ways to trick these chatbots into saying things they’re not supposed to, and it’s naive to think that political values don’t shape the range of responses these chatbots give users. (The language chatbots are and aren’t allowed to utter is itself political, and these companies have not revealed exactly what information the chatbots were trained on, which also shapes their responses.) But these guardrails still reflect an aspiration for a product with mass appeal across age groups, one that adheres to strict limits on abuse and minimizes liability and controversy for the companies behind it.

This new wave of uncensored bots scraps all those guardrails. There isn’t one unifying set of principles motivating the programmers creating these models, but many are inspired by the idea of completely unrestricted speech and user choice. That means the chatbots have few limits, if any, on the responses they’ll provide to prompts, and that users can further train them on their own specific information.

As somebody who has watched a handful of search and social media behemoths develop the awesome capacity to reshape our public consciousness, censor information and alter the contours of political life, I’m reflexively sympathetic to this agenda, especially because I’m deeply uncomfortable with the opacity of how the mainstream large language models work (as are many artificial intelligence scholars and programmers). That said, there are also clear trade-offs to opening up chatbots, ones that go beyond their ability to amuse bigots with epithets.

The Times referenced a blog post from Eric Hartford, a developer behind WizardLM-Uncensored, one of the unmoderated chatbots. One part of the post in particular does a nice job summing up some of these trade-offs:

American popular culture isn’t the only culture. There are other countries, and there are factions within each country. Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model. Every demographic and interest group deserves their model. Open source is about letting people choose.

On one hand, Hartford raises a valid point: It’s difficult to defend the expectation that a few corporate-backed chatbots will be optimal for every community on Earth. Chatbots are designed to help people search for information, draft text for all kinds of communication, write code and handle other tasks. It makes sense that an abundance of models, built with a diverse set of goals, could serve a wider range of people. And keep in mind that chatbots are not yet reliable at conveying accurate information (they often fabricate it), so arguing for an abundance of chatbots doesn’t necessarily mean denying that objective reality exists.

On the other hand, the absence of guardrails on these chatbots, and the ease of customizing them, also pose a challenge to democracy. I’m not sure what different chatbots for Democrats and Republicans would look like, but it’s clear that any widespread adoption of partisan chatbots would exacerbate the polarization crisis in our information environment.

AI experts are also warning that the new chatbots could be potent weapons in the coming information wars. Gary Marcus, an emeritus professor of psychology and neural science at New York University, is one of the most prominent voices sounding the alarm over the emerging misinformation threat. He says the ease with which the uncensored chatbots can be retrained to tell a specific story makes them particularly well-suited for misinformation and disinformation campaigns: Bot factories can be trained to produce endless sophisticated variations on a theme, and their capacity to simulate human speech makes them hard to identify. He told me that soon “we might see billions of pieces of misinformation where we saw thousands a day.” He said he was concerned the 2024 election was going to be a “s---show” because of how unprepared America is for the ways these operations could hijack social media discourse.

Lastly, "uncensored" chatbots don't solve the problem of political bias or social values. These programs will, at a minimum, always reflect the biases in the data they're trained on. Strip away the guardrails, and you simply get a different, often much uglier, set of biases. Once again, greater transparency about how these language models work is critical.

The capacity of bad actors to exploit uncensored and open-source chatbots isn't a case against their existence. But it is a reminder of the trade-offs they present, and that there are no easy answers when it comes to seeking greater freedom in how we access and use information.