
OpenAI is encouraging people to use ChatGPT for therapy. That’s dangerous.

Just because a chatbot can recite language that appears therapeutic doesn’t mean it’s empathetic.
Photo illustration of a couch with an 8-bit-style chat bubble that reads "how are you feeling today?" (Chelsea Stahl / MSNBC)

OpenAI, the company that created ChatGPT, recently announced that in the coming weeks it plans to roll out a voice recognition feature for its chatbot, which will make its artificial intelligence technology appear even more humanlike than before. Now the company appears to be encouraging users to think of this as an opportunity to use ChatGPT as a tool for therapy.

Lilian Weng, head of safety systems at OpenAI, posted on X, formerly known as Twitter, on Tuesday that she had held a “quite emotional, personal conversation” with ChatGPT in voice mode about “stress, work-life balance,” during which she “felt heard & warm.” 

“Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool,” she said.

OpenAI president and co-founder Greg Brockman appeared to co-sign the sentiment — he reposted Weng’s statement on X and added, “ChatGPT voice mode is a qualitative new experience.”

This is a disconcerting development. That the company’s head of safety and its president are encouraging the public to think of a chatbot as a way to get therapy is surprising and deeply reckless. OpenAI profits from exaggerating and misleading the public about what its technology can and can’t do — and that messaging could come at the expense of public health.

Weng’s language anthropomorphized ChatGPT by talking about feeling “heard” and “warm,” implying the AI has an ability to listen and understand emotions. In reality, ChatGPT’s humanlike language emerges from its ultra-sophisticated replication of language patterns, drawing on behemoth databases of text. This capability is robust enough to help ChatGPT users conduct certain kinds of research, brainstorm ideas and write essays in a manner that resembles a human. But that doesn’t mean it’s capable of performing many of the cognitive tasks of a human. Crucially, it cannot empathize with or understand the inner life of a user; it can at best only mimic how one might do so in response to specific prompts.

Seeking therapy from a chatbot is categorically different from prompting it to answer a question about a book. Many people who would turn to a chatbot for therapy — rather than a loved one, therapist or other kind of trained mental health professional — are likely to be in a mentally vulnerable state. And if they don’t have a clear understanding of the technology they’re dealing with, they could be at risk of misunderstanding the nature of the guidance they’re getting — and could suffer more because of it.

It’s irresponsible to prescribe ChatGPT as a way to get therapy when these still nascent large language models have the capacity to persuade people toward harm, as many AI scientists and ethicists have pointed out. For example, a Belgian man reportedly died by suicide after talking to a chatbot, and his widow says the chat logs show the chatbot claiming to have a special emotional bond with the man — and encouraging him to take his own life.

There are also questions about the harm that some users could experience even if they’re not at risk of suicidal ideation. Some mental health professionals have acknowledged that ChatGPT could be useful in a limited sense for people dealing with certain kinds of mental health challenges, in part because some styles of therapy, such as cognitive behavioral therapy, are highly structured. ChatGPT has also been known to rebuff requests for diagnosis and recommend professional care. But we also know that chatbots like ChatGPT regularly “hallucinate,” confidently stating false claims — and this has obvious implications for the value and risks inherent in any advice they dispense. ChatGPT could deviate randomly and unpredictably from therapeutic treatment norms in the feedback it offers, and the user would have no idea. A chatbot can also falsely claim to know things about the nature of the world that it doesn’t.

If users are not aware of the technology’s shortcomings when they’re using it, they’re at risk of being manipulated in harmful ways. We already know that even the most rudimentary chatbots — such as the ELIZA program created in the 1960s by Joseph Weizenbaum — have easily duped people into thinking that there’s an understanding human behind them. With something as sophisticated as ChatGPT, the need to clarify what it is not capable of is particularly urgent.

All this is to say nothing of the reality that getting therapy from a chatbot is necessarily going to be shallow when compared with therapeutic interventions by humans. Chatbots don’t know what humans are; they don’t have bodies, emotional intelligence or wisdom; they cannot assess moral dilemmas. Encouraging people to use chatbots to get therapy also presents an opportunity cost, because it potentially reroutes people away from therapy with humans who can offer sustained, nuanced feedback based on an actual intellectual and emotional connection.

Sadly, the reality is that many people may use ChatGPT and other chatbots anyway, if only because it’s criminally difficult to access mental health care in this country. There were already reports of users turning to ChatGPT for therapeutic purposes long before the advent of its voice recognition feature, and an AI option will generally be more appealing to people with less time and money. Insofar as many of those who experiment with chatbots for therapy will have limited resources for proper ongoing care, the very least that companies like OpenAI can do is to prominently advertise the huge limitations and potential dangers of the tech to users.

Instead, we’re dealing with the opposite. Weng admits to “never” trying therapy but has determined that her chat with ChatGPT about work is “probably it.” It’s a fitting articulation of intellectual hubris by a company whose head has confessed to holding “an absolutely delusional level of self-confidence.”

A number of people who work in AI technology keep mistaking AI for a human proxy rather than recognizing it as a new tool with distinct assets and flaws. The cost of overestimating AI and using it to make false promises about therapy is that a lot of people could have their time wasted — or even be harmed — in the process.