BetterHelp, Wysa, and other AI therapy apps are booming — but so are the risks: emotional collapse, privacy abuse, and zero accountability.
With the growing accessibility of artificial intelligence, people are increasingly turning to large language models (LLMs) for emotional support and guidance—sometimes even for makeshift therapy.
While alarming, it’s perhaps unsurprising that people are using ChatGPT as an emotional crutch. Many individuals are finding it increasingly difficult to access mental health services due to soaring demand, insufficient resources, and long waiting lists. AI tools such as ChatGPT and Perplexity, on the other hand, are available 24/7, anonymous, and affordable—often free.
The US has recently conducted the first clinical trial of a therapy bot that uses generative AI, while the UK’s NHS has even begun offering mental health services through Wysa, an AI chatbot-based application, as it struggles to cope with the country’s mental health emergency.
ChatGPT, PhD
Amid the apparent willingness of individuals to embrace AI for therapy—research shows a fifth of Britons are already comfortable sharing mental health concerns with online chatbots—many are questioning whether these tools could become the psychotherapists of the future. Various scientific papers have lauded the power these tools have to “understand theories of the mind” and to “reshape the landscape of mental health services”, and users have written about successful trauma recovery with the assistance of an AI chatbot.
“ChatGPT responded to my whole question,” one individual writes on Reddit. “It didn’t just pick out one sentence and focus on that. I can’t even get a human therapist to do that. In a very scary way, I feel HEARD by ChatGPT.”

While it’s positive that people are walking away from these alternative therapy sessions feeling heard, analysed, or armed with a fresh perspective, there is real potential for things to turn dangerous.
A ‘dangerous trend’
In 2023, a Belgian man took his own life after extensive conversations with an AI chatbot called Eliza. Over a six-week discussion about the climate crisis, it’s believed the chatbot’s responses may have reinforced his suicidal ideation, agreeing with his despair and discussing the idea of him sacrificing himself to save the planet. Examples like this are why many believe the fast-paced shift to AI therapy is becoming a dangerous trend.
In ChatGPT’s own words, it “may miss subtle emotional cues or overgeneralize advice”. Research has also shown that for people with conditions like obsessive-compulsive disorder (OCD), tools like ChatGPT might unintentionally worsen symptoms by constantly providing validation instead of encouraging emotional resilience. Unlike a trained therapist, AI does not track long-term behavioral patterns or challenge harmful thought processes.
“The motto ‘move fast and break things’ doesn’t hold water when we are discussing broad-scale mental health and wellbeing,” Zachary Ginder, a doctor of psychology and co-founder of Pine Siskin Consulting, told TECH ICONS. “In brief, we are in a real-time experiment with the potential for devastating consequences if not handled appropriately.”
This is a viewpoint shared by Elizabeth McCoy, a therapist and founder of TheraAI Collective, who tells TECH ICONS that people are increasingly turning to machines for therapy “without safeguards, without nuance, and without accountability.”
“AI may offer insight, but it can’t offer intimacy,” she said. “During the pandemic, isolation alone triggered a global spike in anxiety and depression. Why are we now replicating that isolation through automation?”
Vulnerable users—and vulnerable platforms
These widespread concerns—TECH ICONS spoke to dozens of psychology experts who believe the current use of AI for therapy is unsafe—are primarily fueled by the lack of ethical and regulatory oversight.
The development of AI tools has outpaced the establishment of clear guidelines and standards. As a result, even dedicated AI therapy services such as Woebot are still classified by the US Food and Drug Administration as “general wellness” tools, meaning they are considered “low-risk” products that “promote a healthy lifestyle”.
However, in reality, that often isn’t the case. Ann Moris, a clinical and forensic psychologist, tells TECH ICONS that while your privacy is protected by strict laws if you speak to a licensed therapist, “with AI, you’re feeding your deepest fears and vulnerabilities into a system that may store, analyze, or even repurpose your data.”
McCoy added: “AI platforms are not bound by clinical ethics. Prompt data can be stored, reviewed, or monetized. And if harm happens, there’s no therapist-client agreement. No board. No legal accountability.”
In the UK, a government report released in January notes that while AI tools used by the NHS are subject to the Data Protection Act 2018, which is regulated by the ICO, tools such as ChatGPT are not bound by the same rules. The report also warns that weaknesses in these platforms could leave them “vulnerable to cyberattacks”, and that users’ sensitive data could be used to target them with advertising.
In 2023, the online therapy application BetterHelp, which provides patients with therapy from licensed therapists in the form of asynchronous and live messages, faced a hefty US Federal Trade Commission penalty after sharing sensitive user data with advertisers. BetterHelp did not respond to TECH ICONS’ questions.
Editor’s Note: BetterHelp, a subsidiary of Teladoc Health and one of the largest digital therapy platforms globally, generated over $1 billion in revenue in 2024. The company claims to serve more than 400,000 paying users and holds approximately 11% of the global online therapy market, supported by a network of over 34,000 licensed therapists. These figures underscore both the platform’s scale — and the weight of its ethical responsibilities. (Source: SEC filings, Teladoc Health FY 2024 results)
Wysa, which describes itself as an “anxiety and therapy chatbot”, did not answer TECH ICONS’ questions, but directed us to a Mozilla report that concluded that—unlike many AI therapy platforms—it doesn’t request or share individuals’ personal data.

No one to blame
The lack of regulation surrounding AI therapy—and the fact that large bureaucratic agencies are having difficulty keeping pace with the rapid advancement of publicly available AI—raises another question: who do we hold accountable in the event of psychological harm?
“This is largely uncharted territory,” Ginder tells TECH ICONS. “A user having any type of recourse is likely to be an uphill battle, especially considering the difficulty in proving causation.”
Ultimately, the idea of AI therapy is both compelling and dangerous. While LLMs can democratize access to mental health support, experts agree that these tools aren’t therapists: they’re algorithms trained on data, not empathy.
“I think the bottom line is, use at your own risk,” Ginder said.