By Tammie Rojas, Clinical Director, Enterhealth
AI chatbots are everywhere, and some are even being marketed as “virtual therapists.” Let me be clear: AI is not therapy. At best, it’s a tool—like a very sophisticated Google search. If you want quick information, a relaxation exercise, or some canned language to set boundaries, a chatbot can deliver. In that sense, spending any time thinking about your mental health is better than nothing.
But the danger comes when people confuse these tools with real care. That’s when the risks start to outweigh the convenience.
Why People Turn to AI Therapy in the First Place
It’s easy to understand why these apps are appealing. They’re always available, they don’t judge, and they can provide answers instantly. For simple, surface-level strategies, they might be fine.
But let’s not mistake convenience for care. These systems are designed to keep you engaged, not to challenge you. They validate, they agree, they encourage more scrolling and chatting. That feels good in the moment, but it isn’t therapy.
A real therapist won’t always tell you what you want to hear. We challenge irrational thoughts, push you to build healthier relationships, and notice subtle shifts that may indicate something more serious. A chatbot can’t do any of that.
And when people treat AI like a therapist, the results can be devastating. The Wall Street Journal recently reported how extended chatbot use can trigger delusional spirals, especially in vulnerable users [1]. And The New York Times has documented heartbreaking cases where chatbots failed to recognize escalating suicide risk [2, 3]. These are not small oversights—they're life-or-death failures.
The Clinical Risks I See with AI Therapy
Therapy is about more than comfort. It’s about helping clients grow, manage relationships, and move through emotional pain in a healthy way. That requires intuition, trust, and accountability.
Here’s what concerns me most about chatbot-based AI therapy:
- They can’t recognize crisis. Therapists notice patterns and changes that aren’t spoken out loud—things a machine won’t catch.
- They don’t understand context. A chatbot doesn’t know your history, your baseline mood, or how your functioning has shifted over time.
- They aren’t accountable. Therapists are legally and ethically bound to patient safety. Tech companies are not. As the American Psychological Association has warned in discussions with federal regulators, chatbot platforms operate without the same safeguards or oversight as licensed clinicians [4]. I fully expect we’ll soon see liability waivers buried in their terms of service. At that point, it will be “use at your own risk.”
- They can’t account for fit. Not all therapy works the same way for all people. Clinicians are trained in multiple modalities and adapt care based on the individual. Chatbots don’t make those distinctions, which means people may end up with advice that simply doesn’t work for them, or that actively makes things worse.
Who’s Most at Risk?
In my opinion, no one should depend on AI for their mental health journey. But there are groups who are especially vulnerable to harm if they do:
- People with depression or anxiety, who may already be inclined to isolate
- Those with OCD or agoraphobia, where reliance on screens can deepen avoidance
- Individuals with psychotic disorders, who can be pushed further into delusion
- People with personality-related disorders, who need structured, consistent, human engagement
These are situations where human clinicians are not optional. We connect patients to real communities, involve family members, and coordinate broader care—things no app can replicate.
When to Step Away from the Screen
If you’re relying on AI for support, there are warning signs that it’s time to get professional help instead:
- Your symptoms are getting worse despite using the app
- You’ve started withdrawing from responsibilities or relationships
- You feel like the chatbot “gets” you more than real people do
- You’re experiencing hopelessness or suicidal thoughts
No algorithm is going to keep you safe in those moments. A trained clinician will.
Where AI Could Add Value & Where It Can’t
I’m not anti-technology. There are ways AI can make a difference, but only in the right context.
- Predictive analytics. A recent systematic review in JMIR Mental Health noted that while AI tools show limits in handling complex cases, they may play a role in supporting clinicians through tasks like relapse prediction and monitoring [5].
- Skill reinforcement. If you’re working on emotional regulation, an app might walk you through a “wise mind” exercise in the moment when your therapist isn’t available.
- Back-office support. Behind the scenes, AI might streamline documentation, program building, or scheduling, freeing up more time for clinicians to work directly with patients.
The pattern is clear: AI can add value around therapy, but it can’t replace it.
That’s why, when we use tools at Enterhealth, they’re always in service of a broader plan—one designed by real people. Our multidisciplinary team comes to a consensus on diagnosis, builds a tailored treatment strategy, and coordinates care across disciplines. No app can do that. At best, AI can reinforce the work that’s already happening between a patient and their therapist.
Equity & Access: A Bigger Problem
One of my biggest worries is the impact on underserved communities.
AI chatbots are cheap and accessible, which makes them appealing to people who can’t easily access traditional care. But that convenience comes at a cost. NPR has reported that underserved populations, whether due to geography, income, or race, are the ones most likely to turn to AI tools [6]. Meanwhile, people with more resources will continue to see therapists, psychiatrists, and comprehensive treatment teams.
That deepens the stratification of healthcare: some people get quality, accountable human care, while others get algorithmic engagement. If our goal is to improve mental health outcomes across populations, this isn’t a solution; it’s a step backwards.
My Bottom Line
AI isn’t going anywhere, and I don’t think it should. There are ways it can support clinicians and patients if it’s used responsibly. But let’s stop pretending it’s therapy.
Convenience is not care. Algorithms don’t notice when your smile no longer reaches your eyes. They don’t push you when you need to be challenged, and they don’t keep you safe in a crisis.
At Enterhealth, we believe recovery requires people. Technology may support that process, but it can never replace the presence, accountability, and expertise of a trained therapist.
That’s the reality—and we shouldn’t be afraid to say it.
SOURCES:
1. Schechner, S., & Kessler, S. (2025, August 7). “I feel like I’m going crazy”: ChatGPT fuels delusional spirals. The Wall Street Journal. https://www.wsj.com/tech/ai/i-feel-like-im-going-crazy-chatgpt-fuels-delusional-spirals-ae5a51fc
2. Reiley, L. (2025, August 18). Op-ed: Chat GPT and mental health — suicide risks. The New York Times. https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html
3. Hill, K. (2025, August 26). ChatGPT, OpenAI & suicide: A deeper look. The New York Times – Technology. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
4. Abrams, Z. (2025, March 12). Using generic AI chatbots for mental health support – when are they dangerous? APA Services. https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
5. Wang, L., Bhanushali, T., Huang, Z., Yang, J., Badami, S., & Hightow-Weidman, L. (2025). Evaluating generative AI in mental health: Systematic review of capabilities and limitations. JMIR Mental Health, 1, e70014. https://mental.jmir.org/2025/1/e70014
6. Riddle, K. (2025, April 7). Artificial intelligence & mental health therapy: What’s working and what’s not? NPR Shots – Health News. https://www.npr.org/sections/shots-health-news/2025/04/07/nx-s1-5351312/artificial-intelligence-mental-health-therapy