A Therapist in Every Pocket: Why Access Isn’t Everything

In March 2025, researchers published the results of the first randomized controlled trial of an “expert-fine-tuned” generative artificial intelligence (AI)-powered chatbot for mental health treatment.[1] The researchers behind Therabot claim their study provides preliminary evidence that the chatbot can be effective at treating symptoms of major depressive and generalized anxiety disorders. Therabot joins a growing list of AI-based interventions designed to treat mental illness in adults and children, several of which are now FDA-approved.[2,3] Digital applications for mental health care range from mood trackers to gamified cognitive behavioral therapy to chatbots like Therabot that emulate human therapists.

Therapy chatbots rely on large language models—a form of AI that uses deep learning techniques, specifically transformer architectures, to analyze patterns in language and generate coherent, contextually relevant responses that simulate conversation with a human therapist.[4] These models range from simple chatbots that respond only to the comments made within a single session to more sophisticated systems that “remember” patients through retrieval-augmented generation techniques. Some therapy bots supplement their conversations with reflection or mindfulness-based exercises that mirror the homework a human therapist might assign between sessions. Given the significant barriers to accessing mental health care, these technologies have been hailed as a way to address the current mental health crisis, both by expanding access and by offering more personalized—and hence higher quality—care to those who seek it.[5,6]
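To make the “memory” mechanism concrete, here is a minimal, purely illustrative sketch of retrieval-augmented generation: notes from past sessions are stored, the most relevant ones are retrieved for each new message, and they are prepended to the model’s prompt. The bag-of-words “embedding” below is a crude stand-in for the learned embedding models and vector databases real systems use, and none of the names or notes come from Therabot or any actual product.

```python
# Illustrative sketch of retrieval-augmented "memory" for a therapy chatbot.
# Hypothetical and simplified: real systems use learned embeddings and
# vector databases; a bag-of-words vector stands in for both here.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Crude stand-in for a learned text embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Notes saved from earlier sessions act as the bot's long-term memory.
session_notes = [
    "Patient reports work stress and trouble sleeping.",
    "Patient practiced the breathing exercise and found it helpful.",
    "Patient mentioned an upcoming family visit causing anxiety.",
]

def build_prompt(user_message: str, k: int = 2) -> str:
    """Retrieve the k most relevant past notes and prepend them to the prompt."""
    query = embed(user_message)
    ranked = sorted(session_notes, key=lambda n: cosine(query, embed(n)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Relevant history:\n{context}\n\nPatient: {user_message}\nTherapist:"

print(build_prompt("I'm still feeling anxious about my family visiting."))
```

The design point worth noticing is that the language model itself is stateless; any continuity a patient experiences across sessions comes entirely from what gets retrieved and inserted into the prompt.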

Reactions to therapy bots range from “finally, something that will work for me,” to “who cares if it’s a person or a robot?” to mild discomfort or downright horror at the prospect.[7] We often want to be able to declare technologies, including clinical technologies like Therabot, either “good” or “bad.” But thinking through the ethics of such technologies is complicated because, more often than not, these tools are neither wholly good nor wholly bad, or perhaps a little of both. It all depends on how we use them.

Beyond concerns about potential harms to users and about whether—at this early stage of development and deployment—it is possible to market these tools in a way that respects people’s autonomy, therapy bots raise ethical questions about social relationships, control and atrophy, bias, explainability, transparency, data security, privacy, and responsibility and accountability, to name only a few of the issues raised in the existing literature.[8-11] Two ethical issues that have received limited attention in the debate over therapy bots are the potentially perverse incentives driving app development and the way an app-based market reinforces individualistic notions of health responsibility.

The digital therapeutics industry is worth an estimated $7.8 billion USD.[12] In 2020 alone, over 90,000 new digital health apps were released.[13] Approximately 20,000 of those are for mental health.[14,15] Clicks drive downloads, and downloads drive success. More downloads mean more users, which can mean more support for more people and more therapeutic “success.” But it can also mean more market success for app developers and those who partner with them: more industry attention, more sponsorship and investment deals, more invitations to speak in a variety of forums. Perhaps this is the trade-off: succumbing to market-driven models of healthcare delivery if it means more people can access the beneficial interventions they need. Some individuals experience a decrease in depressive and anxiety symptoms because of therapy bots, but at what cost? And who really benefits (here, profits) most from the “app-ification” and related commodification of mental health support?

The migration of mental health care into the digital marketplace also risks reinforcing the idea that individuals qua consumers, armed with easy access to relatively low-cost, personalized interventions, bear the sole or primary responsibility for taking control of their mental health. Attitudes towards health in general, and towards depression and anxiety in particular, are already highly moralized. Models of health responsibility premised on the idea that individuals are, and should be, responsible for what are perceived as failures to manage preventable, socially undesirable conditions (think of how we view fatness or addiction) divert attention away from the broader forces and structures that contribute to such conditions and the stigma attached to them. If it’s as simple as jumping on your phone for a quick session with your therabot between meetings or while you wait for your kids’ soccer practice to wrap up, why haven’t you done it already? Why are you still feeling so down and stressed out?

At the risk of contributing to the individualistic model of responsibility I just criticized, I want to remind those who choose to use therapy bots that it’s okay to still struggle with your mental health. Even if it only takes a couple of taps on a phone screen, a session with a therabot may not be the right approach or best “fit” for everyone. And it certainly won’t address the factors contributing to the rise in reported depression, anxiety, and stress within society. Improving individual access to interventions is only one part of creating supportive, sustainable systems of mental health care. It’s not all on you.


