

The Risks of AI Therapy: What You Need to Know




In recent years, artificial intelligence (AI) has emerged as a promising new frontier in mental health treatment. AI therapy apps and chatbots are increasingly being used to provide accessible, affordable support to people struggling with conditions like anxiety and depression. In fact, the global AI therapy market is expected to reach $2.1 billion by 2026, growing at a compound annual growth rate (CAGR) of 32.3% from 2020 to 2026.
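(For a sense of scale, a rough back-of-the-envelope check, assuming that 32.3% rate compounds annually over the full 2020–2026 window: $2.1\text{B} \approx V_{2020} \times (1.323)^{6} \approx V_{2020} \times 5.36$, which would put the 2020 market at roughly $0.39 billion.)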


But as with any new technology, it's important to consider the potential risks and limitations of AI therapy before diving in.



Lack of Human Connection and Empathy


One of the primary risks of AI therapy is the lack of human connection and empathy. While AI chatbots can provide helpful guidance and coping strategies, they ultimately lack the warmth, understanding, and personal touch of a human therapist. For some people, pouring out their deepest fears and insecurities to a computer program may feel impersonal and unsatisfying compared to the "real" therapy experience.


"I tried an AI therapy app out of curiosity, but it felt strange to be sharing such personal things with what is essentially a robot," says Sarah K., 32. "I missed being able to read the therapist's body language and facial expressions. And some of the advice felt generic, like it wasn't really tailored to my specific situation. With my human therapist, I feel that personal connection and that she really 'gets' me."



Potential for Misinterpretation and Inappropriate Advice


Another risk is the potential for AI to misinterpret information or give inappropriate advice. While AI systems are highly sophisticated, they are not infallible. If a user has a complex or atypical mental health situation, the AI may not pick up on important nuances or may offer guidance that is irrelevant or even harmful.


For example, research on AI therapy chatbots has found that:

  • The AI chatbot Woebot sometimes gave responses around suicidality that were judged to be unsafe or unempathetic by psychiatrists.

  • In a test of four AI therapy apps, the apps only responded appropriately to a suicidal user in 22% of cases.

  • In another study, AI chatbots were found to give inappropriate responses to users disclosing childhood sexual abuse 38% of the time.


It's important to note, though, that reputable AI therapy services like Woebot are transparent about the fact that they cannot provide crisis support, and they refer at-risk users to human help.


"An AI therapist should never be a complete replacement for human clinical support, especially for high-risk individuals," notes Dr. James Turner, a psychologist who studies AI applications in mental healthcare. "AI can be a helpful tool for daily emotional support, skills building, and self-guided work. But it needs to be part of a stepped-care model where humans are still involved in comprehensive assessment, treatment planning, and risk monitoring."



Privacy and Data Security Concerns


Privacy and data security are another major concern with AI mental health tools. When discussing intimate psychological details with an AI, users need to feel confident that their information will remain private and secure. But many therapy apps have vague or problematic privacy policies, and some even share data with third-party advertisers. Always read the fine print carefully before sharing sensitive information with any AI or mental health service.


  • A study of 36 mental health apps found that 92% transmitted data for advertising or analytics purposes to third parties like Facebook and Google.

  • Only 25% of mental health apps encrypt users' data.

  • 75% of mental health apps have either no privacy policy at all or only a generic policy with no specifics on how user data is handled.


Despite these risks, AI therapy can still be a beneficial resource for many people when used appropriately as part of a comprehensive mental healthcare approach. When selecting an AI therapy service, look for one that:


  • Has been developed by credible mental health experts

  • Has transparent privacy practices and clear data security measures

  • Provides clear guidance on its limitations and when to seek human support

  • Is up-front about being an AI and not a human therapist

  • Does not claim to treat serious mental illnesses like schizophrenia or bipolar disorder


At the end of the day, AI will never be able to fully replicate the magic of human-to-human therapeutic connection. But as the technology advances and is used in an ethical, evidence-based way, it has the potential to make quality mental health support more accessible for those who may face barriers with traditional therapy.


As with all tech innovations, being an informed and empowered consumer is key.



