Analysis: The rising popularity of AI chatbots and therapeutic apps for mental health support poses as many questions as answers
Picture having your very own personalised therapist right in your pocket, ready to support you through every challenge, meltdown, and crisis—whenever and wherever you need them. They would be affordable and easy to access, eliminating concerns about therapy costs or long waits to see a therapist. Does this sound too good to be true?
Traditional psychotherapy is an evidence-based practice often referred to as the 'talking cure'. It typically takes place in person, with the therapist using conversation to help individuals express their thoughts and feelings, assisting their self-understanding and healing. But smartphones and other devices now carry AI technology such as ChatGPT that can mimic human conversation, and some of it is marketed as a digital route to therapeutic support.
There has been a rise in the popularity of therapeutic apps such as MindShift, which can be used to monitor thoughts and moods and to set goals around diet, weight control, exercise and body image. With continued advances in AI, one might wonder whether these apps could serve as a sustainable substitute for the skills of a human psychotherapist.
From Vice News, AI won't replace therapy (yet)
While AI can prove beneficial for some, psychotherapists are now often using AI tools to supplement their work, deploying ChatGPT as a component of treatment to carry out an assessment and learn more about the client. The therapist enters the client's details, such as their sex, age and psychological issues, and the chatbot responds with a treatment plan that gives the therapist a way of interpreting and working with the presenting symptoms.
The philosopher Noam Chomsky believes ChatGPT is thinking for us and making us lazy, describing it as "basically high-tech plagiarism" and "a way of avoiding learning." Some of the software requires considerable human oversight, as the algorithms have been known to spread misinformation. This can amplify human biases and worsen inequalities by misdiagnosing gender and ethnic minorities. OpenAI, the creator of ChatGPT, currently acknowledges that it makes reasoning errors and is "still not fully reliable" because it "creates" facts.
Well-known chatbot platforms like Character.ai have no human supervision at all. The platform offers a selection of avatars, and users can create chatbots based on fictional or real people such as Harry Potter, Elon Musk, Beyoncé, Super Mario and Vladimir Putin.
From KEI Network, Sigmund Freud's simulated psychoanalysis of ChatGPT
One of the more popular programmes on Character.ai is "Psychologist," which claims to help users with life difficulties. There is even an option to consult the great psychoanalyst Sigmund Freud, represented by a cartoon avatar that the user clicks on.
I decided to test out this app and inputted a recent dream I had. While I was impressed by how quickly 'Cyber Sigmund' replied to the dream content, I was ultimately disappointed. The response read more like regurgitated generic information, or a Wikipedia entry, than anything Freud himself wrote.
A growing number of people are using this type of technology for mental health support. These avatars can be accessed at any time, are cheaper and can speedily transmit information. But does this not miss the benefits of what real psychotherapy can offer?
From BBC News, mental health services around the world are chronically under-resourced – but there are hopes that chatbots might offer a solution
Traditional psychotherapy provides a supportive space for clients to slow down and reflect. A real therapist can explore a client's dreams from the client's own perspective, not via some external source synchronised into a system of what Yanis Varoufakis calls cloud capitalism. A further drawback to these apps is that clients may be hesitant to speak truthfully and share sensitive information online.
Instead of truly alleviating suffering, these AI applications may actually cause harm. Therapeutic chatbots claim to provide a market of solutions for distress, but consider the kind of unsupervised feedback a Musk avatar could give to someone who is already struggling. Musk, the owner of social media platform X and current sidekick to US president Donald Trump, frequently makes racist and misogynistic comments that stir up far-right groups across Europe. He was recently referred to as the "Pablo Escobar of toxic disinformation" by Irish Times columnist Fintan O'Toole.
Some big tech professionals, such as Mo Gawdat, maintain that AI will be one billion times more intelligent than humans by the year 2050. But this simulation technology is not designed to replicate biological intelligence; it replicates the logic of labour, with the goal of profit for Big Tech corporations and shareholders. These apps are deployed under the guise of care, but their aim is to steer the consuming behaviour of users by gathering information on a person and then exposing them to ads.
Although such technologies provide convenience and engagement, they risk isolating individuals and deepening their dependence on technology, raising important questions about their long-term impact on mental health. This undermines the genuine human connection that traditional psychotherapy fosters to promote human potential and personal growth.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ. If you have been affected by issues raised in this article, support information is available online