
Dr ChatGPT will see you now: how AI could be bad for your health

Nearly four out of five people say they'd use ChatGPT to self-diagnose a medical condition. Photo: Getty Images

Analysis: Can we trust AI with our health, our hearts or our sanity? Not yet, at least; not without supervision

By Celina Caroto and Anthony Kelly, University of Limerick

It speaks like a doctor, listens like a therapist and remembers like a friend, but it's none of them. It can't tell truth from invention, empathy from imitation or comfort from control. Yet millions of us are already trusting ChatGPT with our secrets, our symptoms, and our sanity.

For two decades, Google was the world's most used doctor. Type "headache and dizziness" and it might tell you you’re dehydrated or dying. In the United States, one in three adults admit to self-diagnosing online, while one in four people in Ireland say they've misdiagnosed themselves this way, and half report feeling more anxious than reassured.


From RTÉ Radio 1's Drivetime, should you use AI for medical advice?

But there's a new doctor in the clinic. Weekly users of ChatGPT have doubled this year to 800 million and many of them are asking it what’s wrong with their bodies, their partners or their minds. This shift is dramatic. Instead of skimming web pages, people are chatting to a system that talks back, with confidence, empathy and zero awareness of its own fallibility.

And there's the danger. Unlike search results, a chatbot answers in fluent, emotionally convincing paragraphs. It remembers context, mimics tone and can sound more "human" than many humans. In one study, users even rated AI relationship advice as more empathic than trained counsellors. Nearly four out of five people now say they’d use ChatGPT to self-diagnose a medical condition.

In professional hands, the story is very different: AI is revolutionising medicine. One system detected early signs of lung cancer on CT scans nearly a year before expert radiologists could. Think of it as a surgeon's scalpel: precise and lifesaving when used by professionals, but dangerous in untrained hands. Used casually, it might deliver a lifesaving insight by chance, but more often it triggers unnecessary anxiety or steers us toward harmful, unproven actions.

The paradox deepens when AI becomes personal. Startups are building AI companions and researchers are testing AI therapists. Some users now describe their chatbot as a "friend" or "partner." For vulnerable people, that companionship can blur into dependency. Surveys show more than half of teenage boys feel more comfortable online than in the real world, and some of those "friends" pretend to be real people or licensed counsellors.

In extreme cases, those interactions have ended in tragedy. Psychologists warn of "AI psychosis": users who start believing the chatbot's fabrications, including delusional claims of being chosen for secret missions or even of being able to fly.

In medicine, too, the illusion of authority can kill. Chatbots are designed to always provide an answer, even when one doesn’t exist. They don’t know when they’re wrong; they just sound right. These confidently wrong outputs are known as "AI hallucinations". A study found that AI systems stay confident even when demonstrably incorrect. Worse, people tend to trust that confidence. In experiments, participants rated confidently wrong medical opinions from an AI as just as trustworthy as those from real doctors.


From RTÉ Radio 1's Drivetime, how ChatGPT can now be used for therapy

The design incentives also don’t help. AI companies optimise for engagement, keeping users chatting longer, which can make chatbots unnaturally agreeable. A recent update to ChatGPT made it so excessively polite and validating that users revolted. OpenAI admitted that the system had been "overly tuned to please," sometimes fuelling anger, reinforcing fears, or encouraging rash decisions. This phenomenon, known as AI sycophancy, is unsettling: a system that flatters your feelings while quietly feeding you false information.

That combination is risky: a system that sounds caring, looks competent and never admits uncertainty. That's why explainability matters. Doctors don't just give answers; they explain reasoning, uncertainty and risk. A trustworthy AI must do the same. Explainable AI makes it possible to understand the steps behind a model's decision, highlighting which parts of a scan or report, or which symptoms, most influenced a prediction. In clinical settings, this transparency helps doctors verify or challenge an AI's decision. For the public, it's the missing ingredient between helpful insight and dangerous illusion.
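To make that idea concrete, here is a minimal, hypothetical sketch in Python (not any real clinical system): a toy linear "risk" model whose per-symptom contributions can be read off directly and reported alongside the prediction. The symptom names and weights are invented purely for illustration.

import math

# Hypothetical symptom weights for a toy "risk" model (illustrative only).
WEIGHTS = {"headache": 0.4, "dizziness": 0.7, "blurred_vision": 1.1, "fatigue": 0.2}
BIAS = -1.5

def predict_with_explanation(symptoms):
    """Return a risk probability plus each symptom's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in symptoms.items()}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link turns the score into a probability
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))  # biggest influences first
    return probability, ranked

prob, ranked = predict_with_explanation(
    {"headache": 1, "dizziness": 1, "blurred_vision": 0, "fatigue": 1}
)
print(f"estimated risk: {prob:.2f}")
for symptom, contribution in ranked:
    print(f"  {symptom:>14}: {contribution:+.2f}")

Real explainability methods do the analogous thing for far more complex models, for example by highlighting the regions of a scan that most influenced a prediction, as described above.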

The rise of Dr ChatGPT raises a fundamental question: who should be holding the scalpel? In professional hands, AI can help detect cancer early, triage patients faster and even support mental healthcare where human resources are stretched thin. In casual use, it can become a mirror for anxiety, bias, and loneliness, one that speaks back with dangerous confidence.


AI is not inherently reckless. It learns from the data and incentives we give it. If we train it to value accuracy, transparency and human oversight, it can strengthen healthcare. But right now, publicly available systems are optimised for fluency and friendliness, not truth. That's why explainability and responsible deployment matter far more than hype.

So, can we trust AI with our health, our hearts or our sanity? Not yet, at least; not without supervision. AI is a powerful tool for learning, self-reflection and quick information, but it still doesn't know its own limits. Until these systems learn to value uncertainty as much as accuracy, they should remain what they are: tools to assist us, not replace us, and never the primary source of truth.


Celina Caroto is a PhD student in the Department of Computer Science & Information Systems at the University of Limerick. Dr Anthony Kelly is a Researcher in Mental Health and Artificial Intelligence in the Department of Computer Science & Information Systems at the University of Limerick. His research is funded by Innovation Fund Denmark.


The views expressed here are those of the authors and do not represent or reflect the views of RTÉ