The hidden costs for your brain from using ChatGPT

'As with mobile phones, the benefits of large language models like ChatGPT come with hidden costs.' Photo: Getty Images

Analysis: With the rise and widespread use of large language models, it seems prudent to ask if their use will ultimately diminish our cognitive capabilities

Technology has always shaped how we think, behave and interact with the world, but the pace of this change has accelerated in recent decades. Each new device promises to make life easier – and often it does – but it also changes us in subtle ways.

Take the mobile phone, for example. What began as a tool for communication has become an extension of ourselves, helping to organise our busy schedules, navigate our journeys and answer any questions we may have, to the point that Google has become our reflexive solution to any problem.

But such convenience comes at a cost. As our devices remember birthdays, phone numbers and appointment dates, the mental effort once spent recalling this information has been outsourced, with the unwanted byproduct of weakening our memories.

A similar trade-off – convenience at the expense of cognition – is now emerging with the rise of large language models (LLMs). These systems, powered by vast datasets and deep learning algorithms, can generate text that reads as though written by a human, answering questions, explaining complex ideas and even offering advice with striking confidence. OpenAI's ChatGPT is the best-known example of such a model, with an estimated 800 million active users. Unlike earlier technologies that merely stored or retrieved information for us, LLMs go a step further: they help us to think – or at least they appear to.

Because these models can deliver rapid and seemingly accurate responses to almost any prompt, they are becoming woven into our daily work and personal lives. From summarising articles to providing recommendations to drafting emails, large language models can streamline a range of everyday tasks, improving human efficiency and productivity. In doing so, they free us from much of the mental burden that fills our routines by taking on the small decisions and cognitive chores that once demanded our attention.

But, as with mobile phones, the benefits of LLMs come with hidden costs. Philosopher Richard Heersmink has warned that reliance on these systems may have unintended negative consequences for how we think and write. Because they can assist with such a variety of tasks – writing emails, generating ideas, researching assignments – over-relying on LLMs to complete them may inadvertently weaken the very abilities those tasks exercise. Just as a muscle weakens when it's no longer exercised, cognitive skills such as critical thinking and decision-making can fade when we no longer engage in these tasks.

Heersmink likens these concerns to those first voiced by the Ancient Greek philosophers. When writing first emerged, philosophers such as Socrates and Plato worried that committing ideas to writing would weaken our ability to remember things. Similar worries persist today: research has shown how calculators have worsened our mental arithmetic skills over time, while GPS navigation devices have impaired our spatial memory and our ability to navigate without assistance.

As LLMs now begin to transform how we read, write and reason, it seems prudent to ask whether their use will ultimately diminish our cognitive capabilities. A recent study from Massachusetts Institute of Technology’s Media Lab offers an early indication of what this trade-off might look like.

In the study, college students who used ChatGPT to assist with researching and writing an essay showed lower brain activity than those who were unassisted or relied on standard search engines. Although participants found that using an LLM was more convenient and helpful when completing the assignment, the LLM users were less likely to question or critically evaluate its output, an effect that also carried over to the four-month follow-up period. Additionally, when later asked to recite a quote from their essay, those who had used LLMs to help write it were less likely to recall a correct quote than the other groups, suggesting that outsourcing the work reduced how deeply they processed and internalised the material.

The impact of LLMs on human cognition has thus far been felt most acutely in higher education. Yet not everyone views their arrival as a threat. Prof Orla Shiels from Trinity College Dublin argues that universities should embrace AI tools like ChatGPT, which have the potential to make education more accessible and inclusive.

To guard against the risks these systems pose to our critical thinking skills, Shiels calls for clear policies on how AI should be used and acknowledged in academic work. Thoughtfully integrating such policies, she suggests, could help universities balance innovation with integrity, encouraging students to use AI as a tool for learning rather than a shortcut for thinking. Ultimately, the goal should not be to shield students from AI, but to equip them with the ability to think critically and independently alongside it.

We are at a crossroads for the future of AI and LLMs. These models are becoming increasingly widespread, and their capabilities improve with each iteration. However, their rise has prompted understandable scepticism about their impact on our own cognitive skills. Just as mobile phones have expanded our access to information while subtly eroding our memory, LLMs offer extraordinary benefits but pose similar cognitive risks. They have the capacity to expand our reach to information and make our work more efficient, but only if we learn to use them wisely.

Establishing clear norms and responsible use guidelines will be essential if we want to welcome this new technological era without surrendering our ability to think deeply and critically about the world on our own terms. The challenge should not be to resist new technologies, but to ensure that they augment our minds rather than gradually replace our cognitive capacities.


The views expressed here are those of the author and do not represent or reflect the views of RTÉ