Analysis: The answer isn't just about how AI works, but how our brains perceive risk and trust
By Paul Jones, Aston University
From ChatGPT crafting emails, to AI systems recommending TV shows and even helping diagnose disease, the presence of machine intelligence in everyday life is no longer science fiction. Yet, for all the promises of speed, accuracy and optimisation, there's a lingering discomfort. Some people love using AI tools. Others feel anxious, suspicious, even betrayed by them. Why?
The answer isn't just about how AI works, but about how we work. We don't understand it, so we don't trust it. Human beings are more likely to trust systems they understand. Traditional tools feel familiar: you turn a key, and a car starts. You press a button, and a lift arrives.
From RTÉ Radio 1's News At One, is the AI bubble about to burst?
But many AI systems operate as black boxes: you type something in, and a decision appears. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to interrogate decisions. When we can't, we feel disempowered.
This is one reason for what's called algorithm aversion, a term popularised by the marketing researcher Berkeley Dietvorst and colleagues, whose research showed that people often prefer flawed human judgement over algorithmic decision-making, particularly after witnessing even a single algorithmic error.
We know, rationally, that AI systems don't have emotions or agendas, but that doesn't stop us from projecting them onto these systems. When ChatGPT responds "too politely", some users find it eerie. When a recommendation engine gets a little too accurate, it feels intrusive. We begin to suspect manipulation, even though the system has no self.
From RTÉ Radio 1's Drivetime, why China's new AI app is a game-changer globally
This is a form of anthropomorphism: attributing humanlike intentions to nonhuman systems. Professors of communication Clifford Nass and Byron Reeves, along with others, have demonstrated that we respond socially to machines, even when we know they're not human.
We hate when AI gets it wrong
One curious finding from behavioural science is that we are often more forgiving of human error than machine error. When a human makes a mistake, we understand it. We might even empathise. But when an algorithm makes a mistake, especially if it was pitched as objective or data-driven, we feel betrayed.
This links to research on expectation violation: when our assumptions about how something "should" behave are disrupted, the result is discomfort and a loss of trust. We trust machines to be logical and impartial. So when they fail, such as misclassifying an image, delivering biased outputs or recommending something wildly inappropriate, our reaction is sharper. We expected more. The irony? Humans make flawed decisions all the time. But at least we can ask them "why?"
From RTÉ Radio 1's The Business, what are the jobs that AI will not replace?
For some, AI isn't just unfamiliar, it's existentially unsettling. Teachers, writers, lawyers and designers are suddenly confronting tools that replicate parts of their work. This isn't just about automation, it's about what makes our skills valuable, and what it means to be human.
This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one's expertise or uniqueness is being diminished. The result? Resistance, defensiveness or outright dismissal of the technology. Distrust, in this case, is not a bug – it's a psychological defence mechanism.
Craving emotional cues
Human trust is built on more than logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It might be fluent, even charming. But it doesn't reassure us the way another person can.
This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, but something feels off. That emotional absence can be interpreted as coldness, or even deceit.
From RTÉ News' Behind the Story, can you trust everything you see online?
In a world full of deepfakes and algorithmic decisions, that missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don't know how to feel about it.
It's important to say: not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce bias, especially in areas like recruitment, policing and credit scoring. If you've been harmed or disadvantaged by data systems before, you're not being paranoid, you're being cautious.
This links to a broader psychological idea: learned distrust. When institutions or systems repeatedly fail certain groups, scepticism becomes not only reasonable, but protective.
Telling people to "trust the system" rarely works. Trust must be earned. That means designing AI tools that are transparent, interrogable and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box, and more like a conversation we're invited to join.
Paul Jones is Associate Dean for Education and Student Experience at Aston Business School, Aston University. This article was originally published by The Conversation.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ