Analysis: Human resources without the human means we are in danger of automating ourselves into irrelevance with AI-mediated processes
An employee working in HR uses Microsoft Copilot to write a job description. A prospective job candidate uses ChatGPT to craft the perfect CV and application in response. Another AI platform, such as PeopleGPT, screens the application. No human has meaningfully touched the process, yet, at the same time, 64% of employers report difficulty finding the skills they need.
If this sounds absurd, it should be. The pace of adoption has outpaced any serious interrogation of its consequences. These AI-mediated processes are blind to the very human capabilities they claim to identify. We are in danger of automating ourselves into irrelevance, building systems that look efficient on the surface but cannot recognise the human capability they were designed to find. Welcome to human resources without the human.
The great disappearing act
A recent analysis by the Economist of 300,000 companies found that firms adopting AI hired 7.7% fewer junior employees over an 18-month period. According to Stanford's Digital Economy Lab, entry-level hiring in "AI-exposed jobs" (roles where tasks can be automated using large language models) has decreased by 13% since the emergence of ChatGPT. Companies are likewise posting 15% fewer internships than they were two years ago. Marc Benioff claims AI handles 30 to 50% of Salesforce's workload, while IBM expects to automate 7,800 administrative positions.
From RTÉ Brainstorm, is it really bonkers to use an algorithm to hire a person?
But this is not just about job losses. Entry-level positions serve as training grounds where future leaders learn judgment and come to understand organisational complexity. When we eliminate these roles, we sever the pipeline of expertise. The skills being lost, such as business strategy, strategic planning and operations management, cannot be learned from books. They require hands-on experience and mentorship. By removing entry-level positions, we create a future crisis in which tomorrow's senior leaders have nowhere to learn their craft.
The problem with algorithmic management
The promise of algorithmic management is irresistible to executives: perfect efficiency, objective decision-making, and dramatic cost savings. In Ireland, eight out of 10 employers report using AI in recruitment. Globally, AI adoption has increased primarily in white-collar roles. According to Gallup, 27% of white-collar employees frequently use AI at work, almost double the figure from 2024. Over a third of employers electronically monitor their workforce through desktop surveillance, biometric badges, and location tracking.
The appetite for surveillance varies dramatically by geography. An OECD survey of over 6,000 firms found that 55% of US employers monitor the content and tone of employee communications, compared with just 6% in Europe and 8% in Japan. The difference is not technological capability but regulatory regimes and management culture. What American employers treat as standard practice remains a policy choice elsewhere.
From RTÉ Radio 1's Drivetime, is AI unfairly screening job applications?
The financial logic appears sound. Amazon calculates it will save 30 cents per package by avoiding 160,000 hires through automation by 2027. CFOs particularly favour investments in software like AI because they can be depreciated as capital assets over years, while employee training costs must be expensed immediately, reducing quarterly profits. Algorithms never request raises, never take sick leave and never quit.
But implementing these systems reveals a stark paradox. Take scheduling algorithms, now used by almost half of US companies to optimise shift patterns. The promise was reduced labour costs and improved efficiency. A study of optimisation approaches in scheduling found that algorithmic scheduling increased employee turnover and associated turnover costs while adding nothing to performance outcomes. Why? Because human managers create schedules that account for employees' lives: childcare needs, transport limitations and second jobs. Algorithms optimise for coverage, not retention.
Surveillance technology reveals similar contradictions. Farm workers wear bracelets monitoring picking speed. Warehouse workers operate under body heat sensors. But rather than improving productivity, this surveillance creates "malicious compliance", where workers follow algorithmic instructions precisely, even when they know it damages quality or efficiency. The human judgment that once corrected for system errors simply disappears.
From RTÉ Radio 1's Oliver Callan Show, should you use AI to write your CV?
When HR loses its way
When HR becomes untethered from developing people, it transforms into a compliance machine processing metrics without understanding meaning. While employee engagement and retention remain the most common HR measurements, only half of organisations track skills development, and just one-third monitor internal mobility. This gap reveals a troubling disconnect: organisations measure what is easy to quantify rather than what actually matters. Internal mobility and career progression are arguably the true indicators of whether HR creates value, yet they remain largely unmeasured.
AI's promise of data-driven insights often creates a measurement mirage: the illusion that precise measurement equals complete understanding. Organisations celebrate engagement metrics even as performance stagnates. They count training completions while ignoring whether anyone learned anything. When HR loses sight of its core purpose of developing human capability, it becomes vulnerable to the very automation it champions.
What happens when we remove human judgement?
The answer is visible in platform work, where workers navigate systems with no managers, no HR department, just algorithms that track, rate, and potentially deactivate them without explanation. But this algorithmic management increasingly extends beyond gig work. The retail employee whose shifts are algorithmically determined, the office worker under keystroke surveillance and the call centre agent whose every word is AI-analysed all experience the same erosion of human connection in their work lives.
From RTÉ Radio 1's Drivetime, could ChatGPT lead to a four day working week?
The consequences of removing human judgment become clear in high-profile failures. Amazon's hiring algorithm gave higher scores to men because Amazon managers had historically favoured male employees. The algorithm did not eliminate bias, but codified and amplified it. The human touchpoints we are increasingly eliminating served purposes beyond their immediate function. The hiring manager who spotted potential beyond credentials; the HR professional who noticed someone struggling: these were the human infrastructure that made organisations work.
Why HR is having an ethical crisis
This transformation places HR at the centre of an ethical crisis. The EU AI Act requires HR professionals to assess AI risks and maintain ethical standards or potentially face fines of up to €35 million or 7% of turnover. But who in HR can evaluate whether an algorithmic system violates human dignity? Who determines when surveillance becomes psychological control?
AI systems can hallucinate, generating false information that violates labour laws. When Air Canada's chatbot gave incorrect advice, courts held the company liable. Meanwhile, a form of shadow IT proliferates as employees independently use personal AI tools, such as ChatGPT for performance reviews, without organisational oversight or approval. HR is tasked with governing technologies it neither controls nor understands.
From RTÉ Brainstorm, why workers don't like their HR departments
Organisations pursue Environmental, Social and Governance certifications while deploying technologies with unexamined environmental costs. Training a single large AI model produces carbon equivalent to five cars over their lifetimes, while ChatGPT's daily operation consumes energy equivalent to tens of thousands of households. A paradox emerges: employees see AI tools as beneficial but don't trust management not to abuse them. If HR cannot articulate the moral framework governing workplace technologies, what purpose does the function serve?
'HR stands at a crossroads'
The systematic removal of human judgment from Human Resources represents a choice, not an inevitability. Every algorithm deployed, every touchpoint eliminated, every junior position erased reflects decisions made by people who could have chosen differently. Sociologist Max Weber warned of the iron cage of rationality, where bureaucratic systems become ends in themselves. We are building that cage now, convinced that perfect measurement equals perfect management.
But organisations are not machines to be optimised. They are human communities where capabilities are developed. What do we believe work is for? If it exists purely for profit through maximum efficiency, the trajectory is clear. But if work is where people develop capabilities and participate in collective endeavours, then we must confront our complicity in building systems that make such participation impossible.
HR stands at this crossroads. By implementing technologies without interrogating implications, by measuring engagement while ignoring development, by processing compliance and abdicating ethical responsibility, HR becomes complicit in constructing a future where its own purpose dissolves. The function that should defend human capability has become the mechanism through which that capability is eliminated.
We can automate away entry-level positions and wonder why we lack skilled leaders in ten years, or we can use technology to create new pathways for human development. This is not a puzzle or a paradox. It is the logical endpoint of organisations viewing workers as costs to be minimised rather than capabilities to be developed. The question is whether HR will continue to be the instrument of this logic, or whether it will remember why the H in HR supposedly matters.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ