Why every click you make at work could now be training your replacement

Employers are cutting human roles while pouring billions into the AI tools that automate them, and using the workforce that remains to make those tools more capable. (Photo by Marek Levák on Unsplash)

Analysis: AI technology is moving faster than regulation and the power imbalance favours the employer, yet the coordinated use of rights can help workers push back

"Dystopian". That was the word multiple Meta employees reached for in internal messages last week, after the company began rolling out a new tracking tool on US staff laptops. According to Reuters and CNBC, software called the Model Capability Initiative captures their mouse movements, clicks, keystrokes and periodic screenshots as they work across hundreds of websites, including Google, LinkedIn, Wikipedia, GitHub and Slack. US staff cannot opt out. Concerns raised in internal forums include the accidental capture of passwords, immigration status, and personal health information.

Surveillance at work is nothing new. Henry Ford had stopwatch-wielding supervisors timing his Highland Park workers to a tenth of a second in 1913, and the methods have multiplied ever since, from call centre listening in the 1990s to the algorithmic tracking of taxi drivers and food delivery workers today. What sets Meta apart is the purpose behind this. The data is being collected to train AI agents that can then carry out white-collar work on their own, which means the workers being watched are training the very systems built to replace them.

The timing underscores this point. In the same week the surveillance story broke, Meta announced 8,000 job cuts as part of more than 20,000 layoffs across the tech sector, all framed as efficiency gains needed to fund continued investment in AI. Employers are cutting human roles while pouring billions into the AI tools that automate them, and using the workforce that remains to make those tools more capable. Every click, every keystroke, every screenshot is now raw material for the systems being built to take the work over.


From RTÉ Radio 1's Oliver Callan, Dex Hunter-Torricke on the good, the bad, the ugly of AI

Why is workplace surveillance becoming more prevalent?

The shift to remote and hybrid work after 2020 fuelled an explosion in workplace surveillance. Microsoft's Productivity Score, Hubstaff, Teramind, Veriato and dozens of similar vendors offer "boss-ware" that tracks active hours, keystrokes per minute, idle time, application use, and screenshots taken at random intervals. A 2023 Resume Builder survey found that 96% of US firms with remote staff used some form of monitoring software. In the UK, the Trades Union Congress reported that three in five workers had been subject to surveillance in their current or most recent job.

The economic logic is hard to argue with. A licence for a piece of monitoring software costs a fraction of a middle manager's salary. But a second, more powerful incentive has now arrived in the form of an AI training data shortage. The major AI labs have already trained their models on most of the text the public internet has to offer. Research suggests that the supply of high-quality data is nearly exhausted, which is why behavioural data, captured live from real workers, has become the new target. As Meta's spokesperson put it, models trying to "complete everyday tasks using computers" need real examples of how people actually use them.

Meta is not alone. OpenAI has reportedly asked contractors, via training-data partner Handshake AI, to upload real workplace files from past and current jobs, including Word documents, PowerPoint decks, Excel spreadsheets and code repositories. Scale AI, in which Meta took a 49% stake in June 2025 for $14.3 billion, is blunter still.


From RTÉ Radio 1's Saturday with Cormac Ó Headhra, how worried should we be about AI replacing jobs?

Through its Outlier platform, the company has assembled more than 700,000 contributors with master's degrees and PhDs across medicine, law, software engineering, mathematics and the sciences. Its medicine projects, for example, pay up to $120 an hour for clinicians to write diagnostic questions and rank model outputs against their own clinical reasoning, and vendors in the same market reportedly pay medical fellows up to $450 an hour, lawyers up to $130 an hour, and senior executives over $500 an hour for the equivalent in their fields.

Not just a Silicon Valley problem

This is not just a Silicon Valley story. Algorithmic monitoring is already routine for warehouse, delivery and platform workers in Ireland, where research I have done with colleagues shows how performance data is used to ratchet up control over the pace, route and conduct of work. Regulators have started to push back. France's data protection authority fined Amazon €32 million for an "excessively intrusive" monitoring system, after parallel complaints from unions in Ireland, Germany, Austria and Spain.

The relevant point for Irish office workers is that the vendors who built those tools for warehouses are now selling them into financial services, healthcare and the public sector, and they are arriving in offices that have likely never thought of themselves as surveillance environments.


From RTÉ Radio 1's Drivetime, Taylor Swift protects her voice from AI

How can employees push back?

Resistance is harder than it sounds. The technology is moving faster than regulation, the power imbalance favours the employer, and most workers cannot afford to be the test case. What tends to work is the coordinated use of rights that already exist.

Asking the right questions is a useful first step. In Ireland, GDPR gives workers the right to know what data is being collected about them, why, and for how long, and the Data Protection Commission's employer guidance confirms that the DPC will ask to see the relevant Data Protection Impact Assessment if a complaint is later made. A Subject Access Request, made through HR or directly to the data protection officer, forces an employer to disclose what is being collected and on what legal basis. Where it gets refused or fudged, that refusal itself becomes evidence.

Collective channels matter more. Where there is a union, surveillance practices can be brought into consultation procedures. Where there is none, workers can ask staff representatives to seek a written monitoring policy and a copy of the Data Protection Impact Assessment. Regulation is catching up. From August this year, the EU AI Act treats workplace performance monitoring as a form of "high-risk" AI, requiring employers to run risk assessments, test for bias, maintain human oversight, and notify workers before deploying such systems.


From RTÉ Radio 1's Morning Ireland, AI adoption among Irish firms likely to lead to job losses - ESRI

Workers will have the right to request an explanation of any decision a high-risk system contributes to. Penalties reach 3% of global annual turnover. A tool of the kind Meta is rolling out in the US, if deployed against workers in Ireland, would sit squarely inside that high-risk category. Whether any of this gets enforced depends on workers being willing to test it, even though the cost of testing falls almost entirely on them.

But the most resilient forms of pushback are also the least formal. Research on platform workers, who have lived under algorithmic monitoring for far longer than office workers, shows that resistance survives even when unions are blocked and formal channels are closed. It moves into the spaces the employer cannot see, into quiet networks where workers share what they have figured out, and into the moments when one of them decides to make the conditions visible to the outside world. The Meta story is a rare glimpse of that resistance breaking the surface; most of it never makes the news.

The more lasting change is not the surveillance, which has been with us for over a century, but what the watching is now for. Office workers in 2026 are producing the training data for the systems that will, on the explicit logic of their employers, eventually do the work without them. That is a different relationship between worker and workplace than the one most people thought they signed up for.



The views expressed here are those of the author and do not represent or reflect the views of RTÉ