
Why do 95% of AI initiatives fail?

AI failure often conjures images of organisations putting all their eggs in one basket, but failure can take many forms. Photo: Getty Images

Analysis: Many organisations dip their toes into AI by trialling new tools without capitalising on the benefits or transferable lessons

By Andrew Brosnan, UCC and Prakriti Dasgupta, Maynooth University

Generative artificial intelligence (GenAI) has been one of the defining technology trends of the 2020s, reshaping both the personal and professional arenas of life. With 88% of organisations implementing AI, and an anticipated $3 trillion to be spent globally on AI initiatives by 2029, many organisations have expressed a significant fear of missing out on AI capabilities. Some, more than likely, are succumbing to the bandwagon effect, adopting AI in response to industry hype and competitive pressures rather than out of genuine operational necessity.

But it may be surprising to learn that the technology has yet to deliver the value organisations expected. A recent MIT study found that 95% of AI initiatives failed, producing no measurable impact on the organisation after deployment. So why is this the case?

While the factors that influence success or failure can vary from organisation to organisation, our research suggests that many are caught in the 'perpetual pilot' trap, in which organisations dip their toes into AI by trialling new tools or use cases without capitalising on the benefits or transferable lessons.


From RTÉ Radio 1's The Business, is the AI bubble about to pop? With economist Ann Pettifor

As a result, organisations often invest considerable time, money, and initial enthusiasm into developing small‑scale AI projects that have little chance of progressing into the organisation‑wide transformational benefits they expect.

The 'perpetual pilot' trap

AI failure often conjures images of organisations putting all their eggs in the AI basket by laying off their workforce, with calamitous results. But AI failure can take many forms, even among traditionally cautious organisations.

Take the case of a small Irish public sector organisation that began its first AI project in mid-2024. A team responsible for reviewing legislation believed AI could dramatically speed up their work and improve accuracy. Leadership shared the team's curiosity, but also their apprehension about taking on a first AI initiative. Keen to minimise risk, leadership allocated a modest budget of €150,000 and assigned four full-time staff to partner with an external IT services firm.

The goal was to deliver a tightly scoped proof‑of‑concept using a small dataset of relevant legislation. Twelve weeks later, the prototype went live. Initial tests showed that when used correctly, the tool provided generally accurate outputs and delivered marginal efficiency improvements for the team. Encouraged but still cautious, leadership opted not to scale the tool further. Instead, they asked the team to continue testing and promised to "revisit" the project's broader potential at a later stage.


From RTÉ One's Brendan O'Connor, Elaine Burke from the For Tech's Sake podcast on how AI will infiltrate all spheres of life in 2026.

Meanwhile, a separate unit within the organisation spotted an opportunity of its own. They proposed building an AI-powered chatbot to help field routine citizen queries through the organisation's website. Seeing what they interpreted as the "success" of the first pilot, leadership approved another €150,000 for a second AI proof-of-concept. Again, the development cycle ran for twelve weeks. And again, the pilot delivered a limited but generally positive outcome.

The chatbot was welcomed by contact‑centre staff, though at this early stage it could answer only basic, pre-scripted questions. Leadership, reassured by the smooth delivery, signalled that further enhancements could be explored in the future. Individually, both pilots appeared to represent a sensible, low‑risk experimental approach. Collectively, however, the picture told a different story.

The budget and time committed were now equivalent to what many organisations would spend on a more ambitious AI solution from the outset

Over the course of six months, the organisation had invested €300,000 and the full‑time effort of several employees into building two standalone AI tools, each offering benefits to only a narrow slice of their workforce. The budget and time committed were now equivalent to what many organisations would spend on a more ambitious, enterprise‑wide AI solution from the outset.

What was framed as a risk‑averse strategy ended up producing its own hidden risks: duplication of effort, fragmented tools, and costs that leadership had not anticipated. By running small pilots in isolation, the organisation missed out on the chance to develop a cohesive AI strategy that could scale, integrate across departments, and deliver broader organisational transformation.

The AI maturity journey

Graph showing AI maturity from 1 to 4

To fully understand this 'perpetual pilot' trap, we must first understand the multi-phased journey that organisations take as they integrate AI within their structure and operations:

Phase 1: Employees test out using free GenAI tools like ChatGPT or Grok to help them with tasks. This is usually done informally and not always known to leadership.

Phase 2: The organisation spots an area of the business which may benefit from AI. They quickly undertake an AI pilot, generally with a small team with a very narrow goal. Most organisations never leave this phase.

Phase 3: The AI pilot is scaled. It is made available to more employees and supports more processes. Employees are also trained to use AI responsibly and supported in adopting it.

Phase 4: The organisation is transformed, using AI to drive customer interaction and employee tasks. It continues to develop AI capabilities and adopts them where appropriate. While organisations should ideally strive to reach phases 3 or 4, the reality for the majority is being stuck in phase 2.


From RTÉ News, as 2025 comes to a close, Behind the Story takes a look back at some of the biggest stories around AI

Good practice

So how do you avoid getting stuck in costly pilots and what should organisations do before joining the AI race?

  • Optimise first

Failed AI deployments rarely happen because of model quality, but because of bottlenecks in processes, fragmented data and outdated systems. Focus organisational investment on addressing these weaknesses, as AI does not offer a 'silver bullet'.

This does not mean distracting your organisation from tapping into the gains AI has to offer, but rather fixing the foundational layers that allow AI to work best.

  • Know when to end the pilot

Pilots are valuable in showing what works well and what could be improved when introducing AI, but the MIT study shows this value is only realised when the lessons are applied in real scenarios. Organisations should define beforehand when to end a pilot and start scaling, rather than keeping several small projects running perpetually at a cost in time and manpower.

  • Train your people, not just the model

Organisations that overinvest in AI tools while underinvesting in training, process redesign, and change management end up with low adoption and growing frustration when their enterprise programmes stall.

Spreading investment into improving human capability ensures that even when specific AI tools fail, the organisation itself becomes more flexible, resilient, and capable of absorbing future technologies.

  • Try not to place "all your eggs in one basket" or lose sight of the fundamentals

Another important thing to bear in mind is that, in most industries, customer behaviour, regulation and competitive dynamics have not yet fundamentally changed. Organisations that divert too much attention and funding away from product quality, service reliability, compliance and trust therefore risk undermining their core business. AI can enhance these fundamentals, but it cannot replace them.

When hype cycles cool and organisational budgets tighten, it is these core strengths that protect organisational revenue and brand reputation.

Follow RTÉ Brainstorm on WhatsApp and Instagram for more stories and updates


Andrew Brosnan is a PhD student in technology risk in the College of Business and Law at UCC. Dr Prakriti Dasgupta is an Assistant Professor of Human Resource Management in the School of Business at Maynooth University


The views expressed here are those of the authors and do not represent or reflect the views of RTÉ