Between 70% and 85% of GenAI deployment efforts fail to meet their desired ROI

Are your own employees sabotaging these efforts?

If you subscribe to a business journal or have perused LinkedIn in the last year, you've likely seen articles citing these staggering statistics: somewhere between 70% and 85% of current AI initiatives fail to meet their expected outcomes. In 2019, MIT reported that 70% of AI efforts saw little to no impact after deployment. Since then, the figure has only been expected to climb, with some predicting that as many as 85% of AI projects will miss expectations, a rate far higher than the 25-50% failure rate of typical IT projects. We know that, done correctly, deploying AI delivers an impressive return in productivity and usefulness, providing a significant source of value to organizations. So where is the disconnect?

One AI expert with whom I spoke informed me that retail off-the-shelf AI programs tend to have lower adoption rates and efficiency gains than custom-built enterprise AI tools. Additional reasons for AI project failure are poor data hygiene and governance, lack of proper AI operations, inappropriate internal infrastructure, and failure to choose the right product or proof of concept.

But as a human capital strategist and researcher of AI adoption, I find myself wondering why more people aren't talking about… well, the people. After all, adoption by definition means getting people to work in a different way. So why aren't more specialists talking about this obvious missing link to higher adoption of retail Generative AI tools?
In this article, we dive into some of the human reasons your organization may not be seeing the outcomes you expect from Generative AI in the workplace.
Here are the six most common people-based reasons Generative AI initiatives fail to gain adoption.

1. People don't quite trust AI

Employees often struggle to trust AI in the workplace due to concerns about reliability, transparency, and fairness. Many are wary of AI systems because these technologies can sometimes produce unpredictable or biased outcomes, leading to a lack of confidence in their decision-making processes. In 2021, a study found that 37% of respondents said they were more concerned than excited about AI. That number jumped to 52% in 2023. (The percentage of those excited about AI declined from 18% in 2021 to just 10% in 2023.)

In adoption terms, trust matters because if employees don't trust a concept or a tool, not only will they fail to embrace it, they'll actively work against it. Trust in workplace AI is multifaceted: employees need to trust their organization to do the right thing, trust the teammates who teach and govern the AI models to act appropriately, and trust the output of the models (and that it will not drift over time).

Well-established AI ethics and governance are crucial for fostering trust in AI within an organization because they provide a structured framework for ensuring that AI systems are designed, implemented, and managed in a fair, transparent, and accountable manner. Clear ethical guidelines and governance policies address key concerns such as bias, privacy, and data security—reassuring employees that AI decisions are made with integrity and respect for individual rights.

2. People fear the unknown future with AI

When HBO's "Westworld" debuted in 2016, it depicted a futuristic world where artificial intelligence blurred the lines between humans and technology. More recently, Arnold Schwarzenegger was quoted as saying that "The Terminator" is no longer a fantasy given the current state of artificial intelligence. To many sci-fi fans, these stories feel less like the fiction they were always meant to be… and more like a prophecy.

Sentience, free will, and ethical predicaments are now front and center in real-world AI discussions. From breakthroughs in machine learning to intense debates over AI rights, "Westworld" feels like a sneak peek into a future that is closer to the present than ever. To many employees, this possibility feels real enough that they avoid anything labeled AI, hoping to slow what feels like runaway progress.

Digital literacy training focused on what AI can and can't do can help mitigate fear of the unknown and replace it with trust in how your organization will manage AI.

3. They're worried for their jobs

Many employees worry that AI technologies could render their skills obsolete. An Aberdeen study found that 70% of Boomers, 63% of Generation X, and 57% of Millennials and Generation Z agree that "AI will put jobs at risk." This anxiety creates resistance to AI integration, as workers prioritize job security over the potential benefits of technological advancement.

This fear is often compounded by a lack of adequate support systems and retraining programs—tools an organization can deploy to help alleviate this fear. When businesses invest in AI, they may not concurrently invest in reskilling initiatives for their employees. Without clear pathways for career transition or skill enhancement, workers may view AI as a threat rather than an opportunity. The absence of such support can lead to a reluctance to embrace new technologies, as employees are uncertain about their future in an AI-driven workplace.

Finally, the broader economic implications of AI adoption also play a role. The transition to AI-driven processes can lead to increased economic disparities if it disproportionately benefits organizations and individuals who are already well-positioned to capitalize on these technologies. This disparity can exacerbate feelings of insecurity among workers who fear being left behind in an evolving job market.

4. They're just tired of change

The current pace of change in the workplace is unprecedented, driven by rapid advancements in technology, evolving business models, and shifting market demands. In 2022, the average employee experienced 10 planned enterprise changes, up from just two in 2016, according to Gartner research. Combine this with IBM research showing that AI adoption continues to quicken: more than half (53%) of IT professionals say they have accelerated their rollout of AI over the last 24 months. This accelerated pace of change places considerable stress on employees, leading to high rates of change fatigue. A recent Forbes article reported that 45% of workers felt burned out by frequent organizational changes.

Change fatigue is further exacerbated by the frequency and scale of organizational shifts. Prosci's 12th Edition Best Practices in Change Management research finds that 75% of organizations report they are either nearing, at, or past the change saturation point. This relentless pace contributes to diminished morale, reduced productivity, and increased turnover. As employees grapple with adapting to new tools and processes, the cumulative impact of continuous change can erode their overall job satisfaction and engagement, underscoring the need for more strategic and supportive approaches to managing organizational transformations.

5. You've approached the Uncanny Valley

The "Uncanny Valley" is a psychological concept in AI development that describes the discomfort or eeriness people experience when encountering digital entities that appear almost human but fall short of achieving complete lifelike realism. This phenomenon occurs when the appearance or behavior of an AI is close to human-like but still noticeably artificial, creating a sense of unease or repulsion. This aversion arises because the near-human likeness triggers a cognitive dissonance, where the almost-human features evoke expectations of authenticity that are not fully met, resulting in a disquieting experience for viewers. The Uncanny Valley effect can undermine user trust and acceptance of AI, making it a significant challenge in the design of humanoid robots and virtual avatars.

6. Your organization has a high population of employees in older generations

It's true that there's a positive correlation between age and lack of desire to adopt AI—but it's not necessarily the fault of AI. In fact, as people age, they often lose motivation to learn new things altogether. "As we age, it's harder to have a get-up-and-go attitude toward things," says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research. "This get-up-and-go, or engagement, is important for our social well-being and for learning — it's tough to learn if you aren't attending and engaged."

Because such a large swath of our workforce is made up of Generation X and Baby Boomers, those rolling out AI must find a way to get these employees interested enough to want to learn more. That requires pairing a focus on the benefits of AI with digital literacy training delivered in multiple modalities, meeting learners where they are with their preferred method of learning.

If your organization has experienced any of these hurdles to AI adoption (or you're not sure where to start), NTT DATA can help. Our People & Organization Consulting team is well-versed in overcoming even the hardest adoption challenges, and our Data Analytics and AI practice can build your organization a solution tailored to your exact needs. Whatever your need, NTT DATA helps you build a holistic view of organization-specific engagement opportunities and identify gaps using data-led techniques, leading to increased adoption and a continuous-improvement mindset. Learn more about our Global Delivery Assurance capabilities and explore how we can assist you in achieving your goals. Reach out today to start a conversation.

Christen Bell

Senior Manager

Christen is a Senior Manager in our People & Organization consulting practice and has a passion for the humanization of automation. For the past decade, she has researched the adoption of intelligent systems and frequently shares her insights through public speaking engagements and guest lectures at universities.