Why most AI products fail before the first user interaction

Most AI features fail because they start with hype, not humans.

Blurred office scene with a humanoid robot seated across from a man at a desk. Overlaid headline reads, “Why Most AI Products Fail Before the First User Interaction.”
Image Credit: AI Generated Image

Most AI products fail before the first user interaction because they don’t solve a real user problem. That may sound dramatic, but I keep hearing the same sentence in executive rooms: “We need an AI feature. Our competitor just launched one.” And just like that, features are built out of fear of being left behind rather than from a clear understanding of what users actually need.

And this isn’t just my opinion. Recently, I came across a report titled The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed by RAND.

The researchers interviewed 65 experienced data scientists and engineers who have spent years building AI and machine learning systems. Their goal was simple: understand why so many AI initiatives fail and what differentiates the few that succeed.

One of their core conclusions is striking. The most successful AI projects focus relentlessly on the problem they are meant to solve, not on the technology used to solve it. That sentence could have come straight out of a UX strategy deck.

Designers have been trained for decades to understand users and their problems before building solutions. What if AI projects are struggling not because of the models, but because design thinking was missing from the room?

Building AI products for tech’s sake

I see many companies building and releasing AI products because they feel they have to (often because their competitor just launched a tool with “AI” in the name), rather than because they’ve identified a real user need.

This appears to be a major trend, so it doesn’t surprise me to read statistics that some may find shocking: Gartner predicts that more than 40% of agentic AI projects will be canceled by the end of 2027. The cited reasons for failure are escalating costs, unclear business value, and inadequate risk controls. Their own analyst called it out directly: most of these projects are early-stage experiments driven by hype and often misapplied.

Gartner also estimates that only about 130 of the thousands of agentic AI vendors out there offer genuine capabilities. The rest are engaging in what they call “agent washing,” just rebranding chatbots and RPA tools as something they’re not.

When everything is AI-powered, nothing is AI-powered. The label becomes meaningless. Companies end up with a product that technically “has AI” but solves no real problem for anyone.

Blurred office background with bold text stating, “40% of agentic AI products will be canceled by the end of 2027.” Source noted as Gartner 2025.

The “not having the right people” problem

The data scientists and engineers interviewed for the RAND report cited earlier say it’s challenging to find the right talent for AI projects.

Hiring skilled specialists is an issue, but so is how organizations value the talent they already have. Data engineers, the people doing the hard work of cleaning and structuring data so models can learn from it, are often treated like second-class citizens. One interviewee literally called them “the plumbers of data science.”

So they leave. And when they leave, they take all their institutional knowledge with them. No one knows which datasets are reliable anymore; projects stall, and leadership loses interest.

What about designers? Most AI teams don’t even have one. Or they bring one in at the tail end, when the model is already built, and someone realizes the interface is unusable. Designers and developers depend on each other more than ever and should be at the table from day one when building any product, especially those with AI features.

To support this view, consider a 2025 field experiment at Harvard Business School, which found that AI-augmented cross-functional teams were three times more likely to generate high-performing ideas than individuals working alone. The designer-developer divide is precisely the kind of silo that the experiment suggests we should break down. And so is the data engineer vs. data scientist one. Different roles, same problem: the people getting sidelined are often the ones who could have “saved” the project, or, at least, brought the most clarity to it.

What we can do as design leaders

The RAND report focuses on AI/ML model development rather than product design. But the failures it describes (miscommunication, unclear goals, and users as an afterthought) are the same ones killing AI products today. There’s a golden opportunity for design leaders to step into that gap.

Here’s how we should move forward:

Be THE communicators

Industry stakeholders often misunderstand or miscommunicate what problem needs to be solved using AI. Design leaders have a unique opportunity to be the missing link that bridges communication between key stakeholders and software engineers, data scientists, and other AI technologists. We’ve been working closely with all of them for years while building products, so we know best how to communicate the right specs. We speak the same language.

Follow the one-year rule

RAND’s report says that before beginning any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year. If an AI project is not worth such a long-term commitment, it most likely is not worth committing to at all; a project on an overly accelerated timeline is likely to fail without ever achieving its intended goal. Designers are essential for creating products people want to use, so this rule extends to our teams. We need that time for research, testing, iterating, and validating. You can’t design a meaningful AI experience in a two-week sprint.

Put the problem statement on the wall, not the tech stack

The AI products that succeed are those focused on the problem they are meant to solve. As design leaders, we should be the ones writing the problem statement that guides the entire project. What I mean by this is a clear, human-centered problem statement that everyone, from the CTO to the junior data engineer, can rally around. Something like: “Support agents spend twenty minutes per ticket digging through old cases; help them find the right answer in under five.” A made-up example, but notice it mentions no models and no tech, only a user and a problem.

Own the workflow mapping

One of the most common failure patterns mentioned in the RAND report is building an AI model that doesn’t fit into the business workflow. It works in isolation but breaks when it meets reality. That’s design territory, because we are trained to map user journeys, service blueprints, and task flows. We see where an AI feature fits into someone’s day, and where it doesn’t. If we’re mapping these workflows before the engineering team starts building, we save the entire project from that painful moment when a technically sound model turns out to be operationally useless.

Ask, “Do we need AI for this?”

Not every problem requires AI. As design leaders, we should have the confidence to ask the uncomfortable questions. Sometimes the best design decision is the simplest one, and saying “this doesn’t need AI, it needs a better interface” might be the most valuable contribution you make to the entire project.

A group of designers and developers gathered around a whiteboard labeled “Problem Statement.” One man presents while pointing to sticky notes and diagrams, as others sit at computer workstations listening and collaborating.
Image Credit: AI Generated Image

The missing piece to AI product success

The missing piece is designers and engineers focused on solving real user problems. That is how we build AI products that genuinely improve people’s lives.

The gap between what companies say they want AI to achieve and what users actually need is a design leadership issue. To achieve real success, we must reframe the conversation from shiny new features to solving real user problems.

Arin Bhowmick (@arinbhowmick) is Chief Design Officer at SAP, based in San Francisco, California. The above article is personal and does not necessarily represent SAP’s positions, strategies or opinions.

