From design to direction: Bridging product design and AI thinking
The shift in product design with the advent of AI and a potential generative experiential future
This short essay explores how core concepts in AI can reframe how product designers think about feedback, intent, and the future of our role.
A lot has been written about the evolution of user experience since before I ever sat in a Barnes & Noble for hours, trying to understand what the letters “H, C, and I” even meant. In the twelve years since that moment, the tools we use have matured, the rules for interaction have solidified, and the role of design has expanded. We have become a bridge connecting users, businesses, and the technologies that serve them.
Now, with artificial intelligence entering the public mainstream, a new question emerges: where do we go from here?

I am not here to push anyone toward vibe coding, Figma Make, or any particular path. Instead, I want to share an idea that came to me while learning the basics of how large language models work. It is a way to start connecting core concepts across domains and help product designers bridge toward the next era of technology. As we move beyond conversational interfaces, we will need new ways of thinking about how to interact with this emerging intelligence.

At the top, I want to recognize that, for the purposes of this article, the term product design can mean many things. Even Figma’s own definition lists responsibilities that extend far beyond “pushing pixels.” Product designers excel when they have a deep understanding of how technology works and how the business measures success. Whether supporting key metrics or shaping strategy, our work brings together different domains.
To do that well, we already think in systems, frameworks, research, and telemetry to influence metrics and drive better outcomes. If we take that same mindset and apply it to concepts from AI such as training corpus, loss, gradients, and intent, we can start to see a future where design is not just about arranging interfaces but about understanding how systems learn. That is the bridge I want to explore.
What AI teaches us about feedback and learning
In almost every conversation about technology today, one word keeps surfacing: data. It shapes the experiences we design, the decisions businesses make, and the intelligence that powers modern systems. For this section, I want to focus on how data in AI can teach us something about our own design process, specifically how we learn and iterate.

In user experience, we often work with two types of data: qualitative, which describes what people say, and quantitative, which describes what they do or how often they do it. Let’s focus on the quantitative side for a moment. Imagine plotting the results of a usability study on simple x and y axes, showing effort versus success rate. Now imagine doing something similar with the data an AI model learns from, what’s called its training corpus (FastCompany’s article “What is a ‘corpus’”).

Of course, an LLM’s data isn’t really this simple, but as a metaphor it helps. Once both sets of data are visualized, we can imagine drawing a line that represents our average user response on one chart and the AI’s predicted understanding on the other. The distance between those lines represents something that exists in both worlds: the gap between what we expect and what we observe.

In design, we call that friction. In AI, it’s called loss. Both terms describe the cost of being wrong. When we iterate on a flow or redesign a feature to remove friction, we are, in a sense, minimizing loss. Each round of testing gives us more data to adjust our model of the user’s mental model, just as machine learning systems adjust their internal weights to better predict outcomes (IBM: What is Loss Function).
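To make the parallel concrete, here is a toy sketch (my own illustration, not something from the article or IBM's material) of loss as the average gap between what we predicted and what we observed. Mean squared error is one common loss function; the success-rate numbers below are invented for the example:

```python
# Toy illustration: "loss" as the average gap between what a model
# (or a design team) predicts and what is actually observed.
# Mean squared error is one common loss function in machine learning.

def mean_squared_error(predicted, observed):
    """Average of squared differences between predictions and reality."""
    assert len(predicted) == len(observed)
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

# Hypothetical numbers: predicted vs. measured task success rates
# across four usability test sessions.
predicted_success = [0.90, 0.85, 0.80, 0.95]
observed_success = [0.70, 0.80, 0.60, 0.90]

loss = mean_squared_error(predicted_success, observed_success)
print(loss)  # the smaller this number, the closer our model is to reality
```

Whether the "prediction" comes from a neural network or a design team's assumptions, the mechanics are the same: measure the gap, then work to shrink it.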

Consider an onboarding flow where users repeatedly abandon a form at the “Company Name” field. Once identified, we might re-label the field or make it optional, reducing friction and improving completion rates. That adjustment mirrors how an AI system corrects its parameters after identifying an error.

This is where the concept of gradients comes in. In machine learning, a gradient measures the direction and magnitude of change needed to reduce loss. Think of it like rolling a ball down a landscape toward the lowest point (IBM: What is Gradient Descent?). The slope tells the model how to adjust its parameters to improve performance. In product design, we do something similar every time we interpret usability data or customer feedback to decide what to change next. Our gradient is directional insight, the sense of where to move to make the experience smoother.

The iterative process that defines good design is, in many ways, a form of gradient descent. We identify where users struggle, adjust our assumptions, and measure again. Over time, our understanding converges toward what users need. It is not perfect, but neither are machine learning models; both are approximations refined through feedback.
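The loop described above can be sketched in a few lines. This is a minimal, assumption-laden example of gradient descent on a simple quadratic loss (again my own toy, not a real model): start with a guess, follow the slope, and converge toward the minimum.

```python
# Minimal gradient descent sketch: find the value of x that minimizes
# loss(x) = (x - 3)^2, whose true minimum sits at x = 3.
# The gradient tells us the direction and size of the next adjustment,
# much like usability findings tell a team what to change next.

def loss(x):
    return (x - 3) ** 2

def gradient(x):
    # Derivative of (x - 3)^2 with respect to x.
    return 2 * (x - 3)

x = 0.0                 # starting guess (an untested design assumption)
learning_rate = 0.1     # how boldly we act on each round of feedback

for step in range(50):  # each step is one "iterate and measure" cycle
    x -= learning_rate * gradient(x)

print(round(x, 3))      # converges toward 3, the point of minimum loss
```

Swapping the quadratic for a real model's loss surface changes the math, not the loop: measure the gap, follow the slope, repeat. Design iteration follows the same rhythm, just with research sessions in place of derivatives.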
If there’s one takeaway from this parallel, it’s that feedback, whether from people or from data, is not an afterthought. It’s the learning mechanism that keeps systems aligned with reality. And just as models can overfit to their training data, teams can overfit to internal assumptions or stakeholder preferences. In both cases, the cure is the same: structured, ongoing feedback loops that balance exploration with correction.
Design as a system of optimization
Now that we understand the shared language of feedback and learning between design and AI, it helps to zoom out and look at how these loops exist across every layer of our work. Most senior product designers already know that the interfaces we craft are only one piece of a much larger system. As noted in Nielsen Norman’s levels of UX maturity, mature groups and practitioners move beyond pixels to build frameworks, strategic models, and alignment structures. Even at these higher levels, tight loops of feedback and iteration guide what we do.
If learning and iteration exist at every level, our role is to understand how those levels connect. Each one has its own kind of loss to minimize and its own signals to optimize. We can roughly think of three layers at play:
- Designers optimizing interfaces: reducing friction and improving micro-interactions (micro Jobs to Be Done).
- Interfaces optimizing user journeys: helping people complete the key Jobs to Be Done (JTBD) that deliver product value.
- Organizations optimizing outcomes: aligning those journeys to business metrics such as adoption, retention, and revenue.

At the first level, designers focus on immediate signals like click rates or error events to reduce friction and guide users toward small but meaningful completions. At the next level, we expand our view to higher-order goals, such as “users complete their profile within thirty days.” These are composite outcomes that indicate whether our product fulfills its promises. Finally, at the organizational level, we zoom out again to see how entire journeys and feature sets contribute to larger KPIs and OKRs. Each level informs the others, forming a continuous loop of optimization from micro-interaction to business strategy.

As product designers, our work is shifting away from static interfaces and toward orchestrating these systems of intent, which I still mentally file in the same place as a Job to Be Done. With the looming advent of more generative experiences, our influence lies less in deciding where each pixel goes and more in defining the direction a system should learn toward. Our task becomes identifying the goals for each layer, ensuring the feedback flowing through them is meaningful, and guiding the system to reduce its loss at every scale.

Doing that requires data, and this is where many organizations struggle. In my experience at both small and large enterprise companies, few have mature instrumentation strategies that connect in-product signals to their KPIs or OKRs. The result is a blind spot: teams can’t see how user intent translates to business outcomes, especially in environments increasingly shaped by AI. When instrumentation is thoughtfully designed, those signals become the gradients we follow. They allow us to see where experiences fall short, measure how far off the mark they are, and iteratively align the product, the user, and the business.
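One way to picture what "thoughtful instrumentation" might look like: tag each in-product event with the journey and business metric it supports, so signals can roll up from micro-interaction to KPI. Every name in this sketch (the event, journey, and KPI labels, and the `Signal` type itself) is a hypothetical of mine, not a real schema:

```python
# Hypothetical sketch: tagging an in-product event with the journey and
# business outcome it supports, so signals roll up from micro-interaction
# to KPI. All names here are illustrative assumptions, not a real schema.

from dataclasses import dataclass, field
import time

@dataclass
class Signal:
    event: str      # micro-interaction level, e.g. a field completion
    journey: str    # the Job to Be Done this event belongs to
    kpi: str        # the business outcome the journey supports
    timestamp: float = field(default_factory=time.time)

def emit(signal, sink):
    """Append the signal to a sink (a list here; an analytics pipeline in practice)."""
    sink.append(signal)

events = []
emit(Signal(event="company_name_completed",
            journey="onboarding",
            kpi="activation_rate"), events)
print(events[0].kpi)
```

The point is not the code but the linkage: when every event carries its journey and KPI, the blind spot between user intent and business outcome starts to close.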
This is directionally similar to other schools of thought, such as Claudia Canales’ Beyond the Model: a systems approach to AI product design, which positions design as a system. I see that system as one of optimization, not one of creating perfect screens. It’s about cultivating feedback systems that learn alongside users. When product designers learn to interpret intent, orchestrate signals, and balance the goals of each layer, we build the foundation for truly adaptive, AI-infused experiences.
From control to guidance
As product designers, many of us are used to controlling every element of the experience. But as generative systems begin composing parts of the interface for us, our value shifts from crafting the final form to defining the conditions that shape it. We move from being the maker to being the guide — designing how intent, data, and feedback interact to keep the system aligned with human goals.
Imagine opening your design tool and seeing a generative prototype already built around inferred intent. You don’t redraw components; you evaluate signals: Did the model understand the job correctly? Did the feedback loop close? That shift, from crafting to curating, is one direction our craft might be headed. Our influence doesn’t disappear; it evolves.
When we see design as a system of optimization, we stop treating AI as a separate discipline, or the latest tool to use, and start seeing it as a mirror. It reflects how we already work: learning, adjusting, and seeking balance between human intent and technical possibility. The next evolution of design isn’t about replacing what we do, but about scaling how we learn.
In the next article of this series, I’ll explore how instrumentation and Jobs to Be Done can serve as the connective tissue between intent and measurement — the practical levers we can use to steer generative systems toward meaningful outcomes.
From design to direction: Bridging product design and AI thinking was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.

