Design careers in the Age of AI: specialize or generalize?
How LLMs are reopening this question — and why it could turn into a professional identity crisis.
Text written manually without AI. Images made with Midjourney.
Maybe this text will become outdated as quickly as everything else that has changed in our field over the last few years. But even at that risk, I feel the need to write about a feeling I’ve seen spreading among designers who work with digital products: a crisis of professional identity.
The popularization of Artificial Intelligence through LLMs accessible to audiences worldwide is no longer news. Even though it started with large models focused on text, in a short time we saw exponential growth in countless tools that create a bit of everything.
When this became increasingly evident in our field, I noticed a renewed belief that the generalist designer was making a comeback in the digital product space. NN Group even published an article describing the return of the UX Generalist as a possible, and even desirable, role now that AI can deliver so many different parts of our process.
The Return of the UX Generalist
At first, I interpreted that as positive — even while aware of the ethical and social implications of AI (which won’t be the subject of this text). However, looking more closely at the current landscape, what I’ve been noticing is that:
the return to generality, with the help of the LLMs available today, doesn’t necessarily have a positive effect; in the long run, it can generate superficiality and reduce innovation.
Here, I’ll explore this feeling further, especially for those who have been anxious about how AI may affect their professional and personal identity in today’s world.
This text is not against the use of AI: it’s simply an article with reflections on this current moment of interaction between designers and LLMs.
Hyper-specialization as a pain
In my master’s, one of the authors who most helped me think about the way I liked to work was Edgar Morin, a thinker who describes how modern knowledge (especially since the scientific revolution) has been guided by simplification through reductionist thinking. This reductionism pushes the individual from a broad basic education into higher education restricted to a single discipline, and then into progressively narrower specializations, until they’re expected to dedicate themselves to a single topic for the rest of their life.
I’ve always been averse to that kind of training, and I felt firsthand the effects of not following the expected path. Only after many years did I realize how positive that choice was, especially once I saw in practice the disjunction that Edgar Morin criticizes so much in twentieth-century education.
In the almost three decades of the twenty-first century so far, the field of digital product design has specialized so much that it ended up opening internal fissures. I tend to see at least three major axes along which people specialize and navigate:
- a communicational axis, linked to branding, language, and art direction, where the artistic side of the interface matters a lot;
- an empirical-scientific axis, linked to research, methods, data, validation, and decision-making, sometimes captured by ROI discourse, but not reducible to it; and
- an exact and technical axis, tied to software engineering, where part of design remained close to front-end, components, systems, and implementation, as was already seen more strongly around 2010.
Of course, this happened because of the sheer complexity of each of these major areas, of which digital products are only the slice concerned with communication between humans and machines. But even if there’s a justification, what often results is a production line of closed niches that barely talk to each other.
Hyper-specialization leads to less communication, since each part understands only its own area and its own topic in depth, but can’t comprehend the issue that lives next door.
And with that, information passes from area to area until it reaches the professionals at the end of the process: developers. I bet many of you have witnessed this phenomenon, which creates noise in the form of impediments and disagreements.
That’s why Morin talks so much about an education of the future in which complexity is embraced — in teaching that doesn’t funnel down but opens horizons — while maintaining depth of study. However, even if the Complexity Paradigm exists in the educational field (and I believe in it and defend it whenever I can), it’s important to be cautious!
Professionals who can embrace many different areas have always been desired by many sectors of the market. That’s why the idea (however often imaginary) of the “unicorn designer” has always been so promising: a caricature of the do-it-all professional who, alone, produces a highly artistic and interactive interface, grounds decisions scientifically and empirically, and can still make the solution tangible in front-end code.
The catch in this long-desired idea is that it isn’t humanly possible for a single professional to morph into roles that demand such different cognitive training, especially if you want a product that is truly good in each aspect.
But this collective imaginary never stopped existing, and it has now been reborn with the emergence and popularization of Artificial Intelligence. By delegating activities to the machine, even without specialized knowledge in a given area, it would supposedly be possible to bring about this professional who navigates every corner and delivers well in any activity.
It’s at this specific point that a promise is born — but also countless risks that are not yet fully predictable.
Generalization as a risk
I’ve been reflecting a lot on this issue. And more and more, I feel the problem isn’t the promise of the return of the generalist in itself: the problem is the way this may emerge within the scenario we have.
Look: unlike a reformed education in which people don’t isolate themselves into niches, what can happen is:
a professional who keeps the same aptitudes as before, but starts delegating to the machine responsibilities they don’t really know how to develop.
As the NN/g article frames it, the professional can get an AI boost; that is, they manage to produce outputs in areas they hadn’t mastered before. The output gets produced, but the professional’s cognition may remain exactly the same as before.
Okay, but what’s the problem with that?
The issue here is that we’re operating primarily with Large Language Models, created by companies that operate at a global level. These models learn human language (whether textual or visual) from enormous amounts of data in order to generate, through statistics and probability, responses that seem coherent to us humans. In more technical terms, they function as token “prediction” machines: given an input text (the prompt), the model estimates the most likely next token and uses it to build the output. I know each model family works somewhat differently (image and video models included), but in general they all operate on statistics learned during training by the company that created them.
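To make that “prediction machine” idea concrete, here is a deliberately tiny sketch in Python. The vocabulary and probabilities are invented purely for illustration; a real LLM estimates this kind of distribution over a huge vocabulary with billions of learned parameters, conditioning on the whole prompt rather than a single previous token. But the generation loop is the same in spirit: given the text so far, sample a plausible next token and repeat.

```python
import random

# Toy "prediction machine": for each token, an invented probability
# distribution over possible next tokens. The numbers below exist only
# for illustration; a real LLM learns them from enormous amounts of data.
NEXT_TOKEN_PROBS = {
    "the":    {"design": 0.5, "user": 0.3, "model": 0.2},
    "design": {"system": 0.6, "process": 0.4},
    "user":   {"journey": 0.7, "research": 0.3},
    "model":  {"predicts": 1.0},
}

def next_token(prev: str):
    """Sample a statistically plausible continuation of `prev`."""
    dist = NEXT_TOKEN_PROBS.get(prev)
    if not dist:
        return None  # no learned continuation for this context
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# Generate text by repeatedly predicting the next token.
text = ["the"]
while (tok := next_token(text[-1])) is not None:
    text.append(tok)
print(" ".join(text))  # e.g. "the design system"
```

Notice that nothing in this loop knows whether the output is true or useful; it only knows which continuation is statistically likely. That limitation is exactly what the next paragraphs are about.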
That’s why AI today is not an intelligence in the human sense of the word: it’s a highly complex machine that can learn countless patterns across every area for which data is digitally available. It’s also why it doesn’t (at least not yet) have what some have been calling neuro-symbolic reasoning.
A didactic example given by Massimo Attoresi concerns reasoning in medicine: if a patient shows up with a rare symptom never reported in previous studies, a human can mobilize medical knowledge to work out how to unravel it. An LLM can’t do that: it can hallucinate, producing plausible answers or images, because its training prioritizes generating believable responses even in the absence of sufficient knowledge to solve certain problems.
For most users, receiving an “I don’t know” is more frustrating than receiving a fabricated answer, and these platforms have made the questionable UX decision to optimize accordingly. So when a designer with one specialty starts producing in another specialty with LLMs, they develop that work only with the support of a machine that returns the probabilistic result most desired by the global average. And, even more importantly, that person will come to understand their work as the output itself!
On the contrary: a professional identity is not defined only by what is delivered at the end, but by the cognitive and symbolic construction that allows one to navigate complex processes, including those still unknown to the models or to humanity itself.
Having said all that, I want to open a parenthesis: yes, it’s possible to do a lot of design with LLMs today, across every stage of our work, including tasks we couldn’t do without AI because of gaps in our training. Still, we need to understand that when we train an agent to make the machine operate like a specialist in some part of the UX/UI process, we’re always generating probabilistic answers without a neuro-symbolic background for facing new, unmapped situations.
And it’s exactly here that the risk lives: a risk of homogenizing solutions, writing styles, innovation processes, visuals, and even code, in the case of massive reliance on LLMs. Unless, that is, the reliance is anchored in the path of a professional identity.
The professional path as identity
Think with me through an example:
We have a designer who works with digital products and who has always, since very young, fit into a more communicational axis.
She spent years going deep, building highly visual platforms with unique animations that deliver the impact many brands want.
She also grew by nurturing aesthetic sensitivity, managing to blend branding and product design very well on digital platforms.
In other words, everything she does visually is guided by a feeling built throughout that professional trajectory. And that feeling rests on concepts and methods studied for years, which allow her to move through very different problems without getting stuck.
That person may look at the current scenario and begin to feel anxiety and an identity crisis: “Maybe, before long, my work won’t be valued anymore because of AI.” With that, she might start using different LLMs to try to deliver in research areas (both qualitative and quantitative) and to produce Design Systems with ready-coded components.
What will probably happen in the long run if this person only creates outputs with AI instead of studying those fields? We lose a talent in one area to gain a generalist who produces a bit of everything, but only through the LLM. And because that production depends entirely on this kind of tool, her deliveries will tend to converge on the globally expected statistical average.
Now, what happens if that same person starts using AI to deepen her work, but also to experiment with adjacent areas in order to learn how they work? It’s at this point that I think we gain more from the use of LLMs today.
Beyond automating manual and exhausting tasks, the AI boost can be applied to our own training, enabling dialogue and cooperation with peers that used to be very difficult. And, returning to our dear Edgar Morin, collective thinking becomes more complex because, even though each person maintains their specialization, everyone can understand or do the basics of their teammates’ work, even if with less depth.
I think this is a subtle but fundamental distinction for understanding the difference between production and innovation in the twenty-first century.
Want to talk more about this?
Send me a message on LinkedIn.
References
GIBBONS, Sarah; SUNWALL, Evan. The Return of the UX Generalist. Nielsen Norman Group, Mar 28, 2025. Available at: https://www.nngroup.com/articles/return-ux-generalist/. Accessed on: Jan 28, 2026.
MORIN, Edgar. Les sept savoirs nécessaires à l’éducation du futur [Seven complex lessons in education for the future]. Paris: UNESCO, Oct. 1999. Available at: https://www.psychaanalyse.com/pdf/EDGAR_MORIN_LES_7_SAVOIRS_A_L_EDUCATION_DU_FUTUR.pdf. Accessed on: Jan 28, 2026.
STRYKER, Cole. Large language models. IBM Think, [n.d.]. Available at: https://www.ibm.com/think/topics/large-language-models. Accessed on: Jan 28, 2026.
ATTORESI, Massimo. Neuro-symbolic artificial intelligence. European Data Protection Supervisor (EDPS) — TechSonar, [n.d.]. Available at: https://www.edps.europa.eu/data-protection/technology-monitoring/techsonar/neuro-symbolic-artificial-intelligence_en. Accessed on: Jan 28, 2026.
CERVO, Matheus. Digital repositories for open data from anthropological research: a case study of the BIEV UFRGS. 2022. 178 p. Dissertation (Master’s in Communication) — School of Library Science and Communication, Federal University of Rio Grande do Sul, Porto Alegre, 2022. Available at: http://hdl.handle.net/10183/235396. Accessed on: Jan 28, 2026.