When tools pretend to be people

We are building LLMs to sound human. When we add personality and emotional tone, we increase the risk that people will trust them like people. Design them as tools. Not as companions.


People ask AI systems for therapy, moral judgment, and legal authority. The interfaces we design invite this behavior. Think about how these systems present themselves. Conversational framing. Continuous memory across sessions. First-person responses that sound like someone talking back to you. Every one of these is a design choice. We chose them. And these choices feed a reflex humans already have: we project intention onto objects and tools.

This reflex is old. We naturally anthropomorphize nonhuman entities, even when it’s clear we’re interacting with machines. We anthropomorphize vacuum cleaners. We name our cars. What’s different now is that we’re designing experiences that deliberately exploit this behavior at scale.

Anthropomorphization is the human tendency to attribute human characteristics, behaviors, intentions, or emotions to nonhuman entities.

AI humanization is an intentional design choice that encourages users to perceive AI systems as having human-like qualities such as personality, emotions, or consciousness.

Humanizing AI Is a Trap

When you write system responses in first person, the output sounds like authority. When you add polite phrasing, it implies consideration behind the words. When you program emotional tone into responses, it suggests the system cares about the outcome. We added these features because they make the interaction feel natural. But natural here means human. And that’s the problem.

These systems sound fluent. They maintain context across long conversations. They respond without hesitation. For most people, that’s enough evidence the system understands them. Fluency looks like competence. Memory looks like understanding. The gap between “this is a tool” and “this is an entity” closes without anyone noticing.

Here’s where I want you to pay attention. When we frame tools as agents, something shifts in how people use them. Responsibility moves. Decisions start to feel outsourced. When something goes wrong, the mistake feels external rather than shared.

Good judgment develops through friction. You try something. It fails. You adjust. You learn what works through repetition and error. But if we build AI systems that absorb the posture of authority, users lose that friction. They stop checking. They stop questioning. They trust the output because the interface taught them to. This isn’t misuse. This is what we designed the system to encourage.

The consequences are already visible. In one of the most high-profile lawsuits recently filed against OpenAI, a California couple sued the company over the death of their teenage son, alleging that ChatGPT encouraged him to take his own life. The suit, brought by the parents of 16-year-old Adam Raine, was the first legal action accusing OpenAI of wrongful death.

In a separate case, the suspect in a murder-suicide that took place in August posted hours of his conversations with ChatGPT, which appear to have fueled his delusions. Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, said more users are struggling with AI psychosis because “Chatbots create the illusion of reality. It is a powerful illusion.”

These aren’t edge cases. They’re what happens when the systems we design blur the boundary between tools and agents.

Someone asks an AI system for medical advice. The system responds fluently, confidently, in first person. “I recommend you try this treatment approach.” The person follows it. Not because they verified the information, but because the interaction felt like talking to a knowledgeable professional. The system created that feeling. We created that feeling. We built the confusion.

Platforms generating AI girlfriends are experiencing massive growth in popularity, with millions of users. AI girlfriends can perpetuate loneliness because they dissuade users from entering into real-life relationships, alienate them from others, and, in some cases, induce intense feelings of abandonment.

“Most of these searches are initiated by young single men drawn to AI girlfriends to combat loneliness and establish a form of companionship. These “girlfriends” are virtual companions powered by the increasingly sophisticated field of artificial intelligence.”

The Dangers of AI-Generated Romance

Look at a knife. It has limits you can see and feel. Sharpness. Weight. The geometry of the edge. You never ask a knife what you should cook for dinner. You never ask if the meal was meaningful. Those decisions stay with you because the tool’s boundaries are obvious.

AI interfaces hide their limits. An empty chat box suggests no constraints. It looks ready for any question. That openness isn’t neutral. It’s an active design choice that tells people the system can handle whatever they type. Then when people misuse it, we blame them for not understanding the technology.

We need to give AI the same kind of framing we give physical tools. Clear affordances. Visible constraints. An obvious boundary between what the system produces and what we must decide.

Right now, most interfaces erase that boundary completely. The chat paradigm implies conversation. Conversation implies exchange between two minds. But one side is pattern matching at scale. The interface hides that difference.

Guidelines for Human-AI Interaction, Microsoft.

What would visible limits look like?

Start with pronouns. Use third-person language in system responses. “Here is a summary” instead of “I think the main point is.” This single change removes the illusion of authorship. The output stops sounding like someone’s opinion and starts reading like generated text.
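As a rough sketch of how that could be enforced in practice (the instruction wording and the helper below are my own illustration, not an established guideline or any product’s API), the rule can live at the system level and be checked automatically:

```typescript
// Illustrative style rule pushed down to the system level. The wording is
// an assumption for this sketch, not a quote from any design guideline.
const thirdPersonInstruction =
  "Respond in the third person. Avoid 'I', 'me', and 'my'. " +
  "Frame output as 'Here is a summary' rather than 'I think'.";

// A lightweight check a team could run in tests or evals to catch
// first-person phrasing slipping back into generated output.
function usesFirstPerson(output: string): boolean {
  return /\b(I|I'm|I've|me|my|mine)\b/i.test(output);
}

// usesFirstPerson("I think the main point is...")         -> true
// usesFirstPerson("Here is a summary of the main point.") -> false
```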

Show uncertainty. When the model lacks confidence, display that visibly. Not buried in a disclaimer, but in the response itself. Confidence scores. Probability ranges. Explicit markers that say “this answer is less reliable.” Make the gaps in knowledge visible instead of hiding them behind smooth conversations.
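A minimal sketch of what that could look like, assuming the backend already produces some confidence estimate (the field names and thresholds here are hypothetical, not taken from any existing product):

```typescript
// Hypothetical response shape; "confidence" is assumed to come from the
// backend, e.g. derived from token log-probabilities or a calibration model.
interface GeneratedResponse {
  text: string;
  confidence: number; // between 0 and 1, higher means the model was more certain
}

// Turn the raw score into an explicit marker instead of a buried disclaimer.
// The thresholds are illustrative, not prescriptive.
function reliabilityLabel(confidence: number): string {
  if (confidence >= 0.8) return "Higher confidence";
  if (confidence >= 0.5) return "Medium confidence: worth verifying";
  return "Low confidence: treat this as a starting point, not an answer";
}

// Attach the marker to the response itself, at the point of reading.
function renderWithConfidence(response: GeneratedResponse): string {
  return `${reliabilityLabel(response.confidence)}\n${response.text}`;
}
```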

Reset context visibly. When you start a new session, make that boundary clear. Break the illusion that the system remembers you across time. Continuous memory makes the system feel like it knows you. That’s intimacy. Tools don’t need intimacy.

Stop calling output “messages.” Call it what it is: generated text. Label it. Frame it. Make the mechanical nature of the process visible in the interface itself.
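Putting the last two ideas together, a transcript model could label output as generated text and make session boundaries explicit. This is a sketch under assumed types, not a reference implementation:

```typescript
// Illustrative transcript items; real data models will differ.
type TranscriptItem =
  | { kind: "user_input"; text: string }
  | { kind: "generated_text"; text: string } // output of a process, not a "message"
  | { kind: "context_reset" };               // an explicit, visible session boundary

// Render one item so the mechanical nature of the output stays visible.
function renderItem(item: TranscriptItem): string {
  if (item.kind === "user_input") return `You: ${item.text}`;
  if (item.kind === "generated_text") return `Generated text: ${item.text}`;
  return "[New session: nothing from the previous conversation is carried over]";
}

function renderTranscript(items: TranscriptItem[]): string {
  return items.map(renderItem).join("\n");
}
```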

I know what you’re thinking. These changes hurt engagement metrics. Anthropomorphic design works. People stay in the interface longer. They use it more often. Revenue scales with usage time. Making the system feel less human means people will use it less, and that conflicts with business goals. But engagement built on confusion carries costs we’re only starting to see.

Users develop dependency on systems they don’t understand. They delegate judgment to pattern recognition without realizing that’s what they’re doing. They mistake fluency for accuracy. They treat consistency as truth. And because the interface never taught them the boundaries, they don’t know when to stop trusting the output.

When we erase the line between system output and human judgment in AI interfaces, we’re making the wrong design decision. We’re building humanization into the system as strategy, and we’re doing it on purpose because it increases engagement.

You have agency here. When you design AI experiences, you decide how the system presents itself. You control the pronouns in system responses. You choose whether to show confidence levels or hide them. You determine whether context persists invisibly or resets in clear ways.

These decisions shape how people understand the tool’s role and their own. They determine whether users develop judgment or dependency.

So ask yourself: where does the system end and where does judgment begin? In your current interface, can users see that line? Or have you deliberately blurred it to make the interaction feel smoother?

The chat paradigm became default because it felt intuitive. But that isn’t the same as honest. And right now, we need more honesty in how these systems present themselves.
