The ship of Theseus paradox in AI-assisted writing

Turns out, the more feelings in your text, the weirder it feels to let AI touch it.

The Ship of Theseus reimagined: a vessel where each wooden plank is replaced with a silicon chip
The Ship of Theseus reimagined: a vessel where wooden planks are being replaced with silicon chips. Generated in Sora.

I use AI writing tools the way most people use spellcheck: casually, constantly, almost without thought. A typo here, a phrasing fix there, a quick “make it flow better.” Over time, it’s become second nature that I write through the machine.

And that’s not unusual. Grammarly alone reports over 40 million users across 50,000 organizations and 96 percent of the Fortune 500 (source: Grammarly.com). The habit has gone mainstream: invisible, automatic, and everywhere.

But every so often, I pause over a polished paragraph and feel an odd flicker of detachment. The words look like mine, sound like mine, but they’ve passed through someone else’s hands. Or circuits. Or whatever metaphor fits. How many edits does it take before the voice that returns isn’t me anymore?

It’s the Ship of Theseus. Turns out an ancient Greek thought experiment still works in 2025. Swapping planks on a ship is just the new metaphor for rewriting yourself through syntax. Each replaced plank is a synonym, each tightened sentence a substitution, until the vessel of thought sails on. The ship, and the writing, are familiar in shape but foreign in soul.

And this anxiety about “losing your voice” didn’t arrive with ChatGPT. As others have noted, the fear of writing identity slipping away has circulated for years, long before AI became a default tool (see Emma Identity’s 2017 essay on textual fingerprints). What’s different now is how automated the erosion feels.

A Family-and-Friends Field Test

I wanted to understand that uneasy drift between “me” and “machine,” so I ran a small, human-scale test. A handful of volunteers joined in: friends, family, and a few generous redditors who gave five minutes of their lives to a Qualtrics link.

Diagram of the study procedure: participants wrote a casual text, an essay paragraph, and a code snippet, then passed each piece through an AI tool ten times and rated how much the rewritten text still felt like their own. Original illustration by the author.

Each person worked across three familiar modes: a casual text, a short essay paragraph, and a code snippet. They wrote something genuine, then passed it through an AI writer of their choice ten times in a row. After each iteration, they rated it on a 1–7 scale: “How much is this still mine?”

The survey itself ran on Qualtrics; I cleaned and visualized the data in R, using a simple mixed-effects model to trace how that sense of ownership decayed over time. Everyone consented, all responses were anonymized, and what’s shown here reflects only the aggregate picture.

Perceived ownership steadily declines with each AI rewrite, dropping fastest for texting and slowest for code.

The ownership ratings fell off a cliff around the third or fourth rewrite for texting and essay writing, but held surprisingly steady for code. A spline model (R, lme4, three-knot natural spline) showed the steepest drop early on: texting collapsed fastest, essays followed, code barely flinched.

By the numbers:

  • Texting: sharpest drop of –1.4 points around iteration 3.
  • Essay: a slower, smaller decline (–1.25 points near iteration 5).
  • Code: almost no falloff (–0.3 points around iteration 4).

When asked when their text felt “more AI than me” (below a 4 on the 1–7 scale), people hit that midpoint at iteration 3 for texting, iteration 7 for essays, and never for code.
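That “more AI than me” crossing point is simple to compute: find the first iteration where the mean rating dips below the scale midpoint of 4. A minimal sketch, using made-up per-iteration means that merely illustrate the shape of the decay (not the study’s actual data):

```python
# Hypothetical per-iteration mean ownership ratings on the 1-7 scale.
# These numbers are invented for illustration only -- they are NOT the
# real study data, though they mimic the reported crossing points.
mean_ratings = {
    "texting": [6.5, 4.8, 3.6, 3.0, 2.8, 2.6, 2.5, 2.4, 2.4, 2.3],
    "essay":   [6.4, 6.0, 5.6, 5.2, 4.8, 4.3, 3.8, 3.5, 3.3, 3.2],
    "code":    [6.2, 6.1, 6.0, 5.9, 5.9, 5.8, 5.8, 5.8, 5.7, 5.7],
}

MIDPOINT = 4.0  # below this, the text feels "more AI than me"

def crossing_iteration(ratings, midpoint=MIDPOINT):
    """Return the first 1-indexed iteration whose mean rating falls
    below the midpoint, or None if it never does."""
    for i, rating in enumerate(ratings, start=1):
        if rating < midpoint:
            return i
    return None

for mode, ratings in mean_ratings.items():
    print(mode, crossing_iteration(ratings))
```

With these illustrative numbers, texting crosses at iteration 3, essays at iteration 7, and code never does, matching the pattern described above.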

This aligns with a broader cultural confusion around what “human-written” even looks like. As Aaron Pace recently wrote, the more polished a piece of writing is, the more likely AI detectors are to label it “machine-made”, a strange reversal in which professionalism now reads as artificial. My data shows the inverse feeling: the messier and more intimate the writing, the more its owner wants to keep machines out.

What the Humans Said

After the survey, I spoke with a few participants again, some over Zoom, some across a kitchen table. Their reactions lined up quite cleanly with the numbers. The more personal the writing, the more wrong it felt to let an AI touch it.

When it came to texting, everyone hesitated. That space is messy and intimate. You’d use lowercase shorthand, inside jokes, typos, emojis that say more than syntax ever could. Watching an AI rewrite that kind of message felt invasive, like handing your phone to someone else mid-conversation.

Essay writing was easier to surrender. Participants said it felt “professional,” “detached,” something meant to be polished anyway. The AI’s edits felt like a second pair of eyes rather than a replacement of self.

And code, unless you were a computer science student who lives in it, wasn’t personal at all. In fact, most people said they needed AI to even get started, describing the help as “welcome,” “expected,” or “honestly, a relief.” For them, authorship was more functional than emotional. The machine wasn’t rewriting their voice, it was scaffolding their logic.

One person linked their experience to the idea of vibe coding, which, as an IBM article described, is the emerging practice of writing code by talking to AI using natural language prompts instead of lines of syntax. They acknowledged that software engineers often dismiss it as unserious, even lazy, but for them, it was liberating. As a non–computer science student, they said vibe coding made the process less intimidating and more creative.

It echoed something Jasmine McCandless captured: people don’t pick the “best” writing tool, they pick the one that demands the least effort. But effort can manifest in many ways. And one of them is emotional risk. In my study, the more personal the writing, the more people resisted AI intervention. Once the task shifted into professional or functional territory, the comfort gap widened and AI quietly became the low-effort choice.

A person giving a robot directions on what to write
Collaboration between a human writer and an AI assistant. Original illustration by the author.

Takeaways for UX Design

The tension here is becoming infrastructural. René Najera recently wrote about Grammarly’s new Authorship feature, which records your keystrokes and generates a replay of how a document came together. Their point is simple: in an age of ubiquitous AI tools, writers now feel pressure not only to produce authentic work, but to prove they made it. My findings speak to the same impulse from the user side: authorship isn’t just who typed the words, but how much of the process still feels human, intentional, and personally owned.

Maybe the real product insight here is that Grammarly should just buy Copilot and call it a day. One interface to smooth every sentence, human or machine. Beneath the joke, though, is a deeper cue for design: people’s comfort with AI assistance scales with emotional distance.

Texting felt intimate, messy, and human. That’s where participants pulled back. They didn’t want the machine polishing what was meant to be personal. Essay writing lived in the middle zone, where improvement feels professional rather than invasive. Code, on the other hand, was almost depersonalized; help there felt efficient, not existential.

The takeaway isn’t to build less AI assistance. It’s to build context-aware assistance. In spaces where expression carries identity (messages, creative writing, voice), AI needs to step lightly, amplifying rather than rewriting. In functional spaces such as emails, documentation, and code, it can take the wheel without moral friction.

Designing for this gradient of intimacy means giving users control over how much authorship they’re willing to trade. The future of AI writing UX isn’t about raw capability. It’s about sensitivity: knowing when to finish your sentence, and when to leave your typos alone.

In practice, this could look like:

  • Surface adjustable “AI intensity” controls directly in writing surfaces.
    Let users slide between light-touch suggestions (“fix punctuation only”) and heavier rewrites (“rewrite for clarity”).
  • Use mode-aware defaults.
    If the system detects texting-like language (emojis, slang, rapid back-and-forth), default to minimal intervention. For structured writing (docs, reports, code blocks), default to more active assistance.
  • Show the delta, not just the final rewrite.
    Display what was changed and why. Visible authorship boundaries reduce the feeling of being overwritten.
  • Offer reversible, granular edits.
    Let users accept or reject changes at the sentence or phrase level, rather than forcing an all-or-nothing rewrite.
  • Preserve voice automatically.
    Train style models on the user’s past writing and ensure AI edits mimic that tone rather than flattening it.
  • Avoid “dominant tone” suggestions in intimate spaces.
    Don’t push “professional tone,” “confident tone,” or “assertive tone” suggestions inside chat or messaging contexts. It reads as intrusive.
  • Let users lock sentences or paragraphs.
    If a line feels emotionally important or identity-revealing, users can mark it: “don’t rewrite this.”
  • Respect the emotional labor of writing.
    When edits touch personal content, frame them as support (“Want help clarifying this?”) rather than replacement (“Here’s a better version”).
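The “mode-aware defaults” idea above can be sketched in a few lines. Everything here is an assumption for illustration — the signal list, thresholds, and mode names are hypothetical, not a shipped design:

```python
import re

# Hypothetical intensity levels a writing surface might default to.
INTENSITY_LEVELS = ("minimal", "moderate", "active")

# Crude texting signals: emoji and common chat slang. Both sets are
# illustrative assumptions, not a validated classifier.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")
SLANG = {"lol", "omg", "idk", "tbh", "brb", "lmao"}

def default_intensity(text: str) -> str:
    """Guess how heavily the AI should intervene by default.

    Texting-like signals (emoji, slang, short all-lowercase fragments)
    push toward minimal intervention; structured prose toward active
    assistance.
    """
    words = text.lower().split()
    intimate_signals = 0
    if EMOJI.search(text):
        intimate_signals += 1
    if any(w.strip(".,!?") in SLANG for w in words):
        intimate_signals += 1
    if text and text == text.lower() and len(words) < 15:
        intimate_signals += 1  # short, all-lowercase: reads like a chat message

    if intimate_signals >= 2:
        return "minimal"   # step lightly in intimate spaces
    if intimate_signals == 1:
        return "moderate"
    return "active"        # docs, reports, code: take the wheel
```

A real system would use a trained classifier rather than regexes, but even this toy version encodes the design principle: intervention intensity should track the intimacy of the writing, not default to maximum help everywhere.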

There’s an argument that emotionally grounded writing acts as a kind of “shield” that AI can’t easily penetrate, the kind of work rooted in lived experience, personal history, or emotional nuance that resists mechanization by its very nature. And maybe that’s the point: the parts of writing that matter most aren’t the ones AI can smooth; they’re the ones only a human can feel.

Alice Ji is a PhD researcher at UIUC’s Institute of Communications Research, studying digital persuasion, attention, and interface design. Portfolio at https://alice-ji.github.io/project.html


The ship of Theseus paradox in AI-assisted writing was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.

 
