The horrors of designing for omniscience

What happens when systems decide the human is ‘all-knowing’

Cover design: title with ominous, illustrative eyes peering through fur

Ever click a button and have a system misbehave with no warning, no feedback, and no way to undo, only to feel the repercussions minutes, or even days, later?

This is what I’ve coined “Designing for Omniscience” (and the horrors thereof). It’s what happens when a system assumes the human on the other end is ‘all-knowing’ and proceeds in silence. This arrogance of finality can scar (or scare), with real-world consequences beyond the screen that, in some cases, can be catastrophic.

Many systems are accidentally designed to act with certainty. I’ll unpack the hidden ghosts of imperfection in interfaces we rely heavily on to handle our finances, ensure smooth healthcare transactions, make online purchases, and govern how we secure our homes and travel.

Horror #1: Design systems devoid of humanity

To be human is to make mistakes. But every product makes a choice: to create space for human error, or silently treat it as intent. What happens when systems act in such a way that the human is blamed for the system’s oversight? Earlier this month, I found myself in this scenario with a healthcare portal.

It’s common knowledge, through our collective experience as patients, that some medical software is, at best, a patchwork of older systems and code. Don’t be surprised if cobwebbed corners of COBOL are still responsible for critical functions behind the curtain of the municipal and financial software you rely on.*

Both healthcare professionals and patients who rely on these tools are forced to use software that feels like a journey back to the infancy of the internet. Part of the problem is that the software is optimized for the larger healthcare company rather than the people who use it.

The problem with optimizing for one persona is twofold:

  1. The system is developed for the licensee’s benefit. Designed for data compliance, billing accuracy, and liability management, it leaves a gap for the lived experience of patients (not to mention the most vulnerable populations seeking care);
  2. The system is brittle. When those least equipped to navigate it (elderly patients, caregivers, busy parents managing an entire family’s care) must do so, errors turn into billing nightmares or missed care, creating unnecessary burdens from a system that assumes the patient is all-knowing.

For nearly a decade, I used a popular digital payment service for online transactions. When I added a virtual card as one of my payment methods in my healthcare provider’s portal, I wasn’t aware that the system would decline the payment despite a funded checking account. There was no alert, warning, or feedback. The portal never asked me to verify the card before payment, nor did it signal that anything could go awry.

Rough sketch of the software’s Billing and payments page, with no update, read, or delete experience to verify or change payment.

The first time, I assumed it was a temporary glitch in the portal, since I’d never had problems using it. A day after the second decline, I received formal communication framing a routine billing issue in alarmingly severe terms. Later, I discovered that the system allows users to add payment methods it cannot always reliably process, due to privacy and compliance regulations that virtual cards are not held against. The typical user will never have this knowledge.

Rough sketch of ‘Add card’ interface. Defaulting a new card is unnoticeable and prechecked.

This is a result of “Designing for Omniscience.” The software creates a lived experience with a horrifying threat for the end user by prioritizing transaction finality and legal defensibility over people and their well-being. The most vulnerable populations won’t experience ‘Designing for Omniscience’ as a minor bug but as a systemic failure, one with no business incentive to fix. It’s software built on the idea that the intended beneficiary is 100% responsible for an unvalidated click.

Horror #2: Granting unearned ‘God mode’

Imagine you’re facing the cockpit of a commercial aircraft. These systems are built on a design ethos that assists pilots in their decision making and sets parameters to check human choices against environmental factors. Now consider, for a moment, that while activating another control you accidentally activate the thrust reverser, a device that redirects engine power to slow the aircraft on the ground. Sensors confirm when the reversers can be deployed on the ground, but if you try to activate them during flight, the system does more than display a warning; it prevents you from taking the action. Aeronautical system design employs situational awareness and anticipates human error to protect everyone on board.
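To make the contrast concrete, here is a minimal, hypothetical sketch of that kind of interlock, with invented names like deployThrustReversers and weightOnWheels standing in for the real avionics: the command is refused outright when its preconditions fail, rather than merely warned about.

```typescript
// Hypothetical interlock sketch: the command is blocked, not just flagged,
// when sensors say the aircraft is not on the ground.
interface SensorState {
  weightOnWheels: boolean; // true only when landing gear confirms ground contact
  airspeedKnots: number;
}

type CommandResult =
  | { status: "executed" }
  | { status: "blocked"; reason: string };

function deployThrustReversers(sensors: SensorState): CommandResult {
  if (!sensors.weightOnWheels) {
    // The system does more than display a warning: it prevents the action.
    return {
      status: "blocked",
      reason: "Thrust reversers can only be deployed with weight on wheels.",
    };
  }
  return { status: "executed" };
}

// In flight, the request is refused rather than silently accepted.
console.log(deployThrustReversers({ weightOnWheels: false, airspeedKnots: 280 }));
```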

In 2021, a video-on-demand streaming service sent a blank test email, subject line “Integration Test Email #1,” to 6 million subscribers, bypassing the internal test list. The mass mailing was sent by an intern who likely received insufficient system feedback when confirming, with no perceived consequences before clicking “Send.” Ask any Site Reliability Engineer (SRE) about failsafes, and they’d wax lyrical about the concept of a “game day”: a simulated environment for practicing disasters and running tests in a controlled learning setting before a real incident.
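As a hedged illustration of what that rehearsal-first posture could look like inside an email tool, here is a hypothetical sketch (the names sendCampaign, TEST_LIST, and dryRun are invented, not the service’s real API): the default path routes to a small test list, and reaching the real audience requires an explicit, deliberate flag.

```typescript
// Hypothetical sketch: sends default to a dry run against a small test list;
// reaching the full audience requires an explicit, deliberate choice.
interface Campaign {
  subject: string;
  body: string;
  audience: string[]; // the full subscriber list
}

const TEST_LIST = ["qa+1@example.com", "qa+2@example.com"];

function sendCampaign(
  campaign: Campaign,
  options: { dryRun: boolean } = { dryRun: true }
): string[] {
  // Choose recipients: the rehearsal list unless a real send is requested.
  const recipients = options.dryRun ? TEST_LIST : campaign.audience;
  // deliver(recipients, campaign) would run here in a real system.
  return recipients;
}

// Default behavior is the rehearsal, not the incident.
const campaign: Campaign = {
  subject: "Integration Test Email #1",
  body: "",
  audience: ["subscriber-1@example.com" /* ...and six million more */],
};
console.log(sendCampaign(campaign)); // only the test list is touched by default
```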

The company’s help team took to a social media platform and apologized for the incident, but called out its intern.

We mistakenly sent out an empty test email to a portion of our HBO Max mailing list this evening. We apologize for the inconvenience, and as the jokes pile in, yes, it was the intern. No, really. And we’re helping him through it. — @hbomaxhelp
Twitter post from June 2021

What they didn’t expect was thousands of industry professionals sharing their stories of mistakes made with the tools they use to carry out their jobs. Most replied with posts confirming they, too, had experienced poorly designed, hands-off systems that treat the “Send” button as low friction, if not frictionless. In this case, the software assumed the user understood the volume of the chosen distribution list and confidently skipped any draft mode or sandbox before following the send command.

Being able to select the full list of subscribers from a test account is nerve-wracking; being able to hit send with no failsafe is a design flaw that rightly places anxiety on the user. Here, the system assumed a level of confidence in the intern, who likely hadn’t yet earned that level of trust, yet was afforded an amount typically granted to a senior or seasoned manager driving a marketing platform.

Without guardrails like role-based access control (RBAC), software projects a fallacy of perfect knowledge onto humans, granting free rein and superpowers over decisions that can ripple into cascading issues. Imagine a system design deciding that a newly onboarded user would never confuse a test list with six million real subscribers.
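Here is a hedged sketch of how an RBAC check could gate a bulk send, with invented roles and thresholds (canSend, MAX_AUDIENCE, and the role names are assumptions, not any vendor’s actual model): the system compares the requested audience size against what the requester’s role has actually earned, and routes anything larger to approval.

```typescript
// Hypothetical RBAC sketch: each role earns a maximum audience size, and
// anything above it requires an approver instead of silently going through.
type Role = "intern" | "marketer" | "senior_manager";

const MAX_AUDIENCE: Record<Role, number> = {
  intern: 100, // test lists only
  marketer: 50_000,
  senior_manager: 10_000_000,
};

function canSend(
  role: Role,
  audienceSize: number
): { allowed: boolean; needsApproval: boolean } {
  const allowed = audienceSize <= MAX_AUDIENCE[role];
  return { allowed, needsApproval: !allowed };
}

// A newly onboarded user asking to reach 6,000,000 people is routed to approval.
console.log(canSend("intern", 6_000_000)); // { allowed: false, needsApproval: true }
```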

Sketch illustration of supportive Twitter posts from other tech professionals. Sources: https://x.com/rakyll/status/1405752437286133760, https://x.com/carotechie/status/1406070141604052994, https://x.com/chorne_/status/1405917919108685825

Horror #3: Speed over protection

Imagine navigating a workflow with a wall of input fields and checkboxes to complete a task involving enormous sums of money, with no final summary or warning about the funds about to change hands.

Image of the hired financial management software.
Source: United States District Court, Southern District of New York, 2021, Figure 1

Financial tools, if not designed with the end user in mind, can be a minefield of detrimental UX challenges. In 2020, a financial institution intended to make a $7.8M interest payment to its lender. The institution employed a loan operating system to handle its financial transactions. The software was older and inherently complex, and its interface mistakenly allowed a wire disbursement of the full principal, just shy of $900 million.

A built-in pause: When I was purchasing my home, my financial institution paused before issuing a cashier’s check for an amount it saw as unusual for my spending pattern. Seeing the recipient was an Esq., the banker assumed it was for a home down payment, asked if that was the case, and gave me a surprisingly heartfelt congratulations when I confirmed. This subtle, in-person fact-checking process was designed with humans and connection in mind, to protect both the individual and the institution. The banking software above lacked that layer of contextual awareness, and the opportunity to flag a high-stakes transfer was missed. A pause like this can catch errors before they become legal pain.

Product teams designing against omniscience today:

  1. Create a pattern of verifying and communicating with context. Affirming: “Are you sure? You’re sending an email to 6M subscribers” or “$900M will be funded toward principal. Is this information correct?” and allowing multiple people to check the summary (see the sketch after this list).
  2. Enforce awareness and constraint. Build safe systems that warn you and prevent errors before they happen. When enforcing constraints, the system can also collect data that improves contextual awareness and pattern recognition over time, circumventing a crisis.
  3. Allow for good service management. They know that not every user needs access to ALL commands. Approvals and governance give far more freedom, reinforcing a culture of psychological safety in organizations big or small.
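As a hedged illustration of the first two patterns, here is a hypothetical confirmation step (confirmationPrompt, HighStakesAction, and the figures are invented for illustration): it restates the blast radius, the audience size or dollar amount, in plain terms and refuses to proceed unless that exact figure is acknowledged.

```typescript
// Hypothetical sketch: a confirmation that restates the blast radius and
// blocks execution if what was confirmed does not match what would happen.
interface HighStakesAction {
  description: string; // e.g. "Send campaign" or "Fund principal"
  magnitude: number;   // subscribers reached, or dollars moved
  unit: "subscribers" | "USD";
}

function confirmationPrompt(action: HighStakesAction): string {
  return (
    `Are you sure? ${action.description}: ` +
    `${action.magnitude.toLocaleString()} ${action.unit}. Is this correct?`
  );
}

function execute(action: HighStakesAction, confirmedMagnitude: number): string {
  if (confirmedMagnitude !== action.magnitude) {
    // Mismatch between what was confirmed and what would actually happen: stop.
    return "Blocked: confirmation does not match the actual amount.";
  }
  return "Proceeding."; // the real command would run here
}

console.log(
  confirmationPrompt({ description: "Send campaign", magnitude: 6_000_000, unit: "subscribers" })
);
console.log(
  execute({ description: "Fund principal", magnitude: 900_000_000, unit: "USD" }, 7_800_000)
); // => "Blocked: confirmation does not match the actual amount."
```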

Every system’s design pattern carries the weight of a necessary decision: who carries the burden of cognitive load. Each pattern asks, “Will this flow remove the final confirmation for powerful commands?” “Will every weighted command be granted by default?” “Can we give users a choice about which failures to silence?” Diverse teams can choose differently, building systems that balance power, protection, and forgiveness, and making the choice for more efficient, safer software. The alternative steers us toward a future in which the very technologies built to serve us, as we quickly make advances, carry ever larger implications because they are designed to keep seeing users as omniscient. That’s the greatest horror of all.

*Sources:
NJ needs volunteers who know COBOL… (2020); NYDOL
If COBOL is so problematic, why does the US gov’t still use it? (2025)

