A Hippocratic Oath for tech… with teeth
From Frankenstein’s dilemma to the Greek agora, history offers blueprints for building tech industry legitimacy.

I was reading Frankenstein to my son last week (as one does in the days leading up to Halloween). As he fell asleep, I was struck by a thought. The “move fast and break things” ethos of the last two decades in tech is remarkably similar to the core themes of Shelley’s 200-year-old classic. The story of a creator obsessed with his own genius, blind to the consequences, who unleashes a powerful, uncontrolled creation into the world and then abdicates all responsibility for the harm it causes? An apt comparison, indeed.
Victor Frankenstein’s great folly lay not in his ambition, but in his abdication. Horrified that he “had turned loose into the world a depraved wretch,” he ran. He left his creation to be feared and misunderstood by a society that was not prepared for it.
This is the dilemma of the modern tech profession. We, the architects of the digital commons, have acted with the same hubris. We build systems optimized for profit and engagement, unleash them upon society, and then hide behind a structure of diffused responsibility when systemic harms emerge (teen mental health crisis, political polarization, mass addiction, erosion of privacy).
Which brings me to another idea that occurred to me around the same time: “Terms of Service.” No, I’m not talking about the little box you check without having read the thousands of words of dense, technical jargon. That is a symptom of the disease.
I’m talking about flipping that concept on its head: an enforceable “Terms of Service” pledge to be taken by tech professionals (myself included, as a UX and brand strategist), an oath to address “Frankenstein’s dilemma” and finally take responsibility for our (co)creation.
From Open Commons to Unregulated Market
This call to action is made urgent by the industry’s own evolution. The ethos of the early internet was rooted in the promise of a decentralized, open commons: a “freedom to connect, to innovate, to program, without asking permission.”

Unfortunately, that ethos has been superseded. The commons has been enclosed and commercialized. Today’s dominant platforms are not public utilities but engines of monetization, driven by an attention economy that treats human focus as its primary resource.
In his book Technofeudalism, Yanis Varoufakis describes how
“cloud capitalists… have mutated into something far grander: techno-lords presiding over cloud fiefs and accumulating a new form of rent, cloud rent, extracted from a new class of serf: the cloud serf.”
This fundamental change in purpose, from a tool for communication to a system for extraction, is precisely why the field is overdue for an ethical overhaul.
Precedents for Accountability
This is not the first “wild west” in professional history…
Imagine you are a stonemason in ancient Greece, struck by a fever that will not break. You seek help in the agora, but the marketplace of “healers” is a chaotic spectacle.
One man offers a poultice of herbs mixed with magical charms. Another, a priest, insists your illness is a divine punishment and demands a sacrifice. A third offers a potion, but you’ve heard whispers that some such men are not above accepting payment to administer poison. Who do you trust with your life when there is no standard to distinguish a healer from a charlatan? This was the reality of the “medical profession” in Greece, circa the 5th century BCE.
Enter the Hippocratic Oath: a radical intervention with a practical purpose and a lasting impact. A group of practitioners set themselves apart with a binding commitment to “abstain from whatever is deleterious and mischievous” and to act in the patient’s best interest, forging a pact of trust with a public desperate for it.

Centuries later, other professions forged their own pacts. The Ritual of the Calling of an Engineer, along with the conferral of an Iron Ring, was born from the 1907 Quebec Bridge collapse, which killed 75 workers due to design flaws. While the legend that the first rings were made from the bridge’s wreckage is an embellishment, its endurance speaks to the profession’s embrace of a foundational lesson in humility. Similarly, the legal and financial professions bind practitioners as ‘officers of the court’ or with a ‘fiduciary duty’, a legal standard obligating them to act in their client’s best interest, not their firm’s.
In all cases, these oaths provide a higher authority to which the professional must answer, overriding the demands of a boss or a project budget. The tech industry, while relatively nascent, operates as a great exception.
Built on what Harvard professor Shoshana Zuboff defines as
“the unilateral claiming of private human experience as free raw material for translation into behavioral data,”
our current model of “surveillance capitalism” is not an inevitable result of technology, but a specific “economic creation” that has thrived in secrecy and in the absence of law and regulation.
The resultant system of unchecked growth without accountability has become a crisis of governance. Our informational reality and civic lives are predominantly ruled by algorithms optimized not for truth, but for profit. In this economy, polarizing content is the most engaging and therefore the most profitable. Research confirmed that when Facebook’s algorithm (optimized for user engagement) was changed to boost posts with high emotional reactions, it “inadvertently promoted toxic and anger-inducing content.”

Many compare the tech landscape to a “pre-reckoning” Big Tobacco. A more useful comparison, however, might be the pharmaceutical industry. After the thalidomide disaster of the 1960s caused thousands of birth defects, governments began requiring “substantial evidence” of a drug’s safety and efficacy before it could reach the public.
We do not knowingly allow drug companies to “move fast and break things” with public health. Yet with technology, we continually do. Corporations treat multi-billion-dollar fines as a fractional “cost of doing business.” In 2023, the five largest tech firms were fined a combined $3.04 billion. They earned enough revenue to pay that entire sum in just over seven days.
The Architecture of Diffused Responsibility
The modern tech company is a marvel of diffused responsibility, a psychological phenomenon where individuals in a group feel less personal accountability. The UX designer optimizes for engagement. The engineer for stability. The product manager for growth. No single person can be held responsible for the systemic harms that emerge.
This is the system’s genius: it scales profit and impact while atomizing responsibility.

A new oath, therefore, cannot be a passive pledge. It must be an active, professional tool that empowers the practitioner to fight this diffusion. It must grant tech workers the formal standing to ask questions, the right to see data, and the power to refuse, not as malcontents, but as professionals upholding their code.
Consider the designer asked to create a notification system using intermittent variable rewards (the same mechanism as a slot machine) to maximize engagement. Ethical UX design principles dictate that a practitioner must “do no harm” and put the user’s needs first, but it would require something closer to a “Mandate of Inquiry” to fully empower the designer to ask: “Have we researched the potential impact of this on teen anxiety? Can I see that data before proceeding?” Under a protected oath, the designer is no longer a troublesome employee, but a professional upholding a standard.
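To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python) of what an intermittent variable reward schedule looks like in practice. The function name, probability, and simulation are illustrative assumptions for this essay, not code from any real product:

```python
import random

# Hypothetical sketch of an intermittent variable reward schedule: pending
# notifications are released unpredictably rather than immediately, the same
# reinforcement pattern a slot machine uses. The function name and the 30%
# probability are illustrative assumptions, not any real product's code.

def should_deliver_notifications(pending_count: int,
                                 reward_probability: float = 0.3) -> bool:
    """Randomly decide whether to release pending notifications on this check.

    Because any given check *might* pay off, unpredictable delivery trains
    the user to keep checking -- the vulnerability the Mandate of Inquiry
    would empower a designer to question.
    """
    return pending_count > 0 and random.random() < reward_probability

# Simulate a user opening the app ten times with five notifications queued.
if __name__ == "__main__":
    for check in range(1, 11):
        hit = should_deliver_notifications(pending_count=5)
        print(f"check {check}: {'reward' if hit else 'nothing'}")
```

The point of the sketch is how little code the pattern requires; the ethical weight sits entirely in the decision to ship it, which is exactly where a Mandate of Inquiry would intervene.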
It is crucial that such a framework be understood as a set of enforceable rights, not as another aspirational pledge. The landscape is littered with “feel-good” oaths and top-down corporate principles published by the very companies causing harm. These top-down models are ultimately toothless because they disempower the individual. They are tools for legal compliance and public relations, not for the practitioner in the “engine room.”
Even our most evolved professional codes, such as those from the Association for Computing Machinery (ACM), are largely silent on the employer-employee power dynamic. They are excellent at describing a practitioner’s duty (to prioritize the public good, to “avoid harm”, and to “challenge unethical rules”), but they provide no tool or protection to do so against a conflicting order from management.
In other words, they tell the engineer to wave a flag at the runaway train, but offer no way to stop it. The practitioner who follows this duty is often the one who gets steamrolled, fired as a “dissenting employee” with little recourse.
The following Mandates, therefore, propose a fundamental shift: from a passive duty to an active power. They are designed to arm the practitioner for that inevitable moment of conflict. While admittedly imperfect and conceptual, they are intended to spark a necessary conversation about what a framework of enforceable and protected ethics must look like, and how we, as a profession, can begin to implement it.

The Architect’s Mandate: A Proposed Code
A functional oath for a tech professional would be less a pledge and more a formal declaration of professional obligations:
I. The Mandate of Inquiry
I shall have the protected right to ask “Why?”: to perform a root cause analysis on the intent behind the features I am asked to build. I will not be satisfied with metrics of “engagement” as a substitute for human value.
- This mandate is a direct antidote to the “architecture of diffused responsibility”. While existing codes suggest practitioners should “reflect upon the wider impacts of their work” or “talk across teams”, this provides a formal, protected right to demand answers and data, piercing the veil of organizational siloing.
II. The Mandate of Consequence
I shall have the right to access any and all research on the likely human impact of my work before it is deployed. I pledge to place the long-term well-being of the user above the short-term metrics of the platform.
- This right is a novel mechanism for accountability. While human rights organizations often partner with engineers to analyze data, and ethical frameworks discuss organizational “transparency”, no code grants the individual practitioner an affirmative right to access internal impact research before a product’s release.
III. The Mandate of Refusal
I shall not be compelled to design, build, or deploy systems that I believe, in my professional judgment, are designed to exploit human vulnerabilities, erode agency or privacy, or amplify division. I will have the protected right to refuse such work without fear of retaliation.
- This mandate radically expands the existing “right to refuse” work. Current legal protections are almost exclusively limited to situations of “imminent danger” of “death or serious injury”. This proposes a new professional standard of refusable psychosocial harm, and demands explicit protection from retaliation for what amounts to secular, professional “conscientious objection”.
IV. The Mandate of Precedence
I pledge to hold my duty to the public (their safety, their agency, and their mental well-being) as paramount, above my duty to my employer’s stock price or my project’s quarterly goals.
- This is the foundational principle of all mature professional codes. The National Society of Professional Engineers, for example, states that the “highest ethical obligation is to protect the public health and safety”. This mandate affirms that this established principle must finally be applied to the digital world.
V. The Mandate of Testimony
I shall have the protected right to bear witness to systemic public harm caused by a product I helped create, even after its release. My professional duty to warn the public shall supersede any non-disclosure agreement or corporate policy, and I shall not be subject to retribution for this testimony.
- This is a potent synthesis of legal and ethical precedent. It argues that a tech professional’s “duty to warn” the public of danger, a duty that supersedes confidentiality in medicine and traditional engineering, must also supersede a non-disclosure agreement or policy. While whistleblower laws already protect reporting of illegal activity and courts can compel testimony, this mandate seeks to extend that protection to testimony regarding systemic public harm that is not yet explicitly illegal.
VI. The Mandate of Audit
I shall have the protected right to audit the data sets and models I am asked to use before deployment. I will have the right to test for, document, and mitigate algorithmic bias, even if it conflicts with a project’s timeline or stated goals.
- This mandate targets a well-documented harm: AI systems that perpetuate and amplify societal biases. An algorithm is often “only as good as the data that was used to train [it]”. While some laws require top-down corporate “bias audits”, this mandate grants the right to the practitioner, empowering them to test models and retrain or suspend biased algorithms as a matter of professional practice (see the sketch after this list).
VII. The Mandate of Sustainable Design
I shall have the protected right to refuse work that I believe, in my professional judgment, intentionally implements planned obsolescence or demonstrably contributes to environmental degradation.
- The tech industry is a massive contributor to e-waste and environmental harm, often by design (a “deliberate industrial strategy”). This mandate extends the “right to refuse” beyond immediate human danger to include systemic environmental danger, empowering engineers to advocate for sustainable “green” engineering principles and refuse to participate in practices they know to be destructive.
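To ground the Mandate of Audit, here is a minimal, hypothetical sketch of the kind of pre-deployment check a practitioner might run: comparing a model’s positive-prediction rates across demographic groups. The data, group labels, and 0.8 threshold (the EEOC’s “four-fifths” rule of thumb from employment law) are illustrative assumptions, not a prescribed standard:

```python
from collections import defaultdict

# Hypothetical sketch of a practitioner-run bias audit: compare a model's
# positive-prediction rates across demographic groups. Data, group labels,
# and the 0.8 threshold (the EEOC's "four-fifths" rule of thumb) are
# illustrative assumptions, not a prescribed standard.

def positive_rates(predictions: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-prediction rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group rate to the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model decisions
    groups = ["a", "a", "a", "a", "a",
              "b", "b", "b", "b", "b"]         # group membership
    rates = positive_rates(preds, groups)
    ratio = disparate_impact_ratio(rates)
    flag = "FLAG FOR REVIEW" if ratio < 0.8 else "ok"
    print(rates, f"ratio={ratio:.2f}", flag)
```

A real audit would of course involve far more than one metric; the sketch simply shows that the first test is often trivially cheap to run, which makes denying a practitioner the right to run it all the harder to defend.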
The framework outlined here is not a plea for rebellion. If the “move fast and break things” era was marked by a defiant, adolescent, Frankenstein-esque hubris, this proposal is, by contrast, a declaration of the industry’s maturation. It is our chance to act on what Frankenstein only briefly glimpsed long after abandoning his creation, when he confessed,
“For the first time, also, I felt what the duties of a creator towards his creature were…”
A codified set of principles, with the ultimate goal of legal enactment, is what separates a collection of atomized employees from a unified profession. A single person can be fired; a profession with a shared, legally enforceable code of conduct has power.
We are not at the mercy of an unregulated “tech monster” beyond our control. We are its architects and gatekeepers. This is the moment we stop abdicating and choose to fulfill the duties of a creator.
This is the blueprint. It is time to take ownership.
I invite a public critique of these mandates from all professionals in the field. Join the conversation in the responses below.