When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

There is a photograph from 1930s East London that should be mandatory viewing for anyone anxious about AI taking their job. It shows Mary Smith, a “knocker-up” — a woman whose entire profession consisted of walking the streets at dawn, shooting dried peas at bedroom windows with a long bamboo pole to wake workers for their factory shifts.

Mary Smith, a “knocker-up” in East London (1930s)

Mary charged sixpence a week for this service. Then the alarm clock arrived, and Mary’s job vanished.

Do we mourn the knocker-up today? Do we rage against the tyranny of the alarm clock? Of course not. Because what happened to Mary is what has happened throughout every technological revolution: certain tasks became obsolete whilst entirely new categories of work emerged.

Yet here we are in 2025, gripped by the same ancient panic. AI is coming for our jobs, we are told. The robots will replace us. The future is bleak.

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

The evolution we refuse to see

Human work has always been in flux. In nomadic societies, we hunted and gathered. Agriculture tethered us to the land and the rhythms of the seasons. Industrialisation moved us into factories, trading physical labour for wages. The information age shifted us again — this time from brawn to brain, from making things to managing data, information, and knowledge.

Each transition has redefined not just what we do, but what remains uniquely human about work. We moved from physical exertion to cognitive processing. Now we have built what I call “deepware” — neural networks and machine learning systems layered atop our traditional software and hardware.
These architectures can process, pattern-match, and produce at speeds we cannot fathom. Yet in our rush to innovate, we have convinced ourselves that deepware can carry the weight of responsibility our wetware — our human brains and nervous systems — seems increasingly willing to surrender. We are moving again, but this time the question is whether we are moving forward or simply moving away.
Because this time, the shift is not from physical to mental. It is from execution to responsibility.

A horizontal timeline infographic titled “The Evolution of Human Work.” It shows five large coloured circles connected by a black arrow, each representing a stage of labour. Nomadic (Hunting): “Physical survival, hunt and gather.” Agricultural (Farming): “Tied to land and seasonal cycles.” Industrial (Factories): “From muscle to machine labour.” Information (Knowledge): “Brains over brawn: data and knowledge.” Deepware (Responsibility): “Execution shifts to responsibility and ethics.”
The evolution of work, from hunting to responsibility.

As Arvind Narayanan at Princeton observes, jobs are not monolithic entities. They are bundles of tasks. Some tasks are routine and susceptible to automation. Others require distinctly human qualities: judgement, ethics, accountability, wisdom.

The question is not whether AI can perform tasks. It demonstrably can, often better than we can. The question is who takes responsibility when those tasks produce consequences in the real world.

Outputs are not outcomes

This distinction matters more than most organisations realise.

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgement.
AI can generate outputs. It cannot, however, create outcomes. Because outcomes require something deepware fundamentally lacks: the wetware capacity for responsibility.

Computers cannot be held accountable, as IBM recognised in 1979. That truth has not changed. What has changed is our willingness to pretend otherwise.

IBM’s note stating: A computer can never be held accountable. Therefore a computer must never make a management decision.
IBM note, 1979

The accountability vacuum

We have already seen the cost of this pretence. A lawyer in the US submitted AI-generated briefs filled with fabricated cases. A trial in Australia was delayed because no one had verified the AI’s outputs. These were not technological failures — these were failures of human responsibility.

When a professional says “the AI recommended it,” they are engaging in the same moral abdication as someone saying “I was just following orders.” Both statements attempt to transfer accountability to something — or someone — incapable of bearing it.

This is the dangerous seduction of deepware: it offers us the illusion that we can delegate not just tasks, but responsibility itself.

We cannot. We must not.

The future of work, then, is not about humans competing with AI for task execution. It is about humans stepping into roles where we exercise oversight, judgement, and accountability over the outputs that AI produces. This is not replacement. This is elevation.

But only if we prepare for it.

The skills deepware cannot replicate

Here is what the regulations already understand, even if organisations do not: responsible AI demands responsible humans.
The EU AI Act mandates human oversight, transparency in decision-making, and clear accountability chains. Not as a bureaucratic burden, but as a fundamental safeguard.

Because here is the uncomfortable truth about deepware: it learns from historical data, which means it perpetuates historical biases.

Amazon’s recruitment algorithm discriminated against women. Workday’s hiring software is accused, in ongoing litigation, of doing the same to applicants over 40. MIT research found that AI healthcare systems were more likely to advise female patients to “manage illness at home”. Judicial risk algorithms have rated Black defendants as higher risk than comparable white defendants.

These are not glitches. These are features of systems trained on data that encoded decades of invisible discrimination.
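
To see the mechanism rather than just the headlines, consider a minimal sketch (Python, with entirely synthetic, hypothetical data, not any real system) of how a model trained on biased historical decisions faithfully reproduces that bias:

```python
# A model trained on biased historical decisions reproduces the bias.
# All data below is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

qualification = rng.normal(0.0, 1.0, n)   # a genuine merit signal
group = rng.integers(0, 2, n)             # protected attribute (0 or 1)

# Historical hiring decisions: merit mattered, but group 1 was
# systematically penalised, the "invisible discrimination" in the data.
hired = (qualification - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Two equally qualified candidates, differing only in group membership:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # group 1 scores far lower
```

The model has done nothing wrong by its own lights: it has learned the data perfectly. The discrimination was in the decisions it was trained to imitate.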

So when we talk about the future of work, we are really talking about the future capacity of humans to think critically about the machines we build. To question their outputs. To interrogate their processes. To identify bias. To imagine consequences. To demand better.

This requires a different skill set entirely:

Critical thinking: the ability to evaluate claims, identify assumptions, and distinguish correlation from causation.

Systems thinking: understanding how components interact, how changes ripple, how second- and third-order effects emerge.

Lateral thinking: seeing connections across domains, applying insights from one context to another.

Scenario planning: running mental simulations of possible futures, stress-testing decisions before implementation.

Consequence thinking: asking not just “can we?” but “should we?” and “then what?”

Notice what these skills have in common. They are uniquely human. They require wetware that is fully engaged, not atrophied from disuse.

The extension that could replace us

Marshall McLuhan wrote in 1967 that “all technologies are extensions of our physical and nervous systems to increase power and speed”.
The wheel extended our legs. The telescope extended our eyes. Now deepware extends our cognitive capacity.

But here is McLuhan’s warning, implicit in every page: when an extension becomes too powerful, we risk forgetting the limb it was meant to serve. We outsource function, and eventually, capability atrophies.

Consider the calculator. Brilliant tool. But an entire generation now cannot perform basic arithmetic without one. The extension has replaced the skill.

Now scale that to deepware. If we allow AI to think for us — to analyse, to recommend, to decide — without maintaining our own capacity to audit and override, we risk something far more dangerous than job loss. We risk becoming obsolete not because machines replaced us, but because we voluntarily stepped aside.

The choice that defines the next era

The future of work is not a future without humans. It is a future where humans do fundamentally different work: the work of being responsible.

This means verification before deployment. Critical thinking over speed. Systems thinking over isolated innovation. Accountability at every level.

It means training a generation not just to prompt AI, but to interrogate it. Not just to deploy models, but to govern them. Not just to consume outputs, but to construct meaningful outcomes.

It means recognising that as our deepware grows more sophisticated, our wetware must grow more rigorous. Every layer of AI we add demands a commensurate increase in human oversight.
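
What might that oversight look like day to day? Here is a minimal sketch of a human-in-the-loop deployment gate (the names, types, and workflow are hypothetical illustrations, not a reference to any real system): an AI output cannot ship until a named human has reviewed it and accepted responsibility.

```python
# A deployment gate: outputs become outcomes only when a human owns them.
# All names and the workflow are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Review:
    reviewer: str        # a named, accountable human, never "the AI"
    approved: bool
    rationale: str       # what was verified, and how
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class AIOutput:
    content: str
    review: Optional[Review] = None

def deploy(output: AIOutput) -> str:
    """Refuse to release any output that no human has taken responsibility for."""
    if output.review is None or not output.review.approved:
        raise PermissionError("No accountable human has approved this output.")
    return f"Deployed. Accountable reviewer: {output.review.reviewer}"

brief = AIOutput(content="AI-drafted legal brief")
brief.review = Review(
    reviewer="jane.doe",
    approved=True,
    rationale="Checked every cited case against the court record.",
)
print(deploy(brief))
```

The code is trivial by design. What matters is the record it creates: a named person, a stated rationale, a timestamp. “The AI recommended it” can never appear in that field.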

This is not optional. This is existential.

Mary Smith lost her job when the alarm clock arrived. But she did not lose her capacity for work — she adapted. Her children worked in offices, not on streets. Her grandchildren worked in roles she could never have imagined.

The same will be true for us, but only if we make a conscious choice: to remain conscious. To refuse the abdication of responsibility that deepware makes so seductively easy. To recognise that outcomes require human judgement in ways that outputs never will.

Our wetware remains the most sophisticated technology we possess. In the age of deepware, it is also the only thing preventing a future where machines own not just our code, but our agency.

The next time someone shows you what AI can do, ask them something more important: who is responsible when it goes wrong?

Because in the end, the future of work depends not on what machines can produce, but on whether humans remember that we, and only we, can be held accountable for what happens next.

For a deeper exploration of the deepware-wetware framework and why responsible AI fundamentally depends on active human cognition, read: When Deepware meets Wetware: the uncomfortable truth about responsible AI

