St. Augustine and AI’s false promise

An ancient diagnosis for a modern delusion.

Saint Augustine of Hippo with his restless heart pierced by the light of Truth, painting by Philippe de Champaigne, 17th century

The current narrative around artificial intelligence is built on a familiar promise of better predictions and outcomes. AI is positioned as a system that can reduce uncertainty and move us closer to an optimal order. Whether it is hiring algorithms, recommendation systems, risk models, or generative tools, the assumption is that a refined enough system will produce results that are not only effective but aligned with what is defined as good. The problem is that this assumes the standard of the good itself is stable, which it is not. Every system requires a prior decision about what counts as good, and that decision reflects a set of values rather than any access to a fundamental truth.

Saint Augustine of Hippo (354–430 CE) wrote amid the collapse of Roman institutions, a crisis he understood not as a failure of administration, but as a symptom of deeper disorder. While earlier philosophers pointed to external causes, Augustine located the source of decay within the human will. Drawing from the psychological insights in his Confessions, he argued that human desire is not inherently corrupt, but often misdirected. Because a society is defined by what its citizens collectively love, a misalignment in those loves (valuing earthly glory over enduring goods) renders any system unstable.

In City of God, Augustine develops this insight through two opposing orientations. The City of Man is built on self-love and the libido dominandi, the drive for mastery. Without a higher ordering principle, it cannot stabilize itself, relying on goods such as security or prestige that remain insufficient. The City of God, by contrast, is oriented toward the divine through ordered love. These are not physical places, but interwoven moral conditions present within every institution. Human order falls short not because it lacks capacity, but because it is shaped by what it ultimately values.

Applied to human systems, including those mediated by technology and AI, this creates a structural limit that no system can resolve. The standard of the good exists beyond what humans can fully define, yet every system must still operate using partial and competing definitions of it. The result is persistent instability. Humans are drawn toward the idea of perfect order, even as everything they build remains constrained by the orientation of their desires.

Artificial intelligence does not escape this constraint. In fact, it intensifies it. AI systems function through optimization, but optimization is never neutral because it requires a prior definition of what counts as good. The system can only pursue the version of order it has been given. If engagement is treated as the good, the system will optimize attention. If efficiency is treated as the good, it will sacrifice what slows it down. The system can function correctly while still reproducing a distorted idea of what should matter.
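This point can be made concrete with a minimal sketch. The data, metric names, and ranking function below are all hypothetical; the point is only that the same optimization routine yields a different "optimal" order depending on which value the objective encodes.

```python
# Hypothetical content items with two competing value signals.
items = [
    {"title": "outrage piece", "engagement": 0.9, "accuracy": 0.2},
    {"title": "careful analysis", "engagement": 0.3, "accuracy": 0.95},
]

def rank(items, objective):
    # The ranking machinery is identical in both cases;
    # only the objective differs, and the objective is a value choice.
    return sorted(items, key=objective, reverse=True)

by_engagement = rank(items, lambda x: x["engagement"])
by_accuracy = rank(items, lambda x: x["accuracy"])

# Each ranking is "correct" relative to its own definition of the good.
print(by_engagement[0]["title"])  # the engagement objective puts the outrage piece first
print(by_accuracy[0]["title"])    # the accuracy objective puts the careful analysis first
```

Nothing in the sorting step can tell you which objective is the right one; that decision sits entirely outside the system.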

That gap is not accidental. For Augustine, the disorder of the City of Man is rooted in disordered love — its citizens do not lack intelligence or effort, but their loves are aimed at the wrong things, in the wrong order. This is where the Augustinian lens becomes more precise. Augustine’s concept of ordo amoris, or the order of love, clarifies that order is not simply structure or efficiency, but a hierarchy of what is valued and what is treated as an end.

To speak of order is to ask what receives priority and what the system ultimately serves. The issue is not simply that AI reflects human priorities. It is that it reorients how those priorities are reinforced. Once a value becomes a metric, it gains the appearance of objectivity.

Augustine frames this problem through the distinction between uti and frui in On Christian Doctrine. Uti refers to what is meant to be used as a means, while frui refers to what is meant to be enjoyed as an end. The failure is not desire itself, but its misordering. A tool is not problematic because it is useful. It becomes dangerous when its usefulness is mistaken for an end.

AI systems accelerate this misordering by clothing utility in the guise of finality. Tools designed for use begin to take on the role of authority when we treat their outputs as conclusions rather than inputs into judgment. This shift happens as the internal logic of the model begins to set the boundaries for our own decisions.

If a hiring model prioritizes specific keywords then our understanding of a qualified candidate narrows to meet those keywords. If a recommendation system highlights certain topics then our sense of what is relevant is shaped by that selection. The tool stops being a window to help us see the world and instead becomes the frame that decides what is worth seeing.

The pursuit of artificial general intelligence (AGI) reflects the same assumption at a larger scale, and it exposes the fantasy beneath the broader AI narrative: the belief that a sufficiently advanced system could resolve the instability of human judgment. But more machine intelligence does not settle disagreement about what should be valued. It amplifies whatever definition of value has already been built into the system.

This dynamic is not abstract. It appears in the way AI systems operationalize human judgment. As reporting in MIT Technology Review has shown in its coverage of algorithmic bias, these systems do not eliminate bias but formalize it, embedding existing assumptions into scalable systems that appear objective.

What looks like improved accuracy is often the repetition of prior decisions at scale rather than the correction of them. In Augustinian terms, AI does not create disordered love, understood as desire aimed at the wrong objects. It stabilizes it, legitimizes it, and makes it more difficult to recognize.

This is also true of innovation, often framed as movement toward improvement, as if what counts as improvement were not itself defined by a prior conception of the good. AI accelerates this dynamic by making priorities measurable, scalable, and easier to justify. Metrics replace judgment and outputs replace deliberation. Systems feel more precise, but that precision rests on normative assumptions that remain unexamined.

A secular reader might resist this framing. If the argument depends on a transcendent standard of the good, then without theological commitment, does it reduce to the modest claim that values are contested? That objection has real force, but it misses the structural point Augustine is making. The argument does not require accepting his theology to recognize the underlying problem.

What Herbert Simon, an economist and cognitive scientist, identified as bounded rationality, the simple fact that human cognition is limited and always operates within inherited assumptions, produces the same structural gap Augustine describes without requiring any theological premise.

Every system of optimization encodes a hierarchy of value, and that hierarchy cannot be justified by the system itself. Whether one frames that limit as a consequence of original sin or as a consequence of cognitive finitude, the practical implication is identical: no system can self-authorize the values it pursues. The theological framing gives the argument its depth and its urgency, but the diagnostic insight survives translation into secular terms.

Augustine did not reach this conclusion dogmatically. He engaged deeply with the skeptical arguments of the Academics, which questioned whether certainty was possible at all, took them seriously, and rejected them as self-defeating. His position emerges from that confrontation, not from insulation against it.

What that confrontation produced was not a rejection of systems but an account of their proper role. The error is not building tools but mistaking them for ends. Earthly goods are real but limited. They are meant to be used, not treated as substitutes for a higher order they cannot reach. Applied to AI, the ethical task is to preserve the distinction between what a tool can help us do and what only judgment can determine.

The first priority is the restoration of deliberation. AI outputs should inform reasoning, not replace it. This requires organizational structures where human judgment remains the authoritative step, not a rubber stamp at the end of an automated pipeline. When a hiring model scores a candidate, when a risk system flags a loan application, when a content tool drafts a recommendation — the output is an input, not a conclusion.

The design of AI tools should keep judgment visible and keep the humans exercising it accountable. Where that accountability is dissolved, what is lost is not just accuracy but the moral seriousness that judgment requires.

The second is the visibility of values. Every optimization encodes a definition of the good that should be surfaced and contested. This means treating the value choices embedded in AI systems as political and institutional decisions, not technical ones. Researchers working on algorithmic accountability have argued that the framing of AI as a neutral optimizer is itself a political choice — one that removes those choices from democratic scrutiny.

When a system defines what a qualified candidate looks like, or what content deserves amplification, or what neighborhoods carry what risk, those definitions require accountability. They should be named, examined, and open to revision. Not because the systems are necessarily wrong, but because the authority to decide what counts as right cannot be quietly delegated to a model.

The third is institutional humility. Stability should not be confused with legitimacy. A system that produces consistent results may simply be repeating the same distorted order with greater confidence. Institutions that rely heavily on AI outputs need regular mechanisms for questioning whether those outputs reflect the right priorities, not just whether they reflect the stated ones. Consistency is not evidence of correctness. In Augustinian terms, a well-ordered machine can still be oriented toward a disordered end. The efficiency of the system is never an argument for the adequacy of its goals.

AI does not move us closer to an ideal order because it cannot resolve the limits of human judgment or stabilize what we take to be good. It makes the City of Man more efficient, more scalable, and more convincing. That is not a reason to abandon the technology. It is a reason to resist treating it as a substitute for human judgment. The mistake is not building the system. It is forgetting that a tool meant for use cannot determine what is worth pursuing.



St. Augustine and AI’s false promise was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
