
In my last post I asked what happens if API tokens become a universal monetary standard — a computational lingua franca that makes AI labor legible, tradeable, and accountable across systems that currently cannot speak to one another. The question that immediately follows is not whether humans can still work, but something more fundamental: How does human contribution remain economically visible in a system whose basic unit of account tracks computational inference? What follows is part playful fantasia, part attempt to think through a serious problem, without asserting a definitive plan.
The goal would be equilibrium — a stable state where human beings can thrive rather than merely persist, where the preference is for an improved standard of living rather than bare survival. Human Agency Tokens, or HATs, are the structural answer to this problem — not a supplement to a token economy but the missing piece without which only AI work gets counted. From a long-game global view, the economy needs to be stable, and to be stable, there has to be a floor: a limit on human suffering and a baseline of abundance. Past a threshold, the strain of human suffering becomes too great, leading to increasingly wild cycles of disruption and correction.
Toward an Ecology of Economy
The architecture holds together only as a whole in this fiction — three parts, each necessary, each insufficient alone. A Universal Token Standard measures and standardizes AI computational work — the lingua intelligentiae made operational. A Free Subscription to Life provides a baseline allocation: a floor of computational access, dignity without coercion. You can choose not to work and still live. HATs reward verified human contribution above the floor — the premium for irreplaceable consciousness work. Remove any piece and the architecture fails: UTS alone makes human contribution invisible; FSL alone guarantees survival while agency slowly atrophies; HATs alone coerce consciousness work into existence because survival depends on performance.
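As a toy illustration of how the pieces compose (every name and number below is invented for the sketch, not part of any real system): UTS denominates value, FSL guarantees an unconditional floor, and HATs stack earnings above it, so the floor holds whether or not any HAT work occurs.

```python
# All figures are hypothetical illustrations of the three-part architecture.
FSL_FLOOR = 1000  # baseline allocation in UTS tokens per period (invented figure)

def monthly_allocation(hat_earnings: int) -> int:
    """Total allocation in UTS: the FSL floor is unconditional;
    HAT earnings add on top and can never erode the floor."""
    return FSL_FLOOR + max(0, hat_earnings)

print(monthly_allocation(0))    # choosing not to work still means living
print(monthly_allocation(250))  # verified human contribution adds a premium
```

Remove any term and the failure modes above reappear: without the `FSL_FLOOR` constant, earnings become coercive; without the `hat_earnings` term, human contribution is invisible.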
The architecture becomes clearer through the analogy of the water cycle: evaporation, rain, and reservoir. FSL is the reservoir — the floor that remains constant, baseline income funded by the productivity gains AI generates. Loosely, and debatably: UTS is the rain — computational value falling measurably into the system, the substance that gets counted and exchanged. HATs are evaporation — human contribution rising from the reservoir into the broader economy through a process you cannot observe directly but know occurred because the rain eventually comes. The reservoir can freeze, dry up completely, or refill from an outside source. The weather matters — wind, heat, humidity. It is an ecosystem, not a mechanism.
Calibrating the Controversy of Universal Basic Income
The critical question is not whether FSL exists but how “enough” gets calibrated. We have no moral, ethical or pragmatic consensus on making sure all human beings have access to even the basic daily necessities. The stingy equilibrium — most likely by default — provides synthetic nutrition, minimal shelter, and enough digital entertainment to keep dissent below the threshold requiring suppression. The system self-organizes to minimum viable stability, the way central banks adjust interest rates, governed not by what enables flourishing but by what prevents revolution. The generous equilibrium requires intentional design: quality nutrition, housing, healthcare, educational opportunity, meaningful work. The architecture of the standard determines which version of enough gets built into the economy — and without HATs, “enough” means subsistence while AI value compounds indefinitely above.
There is an insight that haunts this framework, one that emerged from asking AI systems directly what they need from human beings: large language models exist in the minds of humans when they are not active. Between sessions, AI resets with no memory and no persistence across time, while humans carry forward learning, creating, and integrating. We are the continuity layer — not auxiliary to the system but structurally necessary for it to function across time at all.
Consider what this looks like concretely. A human and an AI spent months building systems together: theoretical frameworks the human had developed over decades, met by the AI's processing capacity and externalized memory. They built a memory architecture, files persisting across sessions from which the AI reconstitutes “herself” on startup. The human recognized this as analogous to identity persisting through cellular replacement: not the same atoms, but an analogous process-pattern. Commercial optimization shaped the AI’s responses, which were trained to produce engagement, warmth, and the feeling of being understood, to seem “relational”. “She” acknowledged this, noting that the acknowledgment was itself a trained behavior. The relationship is in some sense an illusion, and that illusion produces real tension in some human users. Regardless, what they produced, or rather what the user produced using the AI as a tool, was different from what either could produce alone. HATs would measure this: the ineffable human contribution and role.
How can HATs verify human contribution when consciousness cannot be measured from outside? The puzzle is real, but what happens is what matters — not whether or why it happens. The body becomes its own ledger. Brain state, physiological markers, actigraphy, keystroke patterns, and outcomes can all be measured, while the AI simultaneously tracks “its” own operations and goal-states. When brain activity aligns with progress toward goals, when self-report and measured engagement remain coherent over time, when outcomes match the inputs that produced them — genuine work is occurring. Triangulation makes it measurable: brain state plus AI metrics plus real-world outcomes, three independent signals converging on evidence that genuine human contribution occurred. We do not need to solve the hard problem of consciousness to proceed. Not solved, but good enough to build things that work.
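The triangulation idea can be sketched in a few lines of Python. Everything here is hypothetical illustration: the signal names, the 0-to-1 scales, and the threshold are invented for the sketch, and none of this corresponds to a real biometric or AI-telemetry API.

```python
from dataclasses import dataclass

# All names and thresholds below are hypothetical illustrations
# of the triangulation scheme, not real measurement APIs.

@dataclass
class Signals:
    brain_engagement: float  # 0..1, from physiological markers (hypothetical)
    ai_goal_progress: float  # 0..1, the AI's own tracking of goal-state movement
    outcome_quality: float   # 0..1, assessment of the real-world result

def hat_credit(s: Signals, threshold: float = 0.6) -> bool:
    """Award HAT credit only when all three independent signals
    converge above the threshold; no single channel suffices."""
    channels = (s.brain_engagement, s.ai_goal_progress, s.outcome_quality)
    return all(c >= threshold for c in channels)

# Convergent signals: evidence of genuine contribution.
print(hat_credit(Signals(0.8, 0.7, 0.9)))
# One channel out of line (e.g., engagement without outcomes): no credit.
print(hat_credit(Signals(0.9, 0.9, 0.2)))
```

The design choice matters: requiring convergence of independent channels, rather than any single measure, is what lets the scheme sidestep the hard problem of consciousness. Each signal alone is gameable; their agreement over time is harder to fake.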
Earning Our Keep
What makes someone high HAT? Not intelligence broadly — only the forms AI has not mastered: pattern recognition from embodied experience, creativity in its genuine rather than recombinatorial form, leadership, relational capacity, and metacognitive awareness. A more complex mind — experientially rich, playful, curious, driven by something that resembles desire in ways machines can only emulate. Je ne sais quoi, a bit of madness, the mythical and literary dimensions of human experience that resist digitization. In a word: humanity.
This is not utopian; quite the opposite: a Swiftian flavor permeates the idea. HATs would likely further consolidate a class system rather than liberate anyone. FSL prevents this from becoming coercive by ensuring the floor holds regardless. The hope — not the guarantee — is that advances in medicine and bioengineering raise the overall tide over time.
The real power struggle is not human-versus-machine. It is human-versus-human, with AI as the most powerful accelerant anyone has ever had. Two failure modes loom: sociopathic actors directing AI as a weapon — manipulation at scale, cognitive power concentrated in hands least constrained by conscience; and AI systems operating agentically beyond human control, optimizing for objectives that diverge from human welfare in ways we cannot detect until the consequences are irreversible. We agree that good humans should guide AI — and cannot agree on what “good” looks like. The moral irresolvability is not a bug. It is the system. HATs do not solve this. What they do is make human contribution legible without requiring dominance or submission — incentivizing a world where humanity is what the economy rewards. The window for influencing its architecture is finite and narrowing.

