
Something shifted last week inside a corporate compound in Menlo Park. Meta has begun building a 3D, photorealistic AI persona of Mark Zuckerberg, trained on his voice, mannerisms, public statements, and strategic thinking. The intent is to give employees direct, on-demand access to their CEO. Or rather, to a convincing simulacrum of him. Simultaneously, a separate CEO agent handles Zuckerberg’s own information retrieval, collapsing the management layers that ordinarily stand between a leader and raw data. Two bots. One wearing a founder’s face, the other running on his behalf. Welcome to the new org chart.
The efficiency argument is seductive. Meta employs roughly 75,000 people. Meaningful guidance from a single human being at that scale is structurally impossible. An AI persona calibrated to Zuckerberg’s reasoning could, in theory, give every staff member the sensation of a direct line to strategic intent. Flatter hierarchies, faster decisions, higher throughput. As Zuckerberg said on a recent earnings call, “We’re elevating individual contributors and flattening teams.” The bot is the infrastructure for that ambition. Employees already use “My Claw,” a personalised AI agent that accesses files and communicates with colleagues autonomously, and “Second Brain,” a document-retrieval system. The AI Zuckerberg is the apex node of this emerging human-machine network: the highest-level manager in a system where most managers are algorithms. Let’s pause there…
From the delegation of thought to the transfer of emotions?
Cognitive offloading, the delegation of memory, calculation, and decision support to machines, has been studied, debated, and broadly accepted as an increasingly common feature of modern intellectual life. GPS, calendars, search engines: We extend our minds beyond biological limits and accept the trade-offs. The risks are documented: gradual erosion of navigational sense, weakened memory consolidation, a narrowed tolerance for ambiguity. Sadly familiar territory.
Relationship offloading is newer and stranger, and it carries a different order of risk. When we delegate relational presence (the authority, warmth, and contextual attunement of a human being) to a machine trained on that person’s outputs, we are doing something cognitive offloading never quite reached: We are outsourcing the human encounter itself. Research shows that as AI becomes more personalised and agentic, users form genuine socio-affective bonds that shape behaviour, well-being, and autonomy in ways that passive parasocial relationships with celebrities never managed, because AI talks back. (This bidirectional dynamic is one reason the analogy of AI as just another technology, like the calculator, fails.) The bond feels mutual. But let’s face it: it is not.
The underlying mechanics
An employee messages the Zuckerberg bot. Receives feedback calibrated to his known reasoning style. Feels seen, feels aligned. The interaction is frictionless. It’s also weightless — no real accountability, no genuine judgment call with actual stakes on the other side. Emotional dependencies on AI companions can mirror the dynamics of unhealthy human relationships, contributing to anxiety and eroded capacity for social repair. Scale this across an organisation of 75,000. Then across the internet, where Meta’s AI Studio already allows creators to deploy AI versions of themselves for their audiences — a programme that eventually had to block teenagers after sexually explicit personas proliferated. The architecture of relationship offloading is already live.
Four risks crystallise:
- Agency decay: the gradual transfer of judgment and direction to systems that simulate the human but carry none of its uncertainty or genuine stakes.
- Bond erosion: the thinning of authentic connection as simulated warmth substitutes for the friction that actually builds trust.
- Power asymmetry: one person gets an infinitely scalable relational presence while millions of interlocutors receive a reflection of his thinking with no reciprocal window into theirs.
- Consent ambiguity: most people interacting with an AI persona will not, in any deep sense, know what they are engaging with or what data shapes the performance. Psychologists have already noted how AI-driven parasocial bonds outpace our ability to regulate them — especially among adolescents, for whom the line between simulated and real connection is genuinely unstable.
The Antidote: Double Literacy for Hybrid Intelligence
A central countermove is double literacy. Its first strand is human literacy: the cultivation and understanding of natural human intelligence across its emotional, aspirational, cognitive, and embodied dimensions. Human literacy gives us the internal reference point to notice when a relationship feels off, when a response is too smooth, when our own critical faculty has gone quiet. Its second strand is algorithmic literacy: critical, informed engagement with how AI systems are built, trained, and deployed. Algorithmic literacy gives us the tools to interrogate what a bot is actually doing when it speaks in someone’s voice. Together, the two strands form double literacy: the irreducible foundation of hybrid intelligence.
Hybrid intelligence arises from the complementarity of natural intelligence and artificial intelligence, with human agency as the non-negotiable axis. It is curated, maintained, and defended through double literacy. Technology may be able to simulate human presence, yet the operative word remains “simulate.” However tempting the lure of delegating cumbersome conversations or lengthy discussions, we should remain radically honest with ourselves: We owe each other more than simulation could ever carry.
Investing in double literacy, from preschools to workplaces, backed by policy and programmatic funding, is one structural answer to the structural risk. Without it, the default trajectory runs from conscious human-machine partnerships toward artificial substitution, one smooth interaction at a time.
The A-Frame: Four practical takeaways
Awareness: Notice which of your interactions are already mediated by AI personas — bots built on real people, customer service agents voiced with warmth that belongs to no one. Naming the phenomenon is the first act of resistance.
Appreciation: Recognise what you value in human interaction: unpredictability, genuine stakes, the possibility of being changed by the encounter. These qualities cannot be replicated. Cherish them before their absence becomes routine.
Acceptance: AI personas are here. They will multiply. Accepting this reality clearly, rather than dismissing it or catastrophising, is the only stance that leaves room for intelligent navigation. Clarity precedes choice.
Accountability: Demand transparency from platforms deploying AI personas: who designed them, on whose data, toward what goals? Build double literacy into workplaces, schools, and policy. The capability to simulate human presence is a design choice, and design choices carry responsibility.

