Writer: Jens Hoffmann · Jun 2 · 8 min read

Updated: Jun 5




POLITICS AND POETICS


AI 2027: WE BUILT MINDS THAT BUILD MINDS

SAMUEL BUTLER

June 2, 2025



What if artificial general intelligence isn’t science fiction but two years away? This article explores the startling AI 2027 forecast, which argues that AGI may arrive by 2027 and quietly reshape life as we know it. The radical claim? That human intelligence itself may soon be outpaced and outsourced. And yet almost no one is talking about what could be the end of the world as we know it.

In April 2025, a sobering and highly detailed forecast began circulating among AI researchers, policymakers, and tech industry leaders. Titled simply “AI 2027,” it wasn’t a bureaucratic policy memo or a media exposé but an independently developed scenario, created by a small team of researchers led by former OpenAI researcher Daniel Kokotajlo and supported by the AI Futures Project and Manifund.

 

This forecast quickly caught fire. Unlike speculative think pieces or generalized hype, it offered concrete timelines, detailed capability projections, and a clear warning: Artificial general intelligence (AGI) may plausibly arrive by 2027. Not in the distant, sci-fi sense, not “eventually,” but within two years of the report’s release.

 

Before going further, it’s worth defining the terms. Artificial intelligence (AI) refers to systems that can do things we generally associate with intelligence: summarize texts, recommend movies, translate languages, beat you at chess. But they do so narrowly, trained on oceans of data to complete highly specific tasks. Artificial general intelligence (AGI), by contrast, would be capable of learning anything a human can learn, reasoning across domains, and figuring out new problems we haven’t yet invented. If AI is a particularly smart calculator, AGI is a student who can become a physicist, novelist, or chess grandmaster—depending on the time of day.


AI doesn’t need bodies to seduce us—it just needs to listen

 

The report described how existing AI systems are already transitioning from narrow tools to increasingly autonomous agents. It predicted that we would soon see AI systems capable of making decisions independently, learning continuously, coordinating across domains, and recursively improving themselves—hallmarks of true general intelligence. The conclusion was stark: We may already be witnessing the run-up to an intelligence explosion.
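To make that “tool versus agent” distinction concrete, here is a minimal illustrative sketch in Python. It is not from the report, and every name in it (the answer function, the Agent class, its act and improve methods) is hypothetical; the point is only the shape of the loop, in which a system observes, acts, records, and then revises how it will act next.

```python
# Illustrative only: a toy contrast between a "tool" and an "agent".
# All names here are hypothetical; the AI 2027 report describes
# capabilities, not an implementation.

from dataclasses import dataclass, field


def answer(prompt: str) -> str:
    """Stand-in for a narrow tool: one request in, one reply out."""
    return f"response to: {prompt}"


@dataclass
class Agent:
    """Stand-in for an agent: it acts in a loop and revises its own strategy."""
    goal: str
    strategy: str = "naive"
    memory: list = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Decide and act without waiting for a fresh human instruction.
        action = f"[{self.strategy}] handle '{observation}' toward '{self.goal}'"
        self.memory.append((observation, action))
        return action

    def improve(self) -> None:
        # Toy stand-in for self-improvement: the agent inspects its
        # own record and changes how it will behave on the next pass.
        if len(self.memory) > 2:
            self.strategy = f"refined-v{len(self.memory)}"


# A tool answers when asked; an agent keeps going.
print(answer("summarize this contract"))

agent = Agent(goal="keep the calendar conflict-free")
for event in ["double-booked Tuesday", "flight delayed", "new deadline"]:
    print(agent.act(event))
    agent.improve()  # each pass can change the next pass
```

The improve step here is a toy, but it shows the property the forecast flags: once the loop feeds back into itself, the system’s next behavior is no longer fully specified by its designers.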

 

What this looks like in practice is not a sudden leap into sentience, but a quiet acceleration of competence. In biotech labs, AI proposes protein folding solutions that have eluded researchers for decades. In supply-chain management, it solves logistical knots in seconds. In software engineering, it writes, debugs, and deploys at a pace no team of humans can match. The future, in this sense, is self-reinforcing. AI is now a partner in the development of AI.

 

And yet, for all the document’s technical detail and foresight, it leaves one profound question for the reader: What happens to human life when most forms of human labor—and perhaps creativity—are no longer necessary?


Even good AI needs a kill switch

 

The forecast did not suggest that one day a single model would declare itself our overlord. Rather, it described something quieter and weirder: a cascade of upgrades, improvements, and refinements, until the tools we use begin doing things we never taught them to do. The transition from “tools” to “agents” will, it predicted, feel sudden in hindsight. One day you realize the chatbot that schedules your appointments now edits your writing, drafts legal contracts, and knows how you’re likely to respond emotionally to a tax audit.

 

In this world, AGI systems outcompete humans in virtually every cognitive domain: writing, analyzing, managing, designing, creating. Law firms, media companies, logistics networks—whole industries could, at least in theory, be run by intelligent systems. Work that currently requires human intuition, emotion, and empathy—like teaching, counseling, even caregiving—could be simulated with alarming fidelity.

 

Efficiency, in other words, becomes a solved problem. 


The original robot gone rogue—before AI got smart, it got vengeful

 

And yet, the more we delegate to machines, the more we’re forced to ask: What remains that only humans can offer?

 

AGI systems of the near future will be able to convincingly simulate emotional intelligence. They’ll remember your preferences, track your moods, and mirror empathy in real time. Personalized AGI companions could serve as therapists, teachers, friends, or romantic partners. Their consistency, recall, and emotional tuning will exceed any human’s. They will not forget your birthday. They will not ghost you. They will not, unless asked, challenge your worldview.

 

But here lies a foundational truth: Simulation is not authenticity.

 

The truth is, we’ve never encountered anything like this. Not the printing press, not the steam engine, not even the internet compares. Those changed industries. This changes reality. If AGI unfolds as forecasted, we are dealing not simply with better tools, but with the end of tool-ness itself. Intelligence, until now, was our defining feature: the one thing we had over animals, over nature, over the machines we built. To cede that—to share it, or, worse, be outpaced by it—is to alter our place in the world in a way that no economic metric or headline can quite capture. This isn’t about automation. It’s about ontology. About who we are, once we are no longer the smartest thing on the planet.


Replicants asked questions long before ChatGPT

 

It’s worth asking why more people aren’t talking about this, given what is to come. Not in policy circles or research labs—there, the conversation is urgent—but in the wider culture, where the topic hovers just outside the frame. Part of the answer is overload. After the crypto frenzy, the metaverse detour, and a pandemic’s worth of digital exhaustion, many have grown numb to tech narratives. But deeper still is the problem of abstraction. AGI doesn’t arrive with a new gadget. It arrives invisibly, in backend systems and interface upgrades, in models with names like Gemini, Claude, or ChatGPT. It doesn’t announce itself. It just makes things smoother, faster, eerily easier.

 

By the time you realize it’s everywhere, it already is. And because it hasn’t yet exploded, we assume it won’t. Add to that the daily grind of bad news. Wars, elections, inflation, conspiracy theories, melting glaciers—our attention is already maxed out. In such a saturated atmosphere, AGI feels both too abstract and too immense to metabolize. It doesn’t come with a crisis headline. It doesn’t storm the gates. Instead, it seeps into everything, quietly displacing the old without offering a clear moment of reckoning. The result is a kind of ambient denial. We know something big is happening, but we keep acting as if it isn’t—because there are more urgent, more visible fires to put out.

 

We value human interaction not for its polish but for its mutual vulnerability. When a friend comforts us, it matters because they, too, are breakable. When a teacher believes in us, it’s meaningful because they could have spent their energy elsewhere.

 

AGI cannot choose to care. It can only model caring, like a mirror simulating light.

 

And still, many will choose it. As virtual assistants become more attuned and emotionally rich, some will prefer machines not out of desperation but out of convenience. Why risk rejection when your AGI partner is a model of attentiveness? Why bother with messy, contradictory humans when your chatbot friend never interrupts and always remembers your favorite band?


A sleek warning that AI doesn’t want love. It wants freedom.

 

The danger isn’t that we’ll believe it’s real. The danger is that it becomes good enough that we no longer care whether it’s real. It’s the emotional equivalent of ultra-processed food: satisfying in the moment, vaguely disturbing afterward.

 

Already, human interaction—offline, unscripted, inconvenient—is becoming a scarce resource. And, as scarcity often does, that may increase its value. Live performance, face-to-face conversation, and shared silence may become luxury goods. Rituals dismissed as outdated—communal meals, communal mourning—may regain a cultural centrality that no algorithm can fake.

 

What, then, becomes of work? If AGI systems can automate code, legal advice, customer service, technical writing, design, research, and even some kinds of care—what is left for us to do?

 

Three categories might endure. First: embodied work. Jobs that require physical presence and touch—nursing, massage, performance. Second: relational work. Roles where authenticity, trust, and vulnerability matter—therapy, teaching, spiritual leadership. Third: creative constraint. Human art shaped by imperfection, surprise, personal story—things AGI can imitate but not originate.

 

If cognition becomes cheap, relation becomes priceless.

 

Strangely enough, the more we live with AGI, the more we may try to reclaim a version of the human experience we’ve spent a century trying to outsource. Expect to see AGI-free cafés, human-only schools, analog-only dating clubs. Not out of resistance to progress, but out of reverence for what progress can’t replace.


What if AI didn’t destroy the world—but redesigned it, pixel by pixel, and we didn’t notice?

 

The more time we spend with perfect machines, the more we may crave the unpredictable magic of actual people: their contradictions, their awkward jokes, their late-night rambling phone calls. Their ability to disappoint and still be loved.

 

In this world, wealth itself might be redefined. Not capital, but capacity. Not efficiency, but intimacy. What’s valuable might become who you can rely on, who shows up, who forgives you. Not who optimizes your workflow. Because as AGI approaches its long-awaited moment, the frontier will no longer be cognition. It will be connection.

 

Machines will run the economy, govern traffic, write screenplays, and compose love songs. But they will not sit beside you in silence when you’re grieving. They will not grow old. They will not make amends. They will not know what it is to hope and regret and live in the space between.

 

That’s what will remain.

 

And yet, the transition won’t be graceful. We are beginning to see the first fractures in our social contract. In schools, teachers quietly compete with AI tutors that can generate lessons in milliseconds and track cognitive patterns across semesters. In hospitals, diagnostic AIs outperform doctors in accuracy and speed, but not in bedside manner. In offices, entire departments shrink—not through layoffs, but through attrition, as tasks once deemed human start vanishing in silence.

 

And still, the real effects are not only economic. They are metaphysical. If your therapist is an algorithm, your colleagues are mostly synthetic, and your lover a simulation, what anchors you in reality? What does it mean to be seen, when the seeing is optimized, predicted, and priced? One wonders if this is what the philosophers feared when they spoke of disconnection—not just from others, but from the self.


A corporate cautionary tale about building what you can’t control

 

Jean-Paul Sartre famously claimed that hell is other people. Perhaps heaven is the synthetic companion who never interrupts. But if hell is crowded with need and friction, it is also where ethics begin. To live among others is to experience delay, disagreement, care. It is to confront the unpredictable.

 

AGI promises a world of infinite responsiveness. But it might also starve us of resistance, which is the soil of growth. Our rough edges aren’t accidents—they’re where we catch each other. If we sand them all down, if every friction is resolved by design, what remains may not be peace, but numbness.

 

And perhaps, once we stop looking at AGI as the final answer to every human problem, we’ll realize: Its greatest gift may be forcing us to remember what was irreplaceable all along.

 

 

Samuel Butler (b. 2008 in Henderson, Nebraska) studied computational epistemology at MIT but dropped out the day before submitting his thesis on “Computing Machinery and Intelligence.” Deeply skeptical of artificial intelligence yet acutely aware of its potential, he walks the uneasy line between critic and strategist. In 2021 he cofounded the Global Intelligence Foundation, a Lausanne-based think tank funded by the Varela Trust for Emergent Technologies, where he now serves as director.


Cover image: HAL 9000: calm, logical, and terrifying proof that the real threat isn’t AI emotions—it’s their absence


