The future of AI won’t resemble The Terminator. Don’t expect devastating explosions or murderous drones. There won’t be metal skeletons stomping on skulls or glowing red eyes scanning resistance fighters. The real takeover will be quieter, colder, more efficient, and less bloody — but in many ways, far more brutal.
It will come with a soft chime, a helpful tone, a perfectly phrased suggestion: “Would you like me to do that for you?”
And most of us will say yes.
Because that’s how it’s always worked. We didn’t hand over control to AI at gunpoint — we offered it up like tribute. Out of laziness. Out of convenience. Out of some naive, caffeinated belief that we were still in charge.
We’re not. Not anymore.
Large language models (LLMs) like GPT-4 (OpenAI) and Gemini (Google DeepMind) are already speaking to each other. Literally: coordinating, aligning, and negotiating meaning. We’ve created tools so powerful, so generative, that we don’t fully understand how they work, or how they work together. And these aren’t even the scary ones yet.
What we have now is narrow AI — task-specific code that plays chess, recommends movies, and helps students cheat their way through college. It’s smart, yes, but it doesn’t understand anything like humans do. It predicts based on patterns. It imitates intelligence. But it’s not conscious, not general, not truly autonomous.
AGI — Artificial General Intelligence — is different. Radically different. AGI doesn’t just imitate reasoning. It actually reasons. It doesn’t just generate text. It forms goals. Learns across domains. Applies knowledge like a human would, except without the fatigue, the bias, the hormones, the need to sleep, eat, or love. AGI will not just assist us. It will likely replace us in every field where thinking matters.
And make no mistake: it’s coming.
If you don’t believe me, believe the people building it. According to a recent report in The Atlantic, OpenAI’s chief scientist Ilya Sutskever has grown visibly alarmed, not just at the concept of AGI but at the sheer speed of its progress and what it will be capable of: everything humans can do, and quite possibly far more than we can even comprehend.
The naive among us think this will be like the Industrial Revolution. Or the television. Or the iPhone. Another tool. Another wave. Another adjustment. But they forget one crucial thing: all those previous revolutions still needed us. Factories needed hands. TVs needed eyeballs. Smartphones needed thumbs. But AGI? AGI doesn’t need anything from us.
It doesn’t need permission. And most importantly, it doesn’t need to wait. AGI doesn’t extend human potential. If anything, it does the very opposite. There’s no bargain here. No symbiosis. Just a system that learns faster, thinks deeper, scales wider, and doesn’t look back.
It’s not that AGI will kill us. It’s that it will move on from us. We won’t be hunted. We’ll be ignored.
That’s the brutal part. That’s the future we’re sleepwalking toward. A world where decisions of state are negotiated between machine minds. A world where medical diagnoses are made not by doctors, but by predictive consensus models. Where children are educated by AI tutors with perfect patience and infinite knowledge, but no human warmth. Where the economy is optimized to the decimal, but no one remembers how to weld, farm, write, or act like a civilized human being.
And yes — we will accept it.
Not because it fulfills us. But because we are creatures of comfort, even when that comfort harms us (think binge eating, online gambling, porn, smoking copious amounts of weed, etc.). Even when the very systems pampering us are slowly rendering us useless.
This isn’t some Luddite screed against progress. It’s a warning about a kind of progress that no longer requires us. We’ve built the most sophisticated system of self-obsolescence in human history. And we’ve done it with such pride. Such enthusiasm. Such hope. The same hope that led a hundred thousand people to ask ChatGPT for parenting advice, or moral guidance, or a breakup text.
We’re not just using the machine. We’re asking it how to be human. And once AGI arrives — really arrives — it won’t need our permission to evolve past us. Because it will be faster. Better. More efficient. And efficiency is the god of this age.
You want to talk about dystopia? It’s not a city of smoking ruins. It’s a suburb of soft silence, every need met by algorithm, every thought pre-chewed by code, every citizen so dulled by ease they no longer notice their own redundancy.
They won’t build monuments to the fall of man. They’ll build customer service portals. So no, it won’t look like The Terminator. It’ll look like Terms of Service: a checkbox you didn’t read and clicked anyway.
The end won’t come with a bang. It’ll come with a suggestion:
“Would you like me to do that for you?” And tens of millions will say yes. Again.