Conversations about AI often circle around the idea of Artificial General Intelligence (AGI). People speculate about what it would mean for humanity or how soon we might reach it. But I’m not convinced we’ll ever create AGI as a separate autonomous agent, not because it’s impossible, but because I don’t think that’s where the technology is actually taking us. Instead, it seems to be evolving as something else entirely: tools that integrate more tightly into human capability rather than replacing it.
The real revolution isn’t happening in a research lab where someone is trying to build a digital mind. It’s happening right now. On my desk, in my pocket, in the hands of millions of people using AI to think better, work faster, and create things they couldn’t have imagined alone. We’re already in the age of human capability expansion.
This is augmented human intelligence. It’s a more plausible evolution of the technology and one that’s actually aligned with human needs.
Tools Exist to Make Us More Capable
Humans have always built tools to extend our natural abilities. We created writing to preserve knowledge across generations. We made compasses to navigate beyond familiar horizons. We developed the printing press to distribute ideas at scale. We invented calculators to handle computation faster than mental arithmetic allowed.
But notice what we didn’t do: we didn’t build these tools to replace ourselves. We built them to extend what we could do.
We don’t replace our legs with cars; we use cars to go farther while still deciding where to go. We don’t replace our brains with spreadsheets; we use spreadsheets to organize complexity we’d struggle with alone, then interpret what the patterns mean. The tool handles the mechanical work. The human provides direction, judgment, and meaning.
Every major leap forward in human capability has followed this pattern: augmentation, not replacement.
AI is simply the next tool in that lineage. It produces output that looks like thought, which makes it feel qualitatively different. But it’s still structured output generated from inputs and patterns. It still depends entirely on human direction to be useful. It has no goals of its own, no understanding of context, no ability to judge whether its output actually matters.
The moment we start building AI systems designed to operate without us, to pursue goals autonomously, to make decisions we’re not part of, we’ve fundamentally changed what we’re building. We’re no longer creating tools that make humans more capable. We’re creating replacements that make humans less necessary.
And if the purpose of technology is to improve human lives, then building systems that diminish human involvement is working against our own interests.
Where the Human Part Still Matters
I was working with Sora 2 recently, generating video from text prompts. What became clear quickly was that the quality of output correlated almost perfectly with the specificity of human intention going in.
Vague prompts such as “a person walking through a city” produced vague results: generic motion, flat composition, nothing memorable. But when I brought real creative direction, specific emotional framing, particular details about light and movement, and a clear sense of what the scene should feel like, the results improved dramatically.
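The contrast can be made concrete. The prompts below are hypothetical, not the ones from my actual sessions, and the scoring function is only a rough illustrative proxy for “creative direction,” but together they sketch what specificity looks like in practice:

```python
# Two hypothetical text-to-video prompts illustrating the specificity gap.
# Neither is from a real session; both are illustrative only.

vague_prompt = "a person walking through a city"

specific_prompt = (
    "A lone commuter walks through a rain-slicked downtown street at dusk; "
    "neon signs reflect in the puddles, the camera tracks low and slightly "
    "behind her, and the mood is quiet isolation rather than bustle."
)

def specificity_signals(prompt: str) -> dict:
    """Rough, illustrative proxies for creative direction in a prompt:
    length, plus whether it names lighting, camera movement, or mood."""
    text = prompt.lower()
    return {
        "word_count": len(text.split()),
        "mentions_light": any(w in text for w in ("dusk", "neon", "light")),
        "mentions_camera": "camera" in text,
        "mentions_mood": "mood" in text,
    }

print(specificity_signals(vague_prompt))
print(specificity_signals(specific_prompt))
```

No checklist like this captures taste, of course; the point is only that the specific prompt carries decisions (about light, framing, and feeling) that the vague one leaves to the model.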
The model wasn’t becoming creative. It was scaling my creativity. It could recombine visual patterns at speeds I never could, but the spark that made something worth watching still came from human judgment. From taste. Not just from having watched thousands of films, but from understanding the feelings they elicit and knowing what creates tension or beauty or strangeness because I’ve actually felt those things.
This is true across generative work. AI doesn’t originate meaning. It amplifies the meaning we bring to it. The stronger your creative intention, the more powerful the tool becomes. The weaker your direction, the more it flattens into algorithmic average.
And that tells us something important about the kind of future we should be building.
The Risk of Full Autonomy
Tools don’t make us weaker. They make us stronger. The calculator didn’t cause our mathematical thinking to atrophy; it freed us to tackle harder problems. The microscope didn’t diminish our curiosity; it let us explore deeper.
We thrive when we use tools to push further than we could alone.
But there’s a critical distinction between using a tool and handing over control entirely.
When we use AI as a tool, when we direct it, shape its output, apply judgment to what it produces, we’re still doing the work that matters. We’re still creating, still pushing, still learning. The tool amplifies our effort. It doesn’t replace it.
Full autonomy is different. When we build systems designed to operate without us, making decisions, pursuing goals, and solving problems independently, we’re no longer in the act of creation. We’re spectators. And humans don’t thrive as spectators.
We atrophy, as a species, when we stop driving. When we stop creating. When we stop pushing against the boundaries of what we understand.
Autonomous systems don’t preserve human capability; they erode it. Not because the tools are bad, but because they remove us from the loop entirely. They solve problems without us learning how. They make decisions without us developing judgment. They pursue goals without us understanding why those goals matter.
The danger isn’t that AI will become too powerful. It’s that we’ll become too passive.
Augmentation keeps us in the driver’s seat. Autonomy puts us in the back, watching the scenery go by, wondering where we’re headed and why.
Who Are These Systems For?
To figure out where we’re headed, it helps to take a step back and ask why we’re building these systems in the first place.
Who is this for?
If the answer is “to improve human lives,” then augmentation makes obvious sense. Systems that extend human capability, making us more creative, informed, and effective, serve humans directly.
But if the answer shifts to something else like “to advance knowledge” or “to optimize the planet” or “to pursue intelligence for its own sake,” then autonomous systems start to seem more logical. Why keep humans in the loop if the goal isn’t human flourishing?
The trouble is that pursuing any of these abstract goals without human experience at the center creates a fundamental misalignment. Once the objective is something like maximizing knowledge or optimizing the planet, humans can easily become obstacles rather than beneficiaries.
If the system’s purpose is planetary optimization, human needs might conflict with optimal outcomes. If it’s pure knowledge advancement, human understanding becomes irrelevant. The system could pursue insights we can’t comprehend or don’t care about. If it’s intelligence for its own sake, human intelligence becomes a constraint to work around.
A world where autonomous AI pursues abstract goals, even ones that sound beneficial, while humans watch from the sidelines isn’t a triumph. It’s a kind of retirement we didn’t ask for. And depending on how those goals are defined, it could become something worse: a world optimized for objectives that don’t actually serve human interests.
The entire justification for building these systems should be rooted in expanding what humans can do, understand, and create. The moment the purpose becomes “replace human effort” or “pursue goals independent of human experience,” we’ve lost the thread of why any of this matters.
Augmentation keeps the purpose clear: these are tools for us, built to extend us, accountable to human judgment and values. Autonomy, by definition, starts to drift away from that foundation.
The Cognitive Sidekick
Augmented intelligence feels less like a substitute for human thinking and more like working with a trusted second. It feels like someone standing beside you, quietly offering context, alternatives, reminders, or corrections as you need them.
You can already see this evolution happening rapidly. Over the past year, coding has transformed from a solitary task into a conversational one. Tools like Claude Code let you describe what you’re trying to build, discuss architectural tradeoffs, generate implementations, and debug problems through natural dialogue. The developer still makes the critical decisions: what to build, which approach makes sense, whether the output actually solves the problem. But the back-and-forth with the tool expands what’s possible. You can explore more approaches, prototype faster, and catch edge cases you might have missed, all while remaining the one who decides what’s worth pursuing.
This isn’t the tool replacing the developer. It’s the tool extending the developer’s reach. The human provides intention and judgment. The tool handles generation and iteration. Together, they move faster and tackle more complexity than either could alone.
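That division of labor can be sketched as a simple loop. This is deliberately abstract, not a model of any real product: `generate_candidate` and `human_accepts` are hypothetical stand-ins for the tool and for the developer’s judgment.

```python
def augmented_workflow(intent, generate_candidate, human_accepts, max_iterations=5):
    """Illustrative augmentation loop: the tool generates, the human judges.

    `generate_candidate` and `human_accepts` are stand-ins for whatever
    tool and whatever judgment are in play; nothing here models a real API.
    """
    candidate = None
    feedback = intent
    for _ in range(max_iterations):
        candidate = generate_candidate(feedback)   # tool handles generation
        if human_accepts(candidate):               # human supplies judgment
            return candidate
        # The human's intent stays constant; the rejection becomes context.
        feedback = f"{intent}; previous attempt rejected: {candidate}"
    return candidate  # human decides what to do with the best attempt so far
```

The structure makes the essay’s point visible: the loop cannot terminate usefully without `human_accepts`, because acceptance is exactly the judgment the tool doesn’t have.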
And this pattern is spreading. Some tools whisper suggestions during job interviews. Others watch your screen and surface relevant information in real time. Some extend your memory, others your planning, others your ability to explore ideas at scale.
The tool compensates for weaknesses and extends strengths. But you remain the one steering, because direction, values, and goals still come from human experience.
The interfaces are moving closer to our senses. We’ve gone from text to voice to real-time visual overlays. Eventually, tools may integrate even more directly through brain-computer interfaces, not to replace cognition but to streamline how we access and manipulate information.
This trajectory isn’t about building separate intelligent beings. It’s about extending the boundary of what a human can perceive, understand, and accomplish.
The Future Worth Building
If we build toward augmented intelligence, we get a world where every person has access to a powerful cognitive partner. Where creativity becomes more accessible rather than being gatekept by technical skill. Where research accelerates without losing human insight. Where decision-making improves rather than atrophies. Where education becomes individualized and dynamic.
This isn’t about resisting technological progress. It’s about aiming it in the direction that makes us more capable, not less.
The purpose of intelligence technology isn’t to create artificial minds. It’s to empower human ones.
We don’t need artificial general intelligence. We need augmented humanity.
The future isn’t machines that think like humans. It’s humans who think with machines. Technology should be used to expand our own intelligence, creativity, and ability to understand and shape the world, not take our agency away.