The age of the human bottleneck is ending. The age of human intent is beginning
Prologue
It might not come as a surprise that I use GenAI to help me research and edit my content. My use of GenAI products, at least in relation to my posts, has been editorial and research-oriented rather than outright content creation. This article is different. It has been ideated and written entirely by GenAI. My motivation in using various GenAI models to create this article wasn't that they can write better than I do (which they absolutely do). It was my attempt to use them to project a future world, one I am trying to comprehend and predict.
I gave the following prompt to Gemini, ChatGPT and Claude:
You are a futurist technologist. You think about innovation waves, like AI, and how they might shape and change society. You are now thinking about how AI might impact software organizations of the future. I want you to start by benchmarking today’s software organizations: companies who build software products. How are they staffed and organized. I then want you to project how AI might change that. Be bold, but pragmatic and whenever possible tethered into reality. Try and get examples from current AI-native companies and project forwards into the future.
Each model generated its own content, which I then edited in several passes. First, I challenged assumptions, asked questions, pushed back on arguments, and urged the models to be bolder yet realistic (although, to be honest, is anything unrealistic now, with the advent of AI?). I also asked each model to review and edit the content created by the other models. I went back and forth for a few days, probing each model and tossing one model's content to another to critique, until I - or more appropriately we - settled on what follows.
The year is 2035. Eliza stands in what used to be called The Stack—fifty engineers, bright monitors, the low thrum of a hundred keyboards clattering out React components. It had taken them six months to ship version one.
Today, the room has a new name: The Engine Room. And it’s silent.
Eliza isn’t an engineer anymore. She’s a Chief Intent Officer. And there are only two other people in the room.
But to understand where we’re going, you need to see what’s dying right now.
The age of the human bottleneck
“It feels like a superpower,” says an engineer at a seven-person startup in 2024. He doesn’t know it yet, but he’s witnessing the beginning of the end—not of software development, but of software developers as we’ve known them for forty years.
A traditional 2020 software company was predictable: 50 people, Series A funded. CEO, CTO, five engineers, three product managers, two designers, four marketing people, three salespeople, two customer success reps. The math was fixed: $200K in ARR for every revenue-generating employee. A typical startup burned $1 million to reach its first $1 million in revenue.
The process was equally predictable. The Product Manager would spend weeks writing user stories. The UX Designer would spend another week mapping pixels. Then came ten engineers—what Eliza would later call “the Scribes of the System”—painstakingly translating English and Figma into thousands of lines of code.
“We were glorified typists,” Eliza would later say. “We were paid to translate the same five design patterns over and over.”
Then came the Cycle of Hand-Offs: two days for code review, a week for QA testing, a day for DevOps deployment. It was a beautiful, bloated bureaucracy built entirely around managing human error and communication lag.
It took six weeks to ship a button.
Until 2024, when that math broke.
AI-native startups today generate $3.48 million in revenue per employee, six times the figure at traditional SaaS companies. The numbers are almost obscene:
Gamma: tens of millions in ARR from 50 million users with 28 employees. “If we were part of the previous generation, we would easily have around 200 employees,” says co-founder Grant Lee.
Cursor: $100M ARR in 21 months with 20 people.
Bolt: $20M ARR in two months with 15 people.
And here’s the part that breaks everyone’s brain: they’re intentionally capping their size. Runway Financial will stop hiring at 100 employees. Agency plans the same. The growth-at-all-costs mentality—the defining characteristic of startup culture for thirty years—is being abandoned.
Not because they’re failing. Because they’re winning so efficiently they don’t need more people.
Eliza remembers the moment she knew the old world was doomed. It was 2026, and she was testing Project Chimera, the first system built entirely by an Autonomous Agent Pipeline.
She spent an afternoon writing a five-page specification: Increase widget customization rates by 25% by allowing users to upload custom textures. Must maintain latency under 100ms, cost less than $100 monthly.
She hit GO.
The system generated an Architectural Proposal—fully documented backend services, frontend components, optimized database schema, plus a Risk Assessment that flagged three security vulnerabilities before the code was written.
She handed it to Damon, the sole Agentic Architect. He didn’t write a single line of application code. He spent his day tuning agent parameters—adjusting the Code Generation Agent’s language bias toward Rust, tightening the Testing Agent’s input validation focus. Within 72 hours, Chimera was in production. Fully tested, fully documented, self-healing.
The old guard engineers watched in horror. By 2027, it wasn’t a fluke. It was the new standard.
By 2030, a new software company starts with what Eliza calls the Nuclear Core:
Eliza: The Chief Intent Officer (formerly “CEO/Product”): Doesn’t manage features; manages simulated futures. Focuses solely on defining high-leverage problems AI should solve next. Writes strategic intents: “Increase conversion by 15% while reducing support tickets by 20%.” The AI figures out how.
Damon: The Agentic Architect (formerly “CTO”): Rarely looks at generated code but obsessively monitors Agent Telemetry—the metrics of the system that builds the system. Doesn’t write code; conducts an AI symphony. One developer called it “God mode”—able to conjure code at will. The Architect lives there permanently.
Maria: The Quality Sentinel (formerly “Head of Engineering/QA”): Final human oversight. Ensures output is secure, compliant, adheres to ethical guardrails. Doesn’t test code—tests the systems that test code.
The Supporting Cast is all AI: coding agents, customer support agents, auto-generated documentation, 24/7 QA testing, self-healing DevOps, marketing content generation, financial modeling.
By 2035, when Eliza stands in the silent Engine Room, this isn’t revolutionary. It’s just how software works.
Gone is the old way: PM writes epic (2 weeks). Designer creates mockups (1 week). Ten engineers translate to code (3 weeks). QA tests (1 week). DevOps deploys (1 week). Total: 8 weeks minimum.
Brainstorm in the morning, prototype by midday, polished product by night. This is the new normal.
The silence is perfect execution
Walk into a software company in 2035 and the first thing you notice is the silence. Not because nothing’s happening—because so much is happening seamlessly.
Eliza’s display shows Agent Telemetry: thousands of micro-decisions per second, code being generated and tested and deployed, user feedback analyzed and incorporated, A/B tests running in parallel universes of possibility.
The three humans aren’t coding. They’re thinking. They’re defining the next strategic intent. They’re asking questions the machines can’t:
Should we enter healthcare? Is this growth sustainable? What problem are we really solving?
These are human questions requiring judgment that can’t be automated—not yet, maybe not ever.
The office is quiet because the age of human labor in software is over. The age of Human Intent—amplified by the machine—has begun.
Three people. A codebase ten times larger than fifty people managed a decade ago. Output measured not in closed tickets, but in validated strategic outcomes.
On the wall: three simultaneous development cycles—a security patch, a UI/UX experiment, a market expansion. No humans involved in execution. Just the Nuclear Core, defining what should happen.
Then watching it happen.
If you’re building a software company in late 2024, you’re living through the transition. The Age of the Human Bottleneck is ending. The Age of Human Intent is beginning.
You can be Eliza, learning to define strategic outcomes rather than manage features. You can be Damon, learning to conduct the AI symphony rather than play each instrument. You can be Maria, learning to audit systems rather than test code.
Or you can be one of the Scribes, still typing away, convinced this is hype, that humans will always be needed to translate business logic into code.
The Engine Room is being built right now. The silence is coming.
The only question left is: will you be one of the three people in the room?
Or will you be the reason the room is silent?
Epilogue
When I first started working on this article with the aforementioned models three weeks ago, I had no intuition as to what they would generate. Somewhat surprisingly, all three painted a picture not too different from Eliza’s world in this article. All three agreed that today’s world of software development will radically change.
The question I keep pondering is this: will the change depicted here materialize?
And the answer, at least from my own experience using GenAI and observing it being used in software development, is yes. There’s a separate question of how satisfying this mode of work is relative to how software is built today. It’s akin to programming a loom versus hand-weaving a rug. The art of the craft dies and is replaced by (cold) automation.
Today’s models can almost certainly build - almost autonomously - a wide class of software applications. These might be simple: a frontend, an API middle tier and a Postgres database. But that’s usually how disruption starts: at the bottom, moving upstream. Today the simple three-tier web/mobile application is disrupted; tomorrow, the enterprise SaaS product.
I suppose we will know we have reached this moment when Anthropic, OpenAI and the other labs stop hiring software developers altogether.
We have time. All are still hiring :)


