Preparing for a world where humans don’t write code
A déjà vu from August 2025
Most engineering organizations today are racing to adopt AI, but almost all of them are optimizing for the wrong future. They are adding copilots to existing workflows, measuring productivity in pull requests closed, and celebrating incremental gains in developer speed. All of this assumes that the core shape of software development will remain intact—that humans will continue to write code, review code, and act as the central bottleneck between intent and implementation. That assumption is quietly becoming false.
I wrote about this last August. I was directionally correct, but I got the time horizon completely wrong. The curtain won't fall in five years; it might fall by the end of this year.
4% of public GitHub commits are authored by Claude Code right now. At the current trajectory, we believe Claude Code will account for 20%+ of all daily commits by the end of 2026. While you blinked, AI consumed all of software development. (Source: SemiAnalysis)
A more honest way to reason about what’s coming is to start at the limit. Assume a world where all production software is written by agents. Not assisted by AI. Not pair-programmed with AI. Written by it. This is not a prediction about timelines or a claim about general intelligence. It is a design constraint. If code generation becomes cheap, continuous, and autonomous, the human act of writing and reviewing code stops scaling long before it stops feeling familiar.
This, too, was something I predicted would happen, but admittedly I didn't envision it happening this soon.
The programmer’s craft will follow the same arc as animation. In place of writing every line, much of the work will involve guiding and integrating what machines produce: defining the architecture, reviewing generated components, making judgment calls on trade-offs and user experience, and ensuring the final system works as intended. Code generation, testing, optimization, and deployment will be handled largely by automated systems. The value shifts from typing instructions into a machine to shaping, supervising, and combining the work of many machine contributors into a coherent, functioning whole.
Once you accept that asymptote, many of today’s “best practices” collapse immediately. Human code review becomes a bottleneck that cannot keep up with machine-generated change. Readability becomes a proxy for trust rather than a guarantee of correctness. The software development lifecycle—built around human scarcity and fallibility—starts to look less like a process and more like a historical artifact. The uncomfortable conclusion is not that engineers are obsolete, but that much of what we currently call engineering exists only because humans used to be the ones writing the code.
If your system needs humans to read code to trust it, it won’t survive a world where AI writes all software.
At StrongDM, we decided to take this limit seriously rather than treat it as a distant abstraction. Instead of asking how AI could make our engineers faster, we asked a harder question: what would break if humans were removed entirely from the act of writing and reviewing code?

That question led us to form a small, dedicated team operating under deliberately extreme constraints. The rules were simple and non-negotiable: no code written by humans, and no code reviewed by humans. Agents were not assistants; they were the builders. We called the combined team of humans and agents the Software Factory.
These constraints were not chosen because they were safe or efficient. They were chosen because they surfaced reality quickly. The moment humans stop filling in gaps—intuiting intent that was never written down, fixing edge cases in reviews, compensating for brittle tests—the weaknesses of tooling, process, and assumptions become impossible to ignore. Specifications that once seemed adequate weren’t. Tests that validated implementation rather than intent failed outright. Workflows built around pull requests, ownership, and manual approvals collapsed under the weight of autonomous change. What initially felt reckless turned out to be clarifying. The discomfort was not a signal to stop; it was evidence that we were finally seeing the system as it actually was.
From those experiments emerged a set of first principles that now anchor how we think about the future of software development. The first is that code is disposable. When generation is cheap and continuous, long-lived codebases are not assets to be preserved but byproducts to be replaced. This has profound implications: if code is not an asset and can be created cheaply on demand, what then is the moat in software?
The second is that human readability is not a scalable safety mechanism. Trust in autonomous systems cannot come from inspection; it must come from validation—scenarios, simulations, invariants, and continuous observation of outcomes.
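To make that concrete, here is a minimal sketch of validation-by-invariant using property-based testing (Python with the hypothesis library). The function under test is a stand-in for agent-generated code, and the names and invariants are illustrative assumptions, not our actual tooling. The point is that the checks constrain observable behavior, so they keep holding no matter how an agent rewrites the implementation.

```python
# A minimal sketch of validation over inspection: instead of reading the
# generated code, we pin down invariants it must satisfy under randomized
# inputs. normalize_permissions stands in for agent-generated code; all
# names here are illustrative.
from hypothesis import given, strategies as st

def normalize_permissions(perms: list[str]) -> list[str]:
    """Stand-in for an agent-generated implementation."""
    return sorted({p.strip().lower() for p in perms if p.strip()})

@given(st.lists(st.text()))
def test_idempotent(perms):
    # Normalizing an already-normalized list changes nothing.
    once = normalize_permissions(perms)
    assert normalize_permissions(once) == once

@given(st.lists(st.text()))
def test_no_privilege_invented(perms):
    # The output must be a subset of the (cleaned) input: the code may drop
    # or deduplicate permissions, but never grant one that wasn't requested.
    cleaned = {p.strip().lower() for p in perms if p.strip()}
    assert set(normalize_permissions(perms)) <= cleaned
```

No human ever needs to read the generated function for these guarantees to hold; the properties travel with the behavior, not with any particular implementation.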
The third is that programming is no longer the act of writing instructions, but the act of shaping constraints. Humans define intent, boundaries, and failure modes; agents explore the implementation space within them.
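One way to picture "shaping constraints" is as a declarative envelope that humans author and agents fill. The sketch below, with hypothetical field names and an invented example, shows the kind of artifact a human might own in place of code: intent, boundaries, failure modes, and invariants, with everything inside the envelope left to the agent.

```python
# A sketch of programming as constraint-shaping. The human artifact is the
# envelope; the implementation space inside it belongs to agents. Field
# names and the example are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConstraintEnvelope:
    intent: str                                              # what the system must accomplish
    boundaries: list[str] = field(default_factory=list)      # what it may never do
    failure_modes: list[str] = field(default_factory=list)   # what it must detect and handle
    invariants: list[str] = field(default_factory=list)      # properties validation must prove

session_revocation = ConstraintEnvelope(
    intent="Revoke an active session within one second of a policy change",
    boundaries=[
        "Never widen access as a side effect of revocation",
        "Never block unrelated sessions while revoking one",
    ],
    failure_modes=["Datastore unavailable", "Revocation races a new grant"],
    invariants=["A revoked session can perform no further actions"],
)
```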
Taken together, these principles force a reframing of what it means to do engineering. If no human can or should read most of the code, the traditional software development lifecycle stops making sense. Design, implementation, review, and testing collapse into a continuous control loop: generate, validate, observe, adapt. Quality is no longer something you approve at a moment in time; it is something the system proves continuously. This is not a speculative future. It is the logical conclusion of tools that already exist.
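A schematic of that control loop, assuming three pluggable capabilities (a generator, a validator, and production observation), might look like the following. This is an illustration of the loop's shape, not our Software Factory implementation; every name here is hypothetical.

```python
# A schematic generate -> validate -> observe -> adapt loop. Failures are
# fed back as input to the next generation round rather than escalated to
# a human reviewer. All interfaces are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Result:
    ok: bool
    feedback: str  # failing scenarios, violated invariants, anomalous metrics

def control_loop(
    generate: Callable[[str, str], str],   # intent + feedback -> candidate change
    validate: Callable[[str], Result],     # scenarios, simulations, invariants
    deploy: Callable[[str], None],
    observe: Callable[[str], Result],      # outcomes from production telemetry
    intent: str,
    max_rounds: int = 20,
) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        change = generate(intent, feedback)   # generate
        checked = validate(change)            # validate
        if not checked.ok:
            feedback = checked.feedback       # adapt: feed failures back in
            continue
        deploy(change)
        observed = observe(change)            # observe real outcomes
        if observed.ok:
            return True                       # quality proven continuously
        feedback = observed.feedback          # adapt from production signals
    return False
```

The essential property is that no step in the loop waits on a human approval; humans shape the intent and the validators, and the loop proves quality continuously.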
The question, then, is what a present-day engineering organization should do if it believes this future is real. Below are some of the lessons we learned as we embarked on this journey at StrongDM.
The first step is to create a small, explicitly AI-first team that operates under these principles. This team should not be measured by features shipped or velocity improved. Its purpose is to discover where reality breaks: where specifications are too vague, where validation is insufficient, where tooling assumes a human will intervene. This team will invest heavily in areas most organizations underinvest in—scenario frameworks (sketched below), agent workflows, memory and feedback systems, observability, and outcome-based validation. Their role is to let the future collide with the present in a controlled way. This is the team that built our Software Factory, which is a combination of best practices, tooling, teachings, and new processes.
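As one example of what a scenario framework can mean in practice, here is a minimal sketch in which scenarios are executable specifications judged only on observable outcomes. All names are hypothetical; the shape matters more than the details, because agents can rewrite implementations freely as long as every scenario still passes.

```python
# A minimal sketch of a scenario framework: scenarios specify outcomes,
# not implementation details, so they survive wholesale rewrites by agents.
# All names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Scenario:
    name: str
    setup: Callable[[], Any]              # arrange a world state
    action: Callable[[Any], Any]          # exercise the system under test
    outcome_holds: Callable[[Any], bool]  # judge observable behavior only

def run_scenarios(scenarios: list[Scenario]) -> dict[str, bool]:
    return {s.name: s.outcome_holds(s.action(s.setup())) for s in scenarios}

# Example: the scenario cares that revocation takes effect, not how.
revocation = Scenario(
    name="revoked session cannot act",
    setup=lambda: {"session": "s1", "revoked": False},
    action=lambda world: {**world, "revoked": True},
    outcome_holds=lambda world: world["revoked"],
)

print(run_scenarios([revocation]))  # {'revoked session cannot act': True}
```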
The second step is to let those learnings spread without coercion. Practices should cross-pollinate into the broader engineering organization organically, not through mandates. Some teams will adopt them quickly; others will move more slowly. That is fine. What matters is that leadership is explicit about direction. This is not a side experiment or a temporary optimization. It is where the organization is heading. Engineers should be free to move at their own pace, but no one should be confused about which way the river flows.
The third step is to build real software this way—software that matters to the business. Not internal tools, not prototypes, and not experiments that can be quietly abandoned. Choose a system with real users, real constraints, and real consequences, and build it using these factory principles. No human-written code. No human code review. Agents own implementation end to end; humans own intent, validation, and outcomes.
This is not a risk-free move, and pretending otherwise would be dishonest. But the real risk is not that agents will fail. The real risk is continuing to insulate organizations from reality by only trusting AI where failure is inconsequential.
Organizational change does not happen because people are persuaded by arguments about the future. It happens when existing assumptions stop working. When real systems are built this way, the conversation stops being philosophical. Engineers stop debating whether agents can do the work and start confronting what it takes to let them do it well. Practices spread not because leadership mandates them, but because nothing else scales.
You don’t have to move all at once. You don’t have to move blindly. But you do have to move deliberately. The world where humans write most production code is already receding, whether organizations acknowledge it or not. The only remaining choice is whether you encounter that reality gradually, on your own terms—or all at once, when the gap between how you build software and how the world builds software becomes impossible to ignore.
It is a disruptive moment, but also a rare one: a chance to learn faster, rethink fundamentals, and combine AI with human judgment and curiosity to build systems we simply couldn’t before. What a time to be alive!
I wonder if (and increasingly suspect that) we'll see a surge of interest in formal methods. Specification languages like TLA+ have thrived in academia for years but have ~zero traction in industry. That may change now that verification is suddenly the only leg of the stool left standing for how humans can instill trust in software.