Will AI kill SaaS? The case for and against disruption
Not long ago, the enterprise software stack felt settled—like a set of tectonic plates that had stopped moving. You picked your CRM, your ticketing system, your BI dashboards, your data warehouse, your ERP. You paid your subscriptions, hired specialists to operate the tools, and accepted that progress in business software came in the form of quarterly releases and incremental UX improvements.
Then vibe coding happened.
The term sounds flippant, but the implication is serious: software creation is being compressed into language. Andrej Karpathy popularized “vibe coding” to describe a mode where you stop writing code in the traditional sense and instead steer an AI system with intent—describing what you want, iterating in natural language, and letting the model generate the implementation.
What makes this disruptive isn’t that engineers get faster. It’s that the cost of making software is collapsing.
The old moat is eroding
For decades, the most powerful moat SaaS had wasn’t a feature set or a UI. It was the reality that building software—especially enterprise software—was expensive. It required teams, timelines, coordination, and maintenance. Even when SaaS felt overpriced, it was still cheaper than reinventing it yourself.
Vibe coding changes that assumption. It drives the fixed cost of producing software down so aggressively that the build-vs-buy calculation begins to tip. When a team can describe an internal workflow in natural language and generate a working app in days, the software itself starts to feel less like a product and more like a commodity.
The SaaS model rested on this asymmetry: companies absorbed high fixed costs to design and maintain software, then scaled to millions of users at high margins. AI destabilizes this model. By automating coding, debugging, and maintenance, it drives the fixed cost of creation toward zero. With both creation and distribution essentially free, the foundations of the industry shift.
And once software becomes cheap to create, the expensive part of enterprise operations becomes harder to ignore: the human labor embedded inside workflows.
SaaS is a workflow product
There’s a simple way to describe most SaaS categories: they don’t do the work, they organize it. Support platforms don’t resolve issues—they route them. BI tools don’t produce insight—they help humans translate questions into queries. SOC tooling doesn’t remediate incidents—it creates queues of alerts for analysts to triage.
This distinction matters because the economic value of many SaaS products has always been tightly coupled to human effort. The workflow may be digitized, but it is still human-operated.
Agentic AI changes the structure of this bargain. It turns the workflow itself into something executable:
The agent reads the ticket, opens the Jira, pulls the logs, drafts the customer response, and writes the postmortem.
The agent correlates alerts, enriches with threat intel, contains the endpoint, and files the incident report.
The agent qualifies the lead, sequences outreach, follows up, and books the meeting.
That’s not a UI improvement. It’s labor substitution.
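The support example above can be sketched as a single function an agent runs end-to-end. Every tool call here is a hypothetical stub standing in for a real integration (ticketing, log search, an LLM drafting step); none of these names come from a real API:

```python
# Minimal sketch of a workflow executed by an agent rather than routed to a
# human. All tool functions are hypothetical stubs for real integrations.
def read_ticket(ticket_id):
    return {"id": ticket_id, "error": "checkout timeout", "customer": "Acme"}

def pull_logs(ticket):
    return [f"payment-svc: timeout handling order for {ticket['customer']}"]

def draft_response(ticket, logs):
    # In production this step would be an LLM call grounded in the logs.
    return (f"Hi {ticket['customer']}, we traced the {ticket['error']} "
            f"to our payment service and are deploying a fix.")

def run_support_workflow(ticket_id):
    ticket = read_ticket(ticket_id)
    logs = pull_logs(ticket)
    return draft_response(ticket, logs)

print(run_support_workflow("T-1042"))
```

The point of the sketch is structural: each step that used to be a queue item for a person becomes a function call in a chain.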
Most “AI replaces SaaS” takes focus on interfaces: chat replaces dashboards, copilots replace forms, natural language replaces clicks. Those shifts are ergonomic, but the deeper disruption is economic. SaaS products have two major cost components: the upfront cost to build the software, and the ongoing cost of operating workflows through it—including, critically, human effort.
The thesis in one line:
Vibe coding collapses the build cost. Agents collapse the workflow tax.
The counterargument: why SaaS may prove stickier than expected
The disruption case is compelling in theory. In practice, several forces work against it.
Switching costs are brutal
Enterprise software doesn’t exist in isolation. It’s woven into identity systems, data pipelines, compliance frameworks, and years of accumulated configuration. Salesforce isn’t just a CRM—it’s a system of record that dozens of other tools depend on. Ripping it out to replace it with an AI-native alternative isn’t a product decision; it’s a multi-quarter migration project with real risk of data loss, broken integrations, and user revolt.
Even if an AI-built alternative is functionally equivalent, the cost of switching often exceeds the cost of staying.
Enterprises pay for accountability, not just functionality
When a SaaS product breaks, there’s a vendor to call, an SLA to invoke, and a contract that assigns liability. When an AI-generated internal tool breaks, the team that built it owns the problem—and they may have moved on, lost context, or never fully understood what the AI produced.
Regulated industries make this even starker. Banks, healthcare systems, and government agencies don’t just need software that works—they need software they can audit, explain to regulators, and defend in court. “The AI wrote it” is not yet an acceptable answer in most compliance frameworks. Incumbents have an advantage here; they are already in the enterprise.
Organizational inertia is a feature, not a bug
Large organizations are slow to change not because they’re stupid, but because they’ve learned that rapid change is risky. The people who operate existing SaaS tools have built expertise, careers, and political capital around them. Replacing Tableau with an AI-native BI tool doesn’t just require better technology—it requires retraining analysts, rebuilding dashboards, convincing executives that the new approach is trustworthy, and managing the inevitable disruption to reporting workflows.
Many enterprises will rationally choose “good enough and familiar” over “potentially better but unproven.”
Data gravity is real
Your data lives in your existing systems. Moving it is painful, risky, and often incomplete. Even if an AI-native tool is superior, it starts cold—without the years of historical data, learned patterns, and institutional knowledge embedded in your current stack. That's one reason Salesforce has started locking down access to its data, most notably Slack's.
What won’t be disrupted
This framework clarifies why some categories are resistant: not because they’re loved, but because they’re hard to replicate economically.
Foundational infrastructure—database engines, distributed compute, message queues, storage platforms—is engineered for reliability, performance, and fault tolerance. These systems are already highly optimized to minimize compute cost per unit of work.
Consider this thought experiment: an AI data warehouse. You stream records into an LLM-backed store and query it conversationally. It sounds elegant until you try to run a real workload. Query performance, concurrency, determinism, caching, and predictable cost behavior are not UX details—they are the product. Snowflake, Databricks, and other data infrastructure vendors will do just fine in this world.
An LLM can’t reason its way into being a better warehouse engine. And if the AI system “solves” this by calling DuckDB, Snowflake, or Postgres, it is relying on the very thing it is trying to replace.
Where disruption pressure is highest
Above infrastructure, the calculus flips.
The higher you go, the more enterprise software becomes about translating messy human intent into structured steps. That translation layer is expensive in one way: human time.
Business Intelligence (BI) tools like Tableau and Looker sit above warehouses and translate data into charts and dashboards. But the hidden cost of BI isn't compute—it's the labor required to translate business questions into SQL, curate dashboards, maintain reporting logic, and interpret results.
LLMs can collapse the translation layer. They can query databases from natural-language input and generate charts and narratives directly. Even if compute costs rise, the human cost drops sharply, and that's what drives displacement pressure. I've lived this at StrongDM and opted to invest in an AI-native tool, Arka, instead of traditional BI; see here for a case study.
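A minimal sketch of that collapsed translation layer, using an in-memory SQLite database. The question-to-SQL step is hard-coded here as a stand-in for an LLM call, and the table, columns, and figures are invented for illustration:

```python
import sqlite3

# Sketch: a business question goes straight to SQL and a narrative answer,
# with no dashboard in between. Schema and data are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revenue (region TEXT, amount REAL)")
db.executemany("INSERT INTO revenue VALUES (?, ?)",
               [("EMEA", 120.0), ("AMER", 340.0), ("APAC", 90.0)])

def question_to_sql(question: str) -> str:
    # An LLM would generate this; hard-coded to keep the sketch runnable.
    return ("SELECT region, SUM(amount) FROM revenue "
            "GROUP BY region ORDER BY 2 DESC")

def answer(question: str) -> str:
    rows = db.execute(question_to_sql(question)).fetchall()
    top_region, top_amount = rows[0]
    return f"{top_region} leads with {top_amount:.0f}k in revenue."

print(answer("Which region drives the most revenue?"))
# → AMER leads with 340k in revenue.
```

Note what is and isn't displaced: the warehouse still executes the query; only the human translation layer between question and chart disappears.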
Support is one of the clearest examples of SaaS as workflow.
A typical support flow involves reading tickets, searching knowledge bases, pulling customer context, checking logs, coordinating with engineering, escalating, and following up. These are labor-heavy steps coordinated by software, not executed by it.
AI agents can execute large portions of this workflow directly, which is why support SaaS is highly disruptible under the cost framework.
If the cost framework is right, disruption won’t hit SaaS evenly. It will hit where:
workflows are repetitive and text-heavy
outputs are verifiable
compute requirements are manageable
and human effort dominates the cost
Below is a practical view of enterprise SaaS verticals that are most exposed.
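As a rough way to compare exposure across verticals, one could score each against the four criteria. The verticals and scores below are illustrative guesses for the sketch, not measurements or the article's own assessment:

```python
# Illustrative exposure scoring (1 = low, 5 = high per criterion):
# repetitive text work, verifiable output, manageable compute, human cost share.
verticals = {
    "support":        (5, 4, 5, 5),
    "BI/reporting":   (4, 4, 4, 4),
    "data warehouse": (1, 5, 1, 1),  # compute IS the product
}

def exposure(scores):
    return sum(scores) / (5 * len(scores))  # normalized to 0..1

for name, scores in sorted(verticals.items(),
                           key=lambda kv: exposure(kv[1]), reverse=True):
    print(f"{name:14s} exposure={exposure(scores):.2f}")
```

Under this toy scoring, labor-coordination products rank high and infrastructure ranks low, matching the framework above.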
The realistic outlook
The real takeaway is that the disruption window is open—but it’s not wide, and it’s not moving at internet speed.
Agent adoption is still early. According to recent UBS AI research, only about 5% of enterprise respondents are deploying AI agents in production at scale, 71% are either in pilots or in production at small scale, and 22% aren't deploying agents even in pilots. Deployment this early-stage doesn't yet support the "AI agents displacing humans" bear case. The risk, as UBS notes, is that enterprise adoption stays slow against a backdrop of suppliers—Nvidia, OpenAI, Anthropic, and application vendors such as Salesforce and ServiceNow—describing a world in which agents drive material revenue and GPU consumption.
But early does not mean irrelevant. Vibe coding is collapsing the upfront cost of building software, and agents are steadily compressing the day-to-day workflow tax that many SaaS products monetize.
The most plausible outcome isn’t a sudden SaaS extinction event, but a slower repricing of outcomes. Challengers can win by targeting high-toil workflows with agent-native products. Incumbents can win too—if they’re willing to embrace the innovator’s dilemma and cannibalize their own products with agentic equivalents before someone else does.
The enterprises that benefit most will be those that recognize which of their SaaS tools are genuinely load-bearing infrastructure and which are expensive ways to coordinate human labor that could be automated. The former are worth keeping. The latter are worth questioning—even if the replacement isn’t ready today.
The disruption won’t be uniform, it won’t be instant, and it won’t be inevitable. But for the first time in a decade, the tectonic plates are moving again.