Everyone in tech is proclaiming the death of SaaS. And I get it: AI is here to stay, and it’s understandable to question whether this seismic shift will completely change what we know as SaaS.
But through all the thought pieces on why SaaS is dying, I have yet to see anyone truly tackle what it takes for tech leaders to navigate it. So let me be direct: SaaS isn't dying. But AI is destabilizing it faster than most leaders want to admit.
I've been in this industry long enough to remember when competitive moats were measured in years. Now, “move fast or die” plays out over months, not years. And the markets have noticed.
In early February, Forbes wrote that in one day, about $300 billion of market value “evaporated” from the SaaS market. Even betting against SaaS has become lucrative: short sellers have made over $20 billion on positions against legacy SaaS since the start of 2026.
And it's not just the markets. I recently read a post from a fellow engineer who used Claude to build an anomaly detection tool for reviewing his cloud costs. That’s an entire cloud cost observability category wiped out by one guy and a chatbot.
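That kind of tool really can be small. As a hedged sketch of the idea, not the engineer's actual code: flag any day whose spend deviates sharply from its trailing baseline, assuming you can export daily cost totals from your cloud billing data.

```python
from statistics import mean, stdev

def detect_cost_anomalies(daily_costs, window=14, threshold=3.0):
    """Flag days whose cost deviates more than `threshold` standard
    deviations from the trailing `window`-day baseline."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = daily_costs[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        deviation = abs(daily_costs[i] - mu)
        # Floor sigma at 1% of the mean so a perfectly flat baseline
        # (sigma == 0) still lets a real spike through without
        # flagging tiny jitter.
        if deviation > threshold * max(sigma, 0.01 * mu):
            anomalies.append((i, daily_costs[i]))
    return anomalies

# Flat spend with one runaway day, e.g. a GPU instance left running
costs = [100.0] * 20
costs[17] = 450.0
print(detect_cost_anomalies(costs))  # [(17, 450.0)]
```

Thirty lines of stdlib Python, and it covers the core job an entire product category used to charge for.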
So how do we as SaaS leaders adapt to this new reality? It starts by understanding how the model broke and why our survival instincts are wrong.
How the SaaS Model Broke
In the cloud space, you can see exactly where the instability hits.
When we started Sedai, optimization cycles were slower. Customers evaluated savings over quarters. Infrastructure behavior was relatively predictable. Workloads scaled within guardrails, and engineering teams could keep up.
Over the past year, that changed dramatically.
AI workloads introduced burst patterns we hadn’t seen before, GPU costs that spike in hours, & traffic patterns that shift daily. Now, customers expect savings & zero performance regression, simultaneously & instantaneously.
Revenue conversations shifted, too. CFOs now scrutinize marginal compute cost per feature. Engineering leaders are accountable not just for uptime, but for cost per inference.
The volatility is no longer quarterly. It’s real time. And most SaaS leaders are responding to it in the wrong way.
Why SaaS Companies' Survival Instincts Are Wrong
There’s a lot of hand-waving from SaaS leaders who assume they’re insulated from the instability because customers can’t easily rip them out.
But as Jason Lemkin pointed out in his article about the death of SaaS, “If you’re a system of record and you don’t own the AI agents that make your platform thrive… You’ll become the database underneath someone else’s product.”
"Owning the AI layer" isn't bolting on a chatbot or adding AI features to your roadmap. It means engineering autonomy into the parts of your business that AI makes most volatile, and grounding that autonomy in your system of record.
At Sedai, that meant rethinking how we look at infrastructure, pricing, & people. When we center these things around autonomy, we can stop reacting to volatility and start absorbing it.
Pricing Strategy
Pricing is where the old SaaS model breaks first. With traditional SaaS, we saw:
- Seat-based pricing create artificial stability
- Revenue scale with headcount
- Ops teams gradually optimize infrastructure costs
- Margins improve slowly over time
But in the AI era:
- Revenue scales with usage
- Usage directly drives compute
- Compute is volatile
Every AI feature has a marginal cost. Every inference, retrain, & spike in traffic carries real infrastructure consequences. If you don’t understand cost per action in real time, pricing becomes guesswork.
The most dangerous scenario is subtle: You aggressively price AI capabilities to win distribution. You see adoption grow & usage spike, but gross margin erodes quietly beneath you. By the time finance flags it, it’s systemic.
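The arithmetic behind that erosion is simple enough to keep on a dashboard. A minimal, illustrative sketch (the prices and costs here are made-up numbers, not real benchmarks):

```python
def gross_margin(price_per_action, cost_per_action):
    """Gross margin fraction for a usage-priced AI feature."""
    return (price_per_action - cost_per_action) / price_per_action

# Priced aggressively at $0.010 per action to win distribution...
price = 0.010

# ...margin looks healthy at launch
print(round(gross_margin(price, cost_per_action=0.004), 2))  # 0.6

# A traffic spike pushes the workload onto on-demand GPU capacity,
# and cost per action now exceeds price per action
print(round(gross_margin(price, cost_per_action=0.012), 2))  # -0.2
```

The second number is the scenario finance flags too late: adoption is up, usage is up, and every action loses money.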
At Sedai, this is where we found that autonomous optimization becomes non-negotiable. If revenue is scaling with usage, your infrastructure must adapt at the same speed.
AI-native SaaS companies must design pricing alongside a system that relentlessly optimizes cloud cost & performance in real time. A continuously learning control layer that:
- Detects volatility
- Adjusts resources dynamically
- Validates impact
- Protects margin automatically
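In code terms, that control layer is a loop, not a script. Here is a minimal sketch of the shape, with pluggable callbacks; the names and wiring are hypothetical illustrations, not Sedai's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ControlLoop:
    observe: Callable[[], dict]            # pull live metrics (cost, latency, errors)
    decide: Callable[[dict], Optional[dict]]  # propose an adjustment, or None
    apply: Callable[[dict], None]          # enact the change (resize, rescale, reroute)
    measure: Callable[[dict], bool]        # validate: did the change hold SLOs?
    rollback: Callable[[dict], None]       # undo the change if validation fails

    def step(self) -> str:
        metrics = self.observe()
        action = self.decide(metrics)
        if action is None:
            return "steady"          # no volatility detected
        self.apply(action)
        if self.measure(action):
            return "optimized"       # margin protected, performance intact
        self.rollback(action)
        return "rolled_back"         # a failed change becomes training data

# Toy wiring: downsize whenever hourly cost crosses a threshold
loop = ControlLoop(
    observe=lambda: {"cost_per_hour": 42.0},
    decide=lambda m: {"resize": "down"} if m["cost_per_hour"] > 10 else None,
    apply=lambda a: None,
    measure=lambda a: True,
    rollback=lambda a: None,
)
print(loop.step())  # "optimized"
```

The point of the shape is the validate-or-rollback branch: a system that applies changes without measuring their impact is automation, not autonomy.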
When pricing becomes disconnected from your infrastructure’s reality, margin erosion becomes a guarantee.
Infrastructure Efficiency
Infrastructure efficiency is now strategic survival. AI workloads are bursty & nonlinear; they retrain unpredictably & often require GPU resources that cost 10x what traditional compute costs.
When your infrastructure can't absorb that volatility, you see costs spike & performance degrade. The complexity is too high and changes happen too fast for human operators to manage manually anymore.
To become durable, companies must embed autonomous optimization directly into their infrastructure stack.
This doesn’t mean implementing scripts, static automation, or alerts only humans can act on. Instead, it means using systems that continuously observe real behavior, decide on optimizations, test changes safely, & learn from outcomes.
In the AI era, infrastructure efficiency goes beyond cost savings and allows your teams to engineer stability into your product that the market no longer guarantees.
People Optimization & Upskilling
Speaking as an engineer myself: the engineers I see fighting AI adoption, or simply moving too slowly on it, are going to become irrelevant.
Not eventually. Soon.
That's not a comfortable thing to say, but I think it's a disservice to pretend otherwise.
With SaaS companies using AI to automate engineering tasks, the instinct is to cut headcount. But that’s the wrong move: tech leaders should see this as an opportunity to upskill their engineering teams, not a prompt to second-guess how many engineers they need.
The companies that will stay competitive will be the ones that enable their engineers to direct, validate, & accelerate what AI produces.
At Sedai, this was easier than it might be elsewhere: AI adoption is in our engineering DNA. Our team didn't need convincing. But I know that's not the norm.
I've been having a lot of conversations on my podcast with engineering leaders about how other engineering teams are adopting AI, and a few patterns keep coming up.
For most teams, the practical entry point is documentation. It's low risk, immediately valuable, & removes one of the most avoided tasks in engineering.
From there, move into testing. Let AI generate test cases, validate edge cases, & catch regressions. Once engineers see how much faster they ship with AI handling that layer, adoption moves naturally into code generation & review.
The progression matters. Each of these steps builds trust in what AI produces before handing it more responsibility. By the time engineers are using AI to generate & review code, they're not replacing their judgment; they're applying it at a higher level.
Headcount and tooling can only get you so far. The real upskilling happens when engineers focus on orchestrating what AI produces rather than just prompting it.
Conclusion
We need to stop pretending what used to work in traditional SaaS can work in an AI-native world. The SaaS companies that survive the shift will be built on autonomous systems that optimize, decide, & adapt faster than any human operator can.
Sedai is built for this, because we've been building toward it from the start. Every decision we've made, from how we price to how we approach infrastructure efficiency to how we build our teams, comes down to one thing: you have to engineer autonomy into the foundation.
