
Autonomy Is a Context Problem, Not a Model Problem


Suresh Mathew

Founder & CEO

February 13, 2026



Agentic systems are rapidly moving into real-world production environments, and we’re facing deeper conversations about what autonomy actually means for SaaS. We’re no longer asking whether these systems can act without human input, but whether they can truly behave autonomously over time.

Recently, I read an article about SaaS systems “surviving the shift to agents.” Part of this shift is reckoning with how the underlying agentic tech, like large language models (LLMs), can push SaaS towards true autonomy.

The truth: LLMs are routinely mistaken for autonomous systems because they appear intelligent. They’re probabilistic reasoners, capable of synthesizing information and chaining together rules, heuristics, and processes. The end result feels surprisingly human.

But most systems built around LLMs — including many agentic systems — still operate within fixed, human-defined constraints. They may reason probabilistically, but the system itself reacts to inputs, follows predefined logic, and executes workflows.

That’s powerful, but it’s not the same thing as a system that understands why it acted, what changed as a result, and how that should influence the next decision. 

And without that understanding, LLM-based systems can easily break when put to the test.

How Context Graphs Support Autonomy

We’re entering the next phase of the AI arms race. The leap to truly autonomous systems — systems that understand the impact and results of their decisions — will take more than bigger and better LLMs, because even the best LLMs are limited by the rules we as humans put in place.

This is where context graphs come in.

Context graphs are emerging as a way to bridge an LLM’s chain-of-thought with contextualized memory. Because context graphs are built from decision traces, they can enable systems to understand all previous decisions — from the initial context to the ultimate results.

As Jamin Ball noted in his recent Substack post about systems of record, “Humans can hold nuance in their heads.” If we are to truly create an autonomous system, it must retain and reason over nuance as well.

This is what begins to separate automation from autonomy.

How Sedai Builds Autonomy with Context

What’s interesting is that this isn’t a new idea for us.

Sedai is a cloud optimization platform that autonomously cuts cloud costs. Because we work in high-stakes production environments, safety is our number one priority, and context is the key to safety. 

In our experience, automated optimization without context lacks nuance, and that makes it unsafe: anything that goes wrong beyond set parameters can bring down production.

At Sedai, we started with a closed control loop — to observe, decide, act, and learn — and designed the platform around capturing context. This was the only way learning worked safely in production.

Cost optimization was never the goal when we built Sedai; it was a byproduct. Sedai’s core has always been to understand application behavior in context over time and learn from the outcomes of every action taken.

That’s why we consistently drive significantly better results than traditional automated optimization platforms.

The industry may be catching up to the language of context graphs now. But for us, context has always been foundational. That’s how we’ve built Sedai from day one, and that’s how our autonomy will continue to evolve in the future.

LLMs are incredible at reasoning in the moment — but without context, they’re still making decisions in the dark.

If you're interested in learning about the autonomous platform we've built at Sedai, check out this page.

