Frequently Asked Questions

FinOps Maturity Model & Technical Blockers

What are the main stages of the FinOps maturity model?

The FinOps maturity model consists of three primary stages: Crawl, Walk, and Run. Each stage represents a different level of cloud financial operations maturity, with unique technical blockers and requirements for advancement. (Source: FinOps Foundation)

What technical blockers prevent teams from advancing through FinOps maturity stages?

Teams face specific technical blockers at each stage: the Attribution Gap at Crawl (incomplete workload-level cost attribution), the Trust Problem at Walk (lack of actionable, high-fidelity signals for safe optimization), and the Continuity Problem at Run (inability to continuously re-evaluate and maintain optimizations). (Source: Sedai Blog)

What is the Attribution Gap in the Crawl stage?

The Attribution Gap refers to the inability to accurately attribute cloud spend to the specific workloads, pods, or functions consuming resources. This is often due to incomplete tagging and lack of workload-level metrics, making prioritization and cost control difficult. (Source: Sedai Blog)

Why is tagging discipline difficult to enforce in cloud environments?

Tagging discipline is challenging because engineering teams often have different deployment patterns, leading to inconsistent tag coverage. This inconsistency results in approximate attribution and guesswork in cost management. (Source: Sedai Blog)
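
To quantify the gap before trying to close it, a tag-coverage audit is a reasonable first step. The sketch below is illustrative only: it assumes boto3 credentials, an EC2-only scope, and a hypothetical required tag set (team, service, environment); it is not part of Sedai's product.

```python
from collections import Counter

import boto3

# Hypothetical required tag keys; substitute your organization's own convention.
REQUIRED_TAGS = {"team", "service", "environment"}

ec2 = boto3.client("ec2", region_name="us-east-1")
missing = Counter()
total = 0

# Page through all instances and count how many lack each required tag.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            total += 1
            tag_keys = {t["Key"] for t in instance.get("Tags", [])}
            for key in REQUIRED_TAGS - tag_keys:
                missing[key] += 1

for key in sorted(REQUIRED_TAGS):
    covered = total - missing[key]
    print(f"{key}: {covered}/{total} instances tagged ({covered / max(total, 1):.0%})")
```

The same audit can be repeated per service (EKS node groups, Lambda functions, RDS instances) to show where coverage, and therefore attribution, breaks down.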

What is the Trust Problem in the Walk stage?

The Trust Problem arises when teams have data but cannot act safely on it. Average utilization metrics do not capture peak traffic, seasonality, or post-deployment behavior, making it risky to automate optimizations without human review and rollback plans. (Source: Sedai Blog)

Why do teams require human review for rightsizing actions?

Teams require human review for rightsizing actions because average-based recommendations may not account for traffic spikes, cold starts, or batch job bursts, which can lead to latency breaches or errors under peak load. (Source: Sedai Blog)
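
The difference between sizing to the mean and sizing to the tail is easy to demonstrate. The sketch below uses a synthetic utilization series (all numbers are invented for illustration) to show how far a mean-based CPU request can sit below the 99th percentile once a short burst is included.

```python
import random
import statistics

random.seed(7)

# Synthetic per-minute CPU utilization (millicores) for one day:
# a steady baseline plus a one-hour batch-job burst.
baseline = [random.gauss(220, 30) for _ in range(1380)]
burst = [random.gauss(900, 80) for _ in range(60)]
samples = baseline + burst

mean_cpu = statistics.mean(samples)
p99_cpu = statistics.quantiles(samples, n=100)[98]  # 99th percentile

HEADROOM = 1.2  # assumed 20% headroom over the sizing signal
print(f"mean {mean_cpu:.0f}m -> mean-based request {mean_cpu * HEADROOM:.0f}m")
print(f"p99  {p99_cpu:.0f}m -> p99-based request  {p99_cpu * HEADROOM:.0f}m")
# The mean-based request sits far below the burst peak, which is exactly the
# configuration that breaches latency SLOs under peak load.
```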

What is the Continuity Problem in the Run stage?

The Continuity Problem occurs when optimization is treated as a one-time project rather than an ongoing process. Changes in code, traffic, or dependencies can quickly invalidate previous optimizations, requiring continuous re-evaluation to maintain efficiency. (Source: Sedai Blog)

How common is it for cloud optimization efforts to achieve full value?

According to McKinsey, only 10% of cloud transformations achieve full value, largely due to the lack of continuous optimization and re-evaluation. (Source: McKinsey)

What distinguishes teams that stay optimized from those that do not?

Teams that stay optimized treat optimization as an ongoing operational function, continuously re-evaluating workload behavior and resource configuration after every deploy, rather than relying on periodic reviews or one-time projects. (Source: Sedai Blog)

How does Sedai help teams overcome the Attribution Gap?

Sedai provides application-aware observability that traces resource consumption to the workload level, enabling controller-level attribution without requiring perfect tagging. This allows teams to move from the Crawl to Walk stage. (Source: Sedai Blog)

How does Sedai address the Trust Problem in FinOps?

Sedai observes traffic patterns and p99 latency before acting, enabling incremental and reversible changes. This signal-aware decisioning allows teams to safely automate optimizations and move from Walk to Run. (Source: Sedai Blog)

How does Sedai enable continuous optimization for cloud environments?

Sedai continuously re-evaluates workload behavior after every deploy, keeping resource configurations aligned with current application needs. This approach helps teams move beyond episodic optimization and treat ongoing efficiency as a system property. (Source: Sedai Blog)

What is meant by 'optimization as a system property'?

'Optimization as a system property' means treating cloud optimization as an ongoing, automated operational function rather than a one-time event. This requires continuous observation, configuration adjustment, and outcome validation. (Source: Sedai Blog)
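
As a minimal sketch of what such a loop can look like in principle, the snippet below wires observation, adjustment, and validation together. The function bodies and thresholds are hypothetical placeholders, not Sedai's API.

```python
import time

def observe(workload):
    """Placeholder: fetch current p99 latency (ms) and peak CPU (millicores)."""
    return {"p99_latency_ms": 180.0, "peak_cpu_m": 450.0}

def recommend(behavior):
    """Placeholder: propose a CPU request from observed peak plus 20% headroom."""
    return round(behavior["peak_cpu_m"] * 1.2)

def apply_change(workload, request_m):
    print(f"{workload}: setting CPU request to {request_m}m")

def validate(workload, slo_ms=250.0):
    """Re-observe after the change; the change holds only if p99 stays within SLO."""
    return observe(workload)["p99_latency_ms"] <= slo_ms

def optimization_loop(workload, current_request_m, interval_s=3600):
    while True:
        proposed = recommend(observe(workload))
        if proposed != current_request_m:
            apply_change(workload, proposed)
            if validate(workload):
                current_request_m = proposed
            else:
                apply_change(workload, current_request_m)  # revert
        time.sleep(interval_s)
```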

How does Sedai close the loop between workload observation and optimization?

Sedai closes the loop by continuously observing workload behavior, making configuration changes based on real-time signals, and validating outcomes to ensure ongoing optimization and cost efficiency. (Source: Sedai Blog)

What are the solutions to the main blockers at each FinOps stage?

The solutions are: workload-level attribution for Crawl-to-Walk, signal-aware decisioning for Walk-to-Run, and continuous re-evaluation for Run-to-Optimize. Sedai provides all three capabilities. (Source: Sedai Blog)

How does Sedai's approach differ from periodic optimization reviews?

Sedai's approach is based on continuous, automated re-evaluation after every deploy, rather than relying on periodic, manual reviews that can quickly become outdated. (Source: Sedai Blog)

What is the role of application-aware observability in FinOps?

Application-aware observability enables teams to trace resource consumption to the workload level, providing accurate attribution and actionable insights for optimization. (Source: Sedai Blog)

How does Sedai support teams in moving beyond the FinOps maturity model?

Sedai enables teams to treat optimization as a continuous system property, removing blockers at each stage and ensuring ongoing cost efficiency, performance, and reliability. (Source: Sedai Blog)

Features & Capabilities

What features does Sedai offer for autonomous cloud optimization?

Sedai offers autonomous optimization, proactive issue resolution, full-stack cloud coverage, smart SLOs, release intelligence, plug-and-play implementation, multiple modes of operation (Datapilot, Copilot, Autopilot), enhanced productivity, and safety-by-design. (Source: Sedai Platform Overview, Solution Briefs)

How does Sedai's autonomous optimization work?

Sedai uses machine learning to optimize cloud resources for cost, performance, and availability without manual intervention, reducing cloud costs by up to 50% and improving performance. (Source: Solution Briefs)

What is Sedai's Release Intelligence feature?

Release Intelligence tracks changes in cost, latency, and errors for each deployment, improving release quality and minimizing risks during deployments. (Source: Solution Briefs)

What integrations does Sedai support?

Sedai integrates with monitoring and APM tools (CloudWatch, Prometheus, Datadog, Azure Monitor), Kubernetes autoscalers (HPA/VPA, Karpenter), IaC and CI/CD tools (GitLab, GitHub, Bitbucket, Terraform), ITSM tools (ServiceNow, Jira), notification tools (Slack, Microsoft Teams), and various runbook automation platforms. (Source: Sedai Technology Overview)

What security and compliance certifications does Sedai have?

Sedai is SOC 2 certified, demonstrating adherence to stringent security and compliance standards for data protection. (Source: Sedai Security Page)

How does Sedai ensure safe and auditable changes?

Sedai integrates with Infrastructure as Code (IaC), IT Service Management (ITSM), and compliance workflows, ensuring all changes are safe, validated, and auditable. (Source: Solution Briefs)

What modes of operation does Sedai offer?

Sedai offers three modes: Datapilot (observability), Copilot (one-click optimizations), and Autopilot (fully autonomous execution), providing flexibility for different operational needs. (Source: Solution Briefs)

How does Sedai's plug-and-play implementation work?

Sedai connects securely to cloud accounts using Identity and Access Management (IAM), requiring no complex installations or agents. Setup takes just 5 minutes for general use cases and up to 15 minutes for AWS Lambda. (Source: Sedai Get Started Page)

What technical documentation is available for Sedai?

Sedai provides detailed technical documentation, including setup guides, feature explanations, and troubleshooting resources, available at docs.sedai.io/get-started. (Source: Sedai Docs)

Use Cases & Business Impact

What problems does Sedai solve for cloud teams?

Sedai addresses cost inefficiencies, operational toil, performance and latency issues, lack of proactive issue resolution, complexity in multi-cloud environments, and misaligned priorities between engineering and FinOps teams. (Source: Solution Briefs, Buyer Personas)

What business impact can customers expect from using Sedai?

Customers can achieve up to 50% cloud cost savings, 75% latency reduction, 6X productivity gains, and 50% fewer failed customer interactions. Notable results include $3.5 million saved by Palo Alto Networks and 50% cost savings by KnowBe4. (Source: Solution Briefs, Case Studies)

Who can benefit from using Sedai?

Sedai is designed for platform engineers, IT/cloud operations, technology leaders, site reliability engineers (SREs), and FinOps professionals in organizations with significant cloud operations across industries such as cybersecurity, IT, financial services, healthcare, travel, and e-commerce. (Source: Buyer Personas, Case Studies)

What industries are represented in Sedai's customer base?

Sedai's customers span cybersecurity (Palo Alto Networks), IT (HP), financial services (Experian, Capital One), security awareness training (KnowBe4), travel (Expedia), healthcare (GSK), car rental (Avis), retail/e-commerce (Belcorp), SaaS (Freshworks), and digital commerce (Campspot). (Source: Case Studies)

Can you share specific customer success stories with Sedai?

Yes. KnowBe4 achieved 50% cost savings and saved $1.2 million on AWS. Palo Alto Networks saved $3.5 million and reduced Kubernetes costs by 46%. Belcorp reduced AWS Lambda latency by 77%. (Sources: KnowBe4 Case Study, Palo Alto Networks Case Study)

How easy is it to implement Sedai?

Sedai offers a plug-and-play implementation that takes just 5 minutes for most use cases and up to 15 minutes for AWS Lambda. The process is agentless and supported by comprehensive onboarding resources. (Source: Sedai Get Started Page)

What feedback have customers given about Sedai's ease of use?

Customers highlight Sedai's quick setup, agentless integration, personalized onboarding, detailed documentation, and risk-free 30-day trial as key factors in its ease of use. (Source: Sedai Pricing, Get Started Page)

What productivity gains can teams expect with Sedai?

Sedai automates routine tasks, delivering up to 6X productivity gains for engineering teams by reducing manual toil and freeing up time for high-value work. (Source: Solution Briefs)

How does Sedai help with cost optimization in the cloud?

Sedai autonomously rightsizes workloads, eliminates waste, and optimizes resource utilization, reducing cloud costs by up to 50%. (Source: Solution Briefs)

How does Sedai improve application performance?

Sedai reduces latency by up to 75%, as demonstrated by customers like Belcorp (77% reduction in AWS Lambda latency), enhancing user experience and application responsiveness. (Source: Solution Briefs, Case Studies)

How does Sedai proactively resolve issues before they impact users?

Sedai detects and resolves performance and availability issues before they affect users, reducing failed customer interactions by up to 50% and ensuring seamless operations. (Source: Solution Briefs)

What support resources are available for Sedai customers?

Sedai provides personalized onboarding, a dedicated Customer Success Manager for enterprise customers, detailed documentation, a community Slack channel, and email/phone support. (Source: Sedai Get Started Page)

Competition & Differentiation

How does Sedai differ from other cloud optimization tools?

Sedai offers 100% autonomous optimization, proactive issue resolution, application-aware intelligence, full-stack cloud coverage, release intelligence, and rapid plug-and-play implementation, setting it apart from competitors that rely on manual intervention or static rules. (Source: Solution Briefs)

What unique features give Sedai a competitive edge?

Sedai's unique features include autonomous optimization based on real application behavior, proactive issue resolution, application-aware intelligence, full-stack coverage, release intelligence, and a quick, agentless setup process. (Source: Solution Briefs)

How does Sedai address the needs of different user segments?

Sedai automates routine tasks for platform engineers, reduces ticket volume for IT/cloud ops, delivers measurable ROI for technology leaders, aligns engineering and cost efficiency for FinOps teams, and proactively resolves issues for SREs. (Source: Buyer Personas, Solution Briefs)

Why should a customer choose Sedai over other solutions?

Customers should choose Sedai for its autonomous optimization, proactive issue resolution, application-aware intelligence, full-stack coverage, safety-by-design, quick setup, and proven results such as significant cost savings and productivity gains. (Source: Solution Briefs)

The FinOps Maturity Model: What Actually Stalls Teams at Each Stage

Benjamin Thomas

CTO

April 7, 2026

Most teams know which FinOps maturity stage they're in. What they don't know is why they can't leave it.

The FinOps Foundation's crawl-walk-run model describes what each stage looks like. That definitional work is useful. But practitioners don't need a better description of the stages. They need to understand the specific technical blockers that prevent moving between them.

These blockers are technical, not cultural. Culture & buy-in matter, but they cannot overcome a tooling gap. Each transition stalls because of a precise engineering gap: wrong data at crawl, wrong signal at walk, & no continuity at run.

Crawl: The Attribution Gap

Billing exports and Cost Explorer allocate spend to accounts & services. They do not attribute spend to the pods, containers, or functions actually consuming resources.

Tag coverage is incomplete. Not because teams are careless, but because tagging discipline is hard to enforce across engineering teams with different deployment patterns. When tags are inconsistent, attribution is approximate. When attribution is approximate, prioritization is guesswork.

Namespace-level metrics are missing. A team can see that an EKS cluster costs $40,000 per month, but cannot determine which workloads or application teams are responsible for that spend. Without workload-level attribution, you cannot have a credible conversation with engineering about where to focus.

"Good enough" means: 80% of controllable cluster spend is traced to the controller (Deployment, StatefulSet, Job) level, refreshed daily. That threshold is where you stop guessing & start profiling. Reaching it requires tooling beyond billing exports: namespace- and workload-level metrics that billing APIs do not expose by default.

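A rough sketch of how that coverage number can be measured, assuming kubeconfig access, the official kubernetes Python client, and CPU requests as a proxy for controllable spend (this is an illustration, not Sedai's implementation):

```python
from collections import defaultdict

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

def owning_controller(pod):
    """Resolve a pod to its controller (Deployment, StatefulSet, DaemonSet, Job)."""
    for ref in pod.metadata.owner_references or []:
        if ref.kind == "ReplicaSet":
            # ReplicaSets created by a Deployment carry an ownerReference to it.
            rs = apps.read_namespaced_replica_set(ref.name, pod.metadata.namespace)
            for rs_ref in rs.metadata.owner_references or []:
                if rs_ref.kind == "Deployment":
                    return ("Deployment", rs_ref.name)
            return ("ReplicaSet", ref.name)
        if ref.kind in ("StatefulSet", "DaemonSet", "Job"):
            return (ref.kind, ref.name)
    return None  # bare pod: not attributable to a controller

def cpu_millicores(value):
    return float(value[:-1]) if value.endswith("m") else float(value) * 1000

by_controller = defaultdict(float)
attributed = unattributed = 0.0

for pod in core.list_pod_for_all_namespaces().items:
    requested = sum(
        cpu_millicores(c.resources.requests["cpu"])
        for c in pod.spec.containers
        if c.resources and c.resources.requests and "cpu" in c.resources.requests
    )
    owner = owning_controller(pod)
    if owner:
        by_controller[(pod.metadata.namespace, *owner)] += requested
        attributed += requested
    else:
        unattributed += requested

coverage = attributed / ((attributed + unattributed) or 1)
print(f"controller-level attribution coverage: {coverage:.0%}")  # target: >= 80%
for key, m in sorted(by_controller.items(), key=lambda kv: -kv[1])[:10]:
    print(key, f"{m:.0f}m CPU requested")
```
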
Walk: The Trust Problem

Teams at walk have data. They cannot act safely on it.

The blocker: wrong signal. Average CPU and memory utilization tells you what a workload consumes under normal conditions. It does not tell you what happens at the 95th or 99th percentile of traffic. It does not account for seasonality, batch job bursts, or the behavior of a service hours after a new deploy.

A cold start on a fresh deploy or a batch job spike ten minutes after a config change can cascade into latency breaches or error spikes that a month of average-based optimization never saw coming. A configuration that looks safe at mean utilization can push latency past SLO thresholds under peak load. That's why every rightsizing action still requires human review, a change window, & a rollback plan.

The approval loop is the symptom. The blocker is the absence of a signal.

For that to change, the system making the recommendation needs to observe application behavior before acting. Not average utilization. Traffic patterns, seasonal variance, & p99 latency. As covered in The Hard Truth: FinOps Inform Doesn't Pay the Bills, a recommendation is not a result. The execution gap is what keeps teams at walk.

Run: The Continuity Problem

Teams at run execute optimization. The blocker: they execute it once, then move on.

Rightsizing becomes a project, not a practice. A team runs an exercise, captures savings, & closes the ticket. A single Friday afternoon deploy (a code change, a replica addition, a new data dependency) invalidates weeks of manual work.

The half-life of a one-time rightsizing exercise is measured in weeks, not months. Three months later, the application has been redeployed multiple times, traffic patterns have shifted, & the configuration that was right-sized is now wrong-sized again. McKinsey found that only 10% of cloud transformations achieve full value. Durable optimization & continuous re-evaluation are the core challenges separating leaders from laggards in this space.

Continuous re-evaluation requires a different technical capability than periodic review. It requires the system to observe workload behavior after every deploy & reassess resource configuration against current behavior, not the snapshot from the last optimization pass. Most teams at run have tooling that supports review-and-act. What they lack is tooling that observes & re-evaluates continuously.
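
A minimal sketch of the trigger side, assuming the official kubernetes Python client: watch Deployment updates and queue a re-evaluation whenever a new rollout appears. The evaluate() call is a hypothetical placeholder for whatever rightsizing logic already exists; a production version would also wait for the rollout to finish and for post-deploy behavior to stabilize.

```python
from kubernetes import client, config, watch

config.load_kube_config()
apps = client.AppsV1Api()

def evaluate(namespace, name):
    # Placeholder: re-run rightsizing against post-deploy traffic and p99 latency.
    print(f"re-evaluating {namespace}/{name} against current behavior")

seen_generation = {}

# Stream Deployment events; a bumped metadata.generation signals a new rollout,
# which is the moment the last optimization pass may have gone stale.
for event in watch.Watch().stream(apps.list_deployment_for_all_namespaces):
    dep = event["object"]
    key = (dep.metadata.namespace, dep.metadata.name)
    generation = dep.metadata.generation
    if seen_generation.get(key) not in (None, generation):
        evaluate(*key)
    seen_generation[key] = generation
```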

For a broader look at what optimization tactics look like at this stage, Top 17 FinOps Cloud Optimization Strategies for 2026 covers the full range. The pattern across all of them is the same: episodic optimization fails because the system it optimized for has changed.

The difference between a team that optimizes & a team that stays optimized is this: the first treats optimization as an event. The second has made it a part of the operating system.

Understand the FinOps Maturity Model

See how Sedai explains FinOps maturity models in 2026 for growth, control & cost efficiency.

Beyond the FinOps Maturity Model: Optimization as a System Property

What distinguishes teams that reach "optimized" is not better tooling selection at the start. It is treating optimization as an ongoing operational function. Cloud spend is not a state to be achieved. It is a variable that shifts with every change in workload behavior.

Closing the loop between workload observation, configuration change, & outcome validation requires a system that acts continuously. That loop cannot run on human approval cycles at scale. Not every single time, at least. Beyond Recommendations: The Case for Autonomous Cloud Optimization makes this case in full.

Each blocker has a technical solution. 

  • At crawl-to-walk, the answer is workload-level attribution using application-aware observability, connecting resource consumption to the namespace, service, & workload generating it. 
  • At walk-to-run, the blocker is the wrong signal. The solution is observing application behavior (traffic patterns, seasonality, & p99 latency) before acting. Safety verification moves from post-deployment (rollback) to pre-deployment (signal-driven decision).
  • At run-to-optimize, the blocker is episodic action. The solution is continuous re-evaluation triggered by behavioral shifts, not calendar cycles. Re-evaluation after every deploy keeps configuration aligned to current workload behavior, not the snapshot from yesterday's optimization pass.

Sedai: Removing Each Blocker

Sedai removes each blocker through workload-level attribution, signal-aware decisioning, & continuous re-evaluation.

  • Crawl→Walk: Application-aware observability traces resource consumption to workloads, enabling controller-level attribution without perfect tagging.
  • Walk→Run: Observation of traffic patterns & p99 latency before acting, with incremental reversible changes.
  • Run→Optimize: Continuous re-evaluation after every deploy, keeping configuration aligned to current behavior, not yesterday's snapshot.