Frequently Asked Questions

Kubernetes Optimization & Case Study Details

How much did the major Kubernetes user save using Sedai's autonomous optimization?

The major Kubernetes user saved $500,000 in cloud costs within 60 days by leveraging Sedai's autonomous optimization. This represented a 25% cost reduction, primarily focused on their Development/Test environments. [Source]

What environments were optimized in the case study?

The optimization project focused on Development, User Acceptance Testing (UAT), and Staging environments within Kubernetes. These non-production environments were targeted for cost savings without impacting production performance. [Source]

What challenges did the company face with Kubernetes optimization?

The company faced several challenges, including the complexity of managing numerous Kubernetes services, rapid business growth, quick CI/CD release cycles, high performance demands, and expectations for cloud cost efficiency. Additionally, the microservice architecture made it difficult to justify manual optimization due to the small spend per service. [Source]

Why was autonomous optimization chosen over manual optimization?

Autonomous optimization was chosen because manual optimization would have required significant engineering time, with the opportunity cost outweighing the potential savings for many services. Sedai's autonomous approach aggregated small improvements across many services, resulting in substantial overall savings. [Source]

How did Sedai implement autonomous optimization in the customer's environment?

The customer used a 'bring your own cloud' deployment model, running Sedai inside their Google Kubernetes Engine (GKE) environments. Sedai was granted permissions to map account topology and analyze workload behavior, which is foundational for AI-based optimization. [Source]

What was the main optimization goal in this project?

The main goal was to achieve cost optimization for Kubernetes resources while maintaining current performance levels. Sedai was instructed to find the lowest cost configurations without sacrificing service quality. [Source]

How many Kubernetes services did Sedai assess during the optimization?

Sedai assessed 1,400 Kubernetes services in the customer's environment, making recommendations for resource optimization at both the workload and cluster levels. [Source]

What types of recommendations did Sedai provide for Kubernetes optimization?

Sedai provided recommendations at two levels: workload level (alternative CPU, memory, and pod count) and Kubernetes cluster level (number and type of instances, node groupings). These recommendations were reviewed and implemented weekly by the customer. [Source]

What is 'rightsizing' in the context of Kubernetes optimization?

Rightsizing refers to adjusting resource allocations (CPU, memory, pod count) to match actual workload requirements, avoiding overprovisioning and reducing costs. In this case study, rightsizing was the highest impact tactic in dev/test environments. [Source]

How were Sedai's optimization recommendations implemented?

In the initial phase, Sedai's recommendations were manually reviewed and implemented weekly by the customer. This allowed for careful validation and stability checks before moving toward more automated operations. [Source]

What were the average savings per service in the case study?

The average savings per service were less than $400 per year. While individual savings were small, Sedai's autonomous approach aggregated these across 1,400+ services for substantial total savings. [Source]

Why was it previously uneconomic to pursue these savings manually?

Because each service had a relatively small annual spend, the engineering effort required for manual optimization would have outweighed the benefits. Sedai's autonomous system made it feasible to capture these savings at scale. [Source]

What future directions are planned for Kubernetes optimization with Sedai?

The customer plans to expand optimization to additional dev/test environments, use advanced Sedai features (e.g., for ML workloads), move toward more automated and autonomous operations, and extend optimization to production environments. [Source]

How does Sedai address the risk of human error in Kubernetes optimization?

Sedai's autonomous approach reduces the risk of human error by automating repetitive and complex optimization tasks, which also helps alleviate engineering stress and burnout. [Source]

What is Sedai's 'non-production mode' and how was it used?

Sedai's non-production mode allows for more aggressive optimization in dev/test environments, where the consequences of performance or availability issues are less critical. This enabled greater cost savings without impacting production. [Source]

How does Sedai help with optimizing Kubernetes microservices?

Sedai analyzes workload behavior and recommends rightsizing actions for each microservice, even when individual savings are small. By automating this process, Sedai aggregates small improvements into significant overall savings. [Source]

What additional cost optimization strategies are planned beyond rightsizing?

Future strategies include purchasing optimizations (spot instances, savings plans, reserved instances) and scheduled shutdowns for Kubernetes environments, tailored to global operations and team schedules. [Source]

How does Sedai's approach benefit engineering teams?

Sedai's autonomous optimization reduces manual toil, lowers stress and burnout, and allows engineers to focus on higher-value tasks instead of repetitive optimization work. [Source]

What is the significance of using AI-based autonomous optimization for Kubernetes?

AI-based autonomous optimization enables organizations to uncover savings across a large number of microservices, which would be impractical to achieve manually. It ensures cost efficiency, performance, and reliability at scale. [Source]

Features & Capabilities

What features does Sedai offer for cloud optimization?

Sedai offers autonomous cloud cost optimization, performance improvement, availability enhancement, Smart SLOs, release intelligence, full-stack cloud coverage, and proactive issue resolution. These features help organizations reduce costs, improve performance, and ensure reliability. [Source]

Does Sedai support Kubernetes optimization?

Yes, Sedai provides dedicated Kubernetes optimization capabilities, including rightsizing, workload and cluster-level recommendations, and support for both production and non-production environments. [Source]

What integrations does Sedai offer?

Sedai integrates with major container platforms such as Amazon EKS, Azure AKS, Google GKE, Kubernetes, Amazon ECS, AWS Fargate, Red Hat OpenShift, Rancher, IBM Cloud, Alibaba Container Service, DigitalOcean, VMware Tanzu, Oracle, and Platform9. It also covers VMs, serverless, storage, and data/streaming workloads. [Source]

What is Sedai's 'Smart SLOs' feature?

Smart SLOs automatically set and monitor Service Level Objectives based on past performance, alerting for breaches and ensuring reliability and uptime without manual intervention. [Source]

How does Sedai's release intelligence improve software deployments?

Sedai's release intelligence tracks changes in cost, latency, and errors for each deployment, ensuring smoother releases and minimizing errors. This has helped companies like Freshworks improve their software release processes. [Source]

Use Cases & Business Impact

What business impact can customers expect from using Sedai?

Customers can expect significant cost savings (e.g., KnowBe4 achieved 50% savings, Palo Alto Networks saved $3.5 million), improved application performance (e.g., Belcorp reduced AWS Lambda latency by 77%), higher availability, increased operational efficiency, and a calculated ROI of 762% with a payback period of just 3 months. [Source]

Who can benefit from Sedai's platform?

Sedai is designed for cloud engineers, DevOps teams, IT managers, site reliability engineers (SREs), finance teams, enterprises, mid-sized businesses, and startups seeking to optimize cloud costs, performance, and operational efficiency. [Source]

What industries are represented in Sedai's case studies?

Industries include cybersecurity (Palo Alto Networks), information technology (HP), information services (Experian), security awareness training (KnowBe4), beauty and cosmetics (Belcorp), recreational services (Campspot), background screening (Inflection), and customer engagement software (Freshworks). [Source]

Can you share specific customer success stories with Sedai?

Yes. KnowBe4 achieved up to 50% cost savings, Palo Alto Networks saved $3.5 million, Belcorp reduced AWS Lambda latency by 77%, and Freshworks improved release quality and user satisfaction. [Source]

Technical Requirements & Implementation

How long does it take to implement Sedai?

Sedai's plug-and-play implementation takes just 5 minutes for general use and 15 minutes for specific use cases like AWS Lambda. The platform offers agentless integration and comprehensive onboarding support. [Source]

What support resources are available for onboarding?

Sedai provides onboarding calls, detailed documentation, and a Slack community for real-time support. Enterprise customers receive a dedicated Customer Success Manager. [Source]

Is technical documentation available for Sedai?

Yes, comprehensive technical documentation is available at docs.sedai.io/get-started, including detailed guides and instructions for onboarding and platform capabilities. [Source]

Security & Compliance

What security and compliance certifications does Sedai have?

Sedai is SOC 2 certified, demonstrating adherence to stringent security requirements and industry standards for data protection and compliance. [Source]

Competition & Differentiation

How does Sedai differ from other cloud optimization platforms?

Sedai offers 100% autonomous optimization, proactive issue resolution, full-stack cloud coverage, enterprise-grade governance, and proven ROI. Unlike traditional tools, Sedai optimizes based on real application behavior and automates operational tasks, delivering rapid value and cost savings. [Source]

What unique features set Sedai apart from competitors?

Unique features include Smart SLOs, release intelligence, plug-and-play implementation (5-minute setup), and agentless integration. Sedai's AI-driven platform delivers a calculated ROI of 762% and a payback period of just 3 months. [Source]

Why should a customer choose Sedai over alternatives?

Customers should choose Sedai for its autonomous optimization, proactive issue resolution, comprehensive cloud coverage, rapid ROI, and proven customer success stories. Sedai balances cost efficiency, performance, and reliability for businesses of all sizes. [Source]


Rightsizing Kubernetes Dev/Test Environments: Saving $500K/yr in 60 Days


Hari Chandrasekhar

Content Writer

April 30, 2024


Introduction

A major Kubernetes user saved $500,000 in cloud costs within 60 days using Sedai's autonomous optimization. This represented a 25% cost reduction. In this initial phase of our Kubernetes cost optimization project, we focused on optimizing their Development/Test environments, which include development, user acceptance testing (UAT), and staging.

Why did they go Autonomous?

Kubernetes Optimization Challenges

The company was seeing a significant strain on their engineering teams due to a combination of factors:

  • the complexity of managing numerous Kubernetes services
  • rapid growth in the business, including expanding functionality and growing end-user traffic
  • quick release cycles through their CI/CD pipeline
  • demands for high performance given the real-time nature of their services
  • expectations for cloud cost efficiency

Streamlining operations and optimizing cost and performance without consuming large amounts of engineering time would therefore be beneficial.

In Dev/Test specifically, they faced the challenge of implementing configurations separate from production:

  • Traffic is lower, so the dev configurations should use fewer cloud resources
  • The consequences of performance or availability issues are less severe, so the environments can operate with a smaller buffer

In addition, the microservice architecture running on Kubernetes made reducing resource usage and cost challenging. Each service had a relatively small spend, often just a few thousand dollars a year. Under the traditional manual optimization model, the savings would not justify the opportunity cost of diverting engineers to optimization tasks.

The company also wanted to avoid the complexity of running Kubernetes leading to high stress and burnout, which posed a long-term risk to talent retention. There was also a safety risk: human error becomes more likely when engineers are tasked with large volumes of optimization changes.

Implementing Autonomous Optimization

The customer adopted Sedai using a "bring your own cloud" deployment model to meet their security and access requirements. In this model, Sedai would run inside their Google Kubernetes Engine (GKE) environments.

Sedai was granted permissions to map the account topology and understand workload behavior patterns. Understanding how workloads behave is foundational to an AI-based autonomous approach to Kubernetes cost optimization.
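The topology-mapping access described above amounts to read-only visibility into workloads and metrics. The following is a hypothetical sketch of the kind of ClusterRole such an agent might be granted — it is not Sedai's actual permission set, and the role name is illustrative:

```yaml
# Hypothetical sketch — a read-only ClusterRole of the kind an optimization
# agent might use to map account topology and observe workload behavior.
# Not Sedai's actual permission set.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: optimizer-readonly
rules:
  # Discover workloads and cluster topology
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "nodes", "namespaces", "deployments", "replicasets", "statefulsets", "jobs"]
    verbs: ["get", "list", "watch"]
  # Read resource usage from the metrics API
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
```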

Optimization goals were then set up. The focus was cost optimization rather than improving Kubernetes application performance: Sedai was instructed to find the lowest cost for these Kubernetes resources while maintaining current performance.

Once Sedai had access to the Kubernetes metrics, it was able to quickly assess 1,400 Kubernetes services.  Sedai then made recommendations for how to optimize resource consumption.

Rightsizing was the highest impact tactic in the dev/test environments. We used Sedai's non-production mode, which permits a more aggressive optimization approach. Sedai then recommended the best configurations for Kubernetes resources. Recommendations were made at two levels:

  • workload level, where Sedai recommended alternative CPU, memory and pod count
  • Kubernetes cluster level, where Sedai recommended the number and type of instances and node groupings
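As a concrete illustration of what a workload-level recommendation looks like when applied, here is a hypothetical Deployment fragment — the service name and resource values are invented for illustration and are not taken from the case study:

```yaml
# Hypothetical example — names and values are illustrative, not from the case study.
# A dev/test workload after rightsizing, with the pre-rightsizing values in comments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service      # hypothetical service name
spec:
  replicas: 2                # reduced from 3 (pod count recommendation)
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: example/app:latest
          resources:
            requests:
              cpu: 250m      # reduced from "1" — observed dev usage is far lower
              memory: 512Mi  # reduced from 2Gi
            limits:
              cpu: 500m      # reduced from "2"
              memory: 1Gi    # reduced from 4Gi
```

Lowering requests like this is what frees up node capacity, which in turn enables the cluster-level recommendations (fewer or smaller instances).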

Often an individual service would go through multiple cycles of changes to make sure it stayed stable.

These Kubernetes rightsizing suggestions were manually reviewed and implemented weekly by the customer in the initial phase.

Outcomes Achieved

Optimization efforts driven by rightsizing resulted in $500,000 of cloud spend savings on an annual run-rate basis. These extended across over 1,400 Kubernetes resources. Below is the latest view of the Sedai dashboard for the account, showing the total savings and some of the individual services (there are over 100 pages listing all the services!).

[Screenshot: Sedai dashboard showing total savings across the optimized Kubernetes services]

Sedai determined that many environments were oversized, often replicating production configurations. This was unnecessary given the lower traffic and resource needs of these environments.

The average savings per service were surprisingly small (under $400/year). This confirmed the customer's view that it would not previously have been economic to pursue these savings. The screenshots below show some of the optimization details for one of the services; in this case, an overall saving of around $4,000/year after five optimization actions were taken. Some of the gains were small, with the optimization shown in the middle panel providing just $398/year of savings by adjusting Kubernetes requests and limits:

[Screenshot: optimization details for a single service, including the $398/year requests and limits adjustment]

Previously, the cost of allocating engineering resources would have outweighed the benefits for many services. Sedai's autonomous approach aggregated these small improvements into substantial overall Kubernetes cost savings.

Future Direction of Kubernetes Optimization

The customer is planning to expand their optimization efforts further:

  • Adding the rest of the dev/test environments (we are currently setting up Sedai to optimize another cluster)
  • Using more advanced Sedai features (e.g., unique settings for ML-based workloads)
  • Moving from manual to automated to autonomous operations
  • Extending optimization to production environments

As well as engineering optimizations, we will also look holistically at their Kubernetes infrastructure costs, including purchasing optimizations such as spot instances, savings plans, and reserved instances. We'll also look at scheduled shutdowns for Kubernetes environments; these shutdowns would need to be compatible with the company's global operations and variable team working hours.
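A scheduled shutdown of the kind described above is often implemented with an in-cluster CronJob. The sketch below is a hypothetical example, not the mechanism planned for this customer; the namespace, service account name, and schedule are illustrative, and it assumes the service account has RBAC permission to scale deployments in that namespace:

```yaml
# Hypothetical sketch — scale a dev namespace to zero outside working hours.
# Assumes a ServiceAccount "scheduler" with permission to scale deployments.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dev-nightly-shutdown
  namespace: dev
spec:
  schedule: "0 20 * * 1-5"   # 20:00 on weekdays; must fit the teams' time zones
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scheduler
          restartPolicy: OnFailure
          containers:
            - name: scale-down
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - scale
                - deployment
                - --all
                - --replicas=0
                - -n
                - dev
```

A matching morning CronJob would scale workloads back up; since scaling to zero discards the original replica counts, those would need to be recorded first (for example, in an annotation on each Deployment).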

Conclusion

The use of AI-based autonomous optimization to uncover Kubernetes savings in dev/test environments has been an effective way to reduce costs. We were able to overcome the challenge of optimizing allocated resources across a large number of Kubernetes microservices. This first experience in dev/test has been a significant step in the journey to full autonomy.