Supercharging Karpenter with AI: How Sedai Takes Kubernetes Scaling to the Next Level

Last updated

August 19, 2024


In this article we'll explore how to fully realize the potential of Karpenter’s unique real-time node selection by using AI to continuously tune Karpenter’s configuration.

Karpenter Overview

Karpenter is an open-source node provisioning project built for Kubernetes and sponsored by AWS. It improves the efficiency and reduces the cost of running workloads on Kubernetes clusters by provisioning nodes that meet the requirements of the pods (e.g., 10 CPUs, 20GB of memory) that the Kubernetes scheduler has marked as unschedulable. Karpenter removes the requirement to specify node types (e.g., m5.large) and node groups, although users can provide criteria to choose specific instances (e.g., use spot for certain workloads). It is the most common alternative to the default Kubernetes Cluster Autoscaler for AWS users. See below for an overview of how Karpenter works:

[Figure: How Karpenter works]
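To make this concrete, here is a minimal sketch of a Karpenter Provisioner manifest (using the v1alpha5 API; the name, limits, and requirement values are illustrative placeholders, not recommendations):

```yaml
# Illustrative Provisioner: constrains provisioning to on-demand amd64 nodes
# and caps total provisioned CPU. All values here are example placeholders.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]
  limits:
    resources:
      cpu: "1000"          # cap on total CPU across provisioned nodes
  ttlSecondsAfterEmpty: 30 # deprovision empty nodes after 30 seconds
```

Within these constraints, Karpenter is free to pick whichever instance type most cheaply satisfies the pending pods' aggregate resource requests.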

How Karpenter Complements Sedai

Sedai runs your applications at their best performance, cost and availability. Sedai continuously monitors and learns your application to make informed decisions based on application-infrastructure affinity. Sedai builds a clear understanding of your application: its stateful or stateless nature, its CPU and memory intensity, its response to horizontal or vertical scaling, and the ideal and anomalous behavior of each of its dependencies. While Karpenter supports your Kubernetes cluster scheduler, Sedai focuses on cost, performance and availability from a holistic application and infrastructure perspective.

Combined Use Cases

The combination of Karpenter and Sedai can provide superior overall performance, availability and cost. Karpenter’s real-time scaling of nodes (adding nodes and picking node types) is a unique function not performed by the Sedai platform. Karpenter’s real-time scaling can be made more powerful by leveraging Sedai’s granular understanding of application needs and its traffic prediction capability. Three use cases for Karpenter with Sedai are:

  • Optimizing node types: Sedai’s understanding of application needs (e.g., each application’s performance vs. cost sensitivity and its ideal memory/CPU ratio) can inform Karpenter’s node choice. Normally, Karpenter approximates between lowest-price and capacity-optimized allocation strategies when selecting a node to provision.
  • Predictive scaling: Normally, Karpenter can only react once unschedulable pods exist. Sedai’s predictive scaling operates at the workload/application level, allowing Sedai to predict the need for additional pods in advance and reducing the risk of performance deterioration during periods of rising traffic.
  • Optimizing spot vs. on-demand purchasing decisions: The choice between spot and on-demand instances must be manually configured in Karpenter. Sedai can recommend the use of spot instances, providing a “spot friendly” rating based on resource attributes (e.g., a workload is more likely to be spot friendly if it runs during a set time of day (say 12-1am), takes a significant time (e.g., >2 minutes) to execute, and can be restarted rapidly without errors). You can manually accept these recommendations, which Sedai would implement through Karpenter. After sufficient production experience, spot choice could be switched to autonomous mode.
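To illustrate the third use case, here is a minimal sketch of how a “spot friendly” rating could be translated into Karpenter capacity-type requirements. The rating scale, threshold, and function names are hypothetical, invented for this example; they are not part of Sedai’s or Karpenter’s actual APIs.

```python
# Hypothetical sketch: map a "spot friendly" rating (0.0-1.0, invented scale)
# to the capacity types a Karpenter Provisioner should allow.

def capacity_types_for(spot_friendly_rating: float, threshold: float = 0.7) -> list[str]:
    """Return the capacity types to allow for a workload."""
    if spot_friendly_rating >= threshold:
        # Allow spot, keeping on-demand as a fallback for interruptions.
        return ["spot", "on-demand"]
    return ["on-demand"]


def provisioner_patch(rating: float) -> dict:
    """Build a patch body updating the Provisioner's capacity-type requirement."""
    return {
        "spec": {
            "requirements": [
                {
                    "key": "karpenter.sh/capacity-type",
                    "operator": "In",
                    "values": capacity_types_for(rating),
                }
            ]
        }
    }
```

A controller acting on an accepted recommendation could apply `provisioner_patch(rating)` to the Provisioner resource; a workload rated 0.9 would be allowed onto spot capacity, while one rated 0.3 would stay on-demand only.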

Integration Approach

Initial setup of Sedai with Karpenter could be achieved by installing Karpenter through Sedai’s agent, as an additional installation step when the agent is connected to an AWS account. Alternatively, Karpenter can be added later. Karpenter’s compute provisioning for Kubernetes clusters is configured by a custom resource called a Provisioner. Ongoing joint operation of Karpenter with Sedai would be achieved by Sedai sending configuration changes to the Provisioner.
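As a sketch of what such a configuration change might look like, the fragment below narrows a Provisioner’s requirements toward memory-optimized instance families, as might be recommended after learning a workload’s memory/CPU profile. The instance families and values are illustrative assumptions, not actual Sedai output:

```yaml
# Hypothetical Provisioner update: restrict provisioning to memory-optimized
# families based on a learned memory-heavy workload profile. Values are examples.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values: ["r5", "r6i"]   # memory-optimized, per the learned profile
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
```

Because Karpenter watches its Provisioner resources, applying a change like this takes effect on the next provisioning decision without restarting anything.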

The diagram below shows Karpenter integrated into an example architecture.

Please reach out to me or the Sedai team for more details. We look forward to working with you to achieve the best possible cost, performance and availability for your Kubernetes applications.

