In this article we'll explore how to fully realize the potential of Karpenter's unique real-time node selection by using AI to continuously determine and apply the optimal Karpenter configuration.
Karpenter is an open-source node provisioning project built for Kubernetes and sponsored by AWS. Karpenter improves the efficiency and cost of running workloads on Kubernetes clusters by provisioning nodes that meet the requirements of the pods (e.g., 10 CPUs, 20GB memory) that the Kubernetes scheduler has marked as unschedulable. Karpenter removes the requirement to specify node types (e.g., m5.large) and node groups, although users can provide criteria to choose specific instances (e.g., use spot for certain workloads). It is the most common alternative to the Kubernetes Cluster Autoscaler for AWS users. See below for an overview of how Karpenter works:
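In practice, such instance criteria are declared in Karpenter's Provisioner custom resource (discussed further below). The sketch that follows, written as a Python dictionary for readability, assumes the v1alpha5 Provisioner CRD; the specific instance types, CPU limit and AWSNodeTemplate name are illustrative, not a recommendation.

```python
# Illustrative only: a Karpenter Provisioner (v1alpha5 CRD) that restricts
# provisioning to spot capacity and a few instance families. The concrete
# values (instance types, CPU limit, template name) are assumptions for
# this sketch.
provisioner = {
    "apiVersion": "karpenter.sh/v1alpha5",
    "kind": "Provisioner",
    "metadata": {"name": "default"},
    "spec": {
        "requirements": [
            {
                # Prefer spot capacity for these workloads.
                "key": "karpenter.sh/capacity-type",
                "operator": "In",
                "values": ["spot"],
            },
            {
                # Limit Karpenter's choice to a set of instance types.
                "key": "node.kubernetes.io/instance-type",
                "operator": "In",
                "values": ["m5.large", "m5.xlarge", "c5.xlarge"],
            },
        ],
        # Cap total provisioned capacity so a runaway workload cannot
        # grow the cluster without bound.
        "limits": {"resources": {"cpu": "1000"}},
        # Remove empty nodes after 30 seconds to reduce cost.
        "ttlSecondsAfterEmpty": 30,
        # Cloud-specific settings (subnets, AMIs) live in an AWSNodeTemplate.
        "providerRef": {"name": "default"},
    },
}
```

A manifest like this is normally applied with kubectl or through the Kubernetes API, after which Karpenter launches only nodes that satisfy these constraints for the pods it needs to place.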
Sedai runs your applications at their best performance, cost and availability. Sedai continuously monitors and learns your application to make informed decisions based on application-infrastructure affinity. It builds a clear understanding of your application: whether it is stateful or stateless, how CPU- and memory-intensive it is, how it responds to horizontal or vertical scaling, and the ideal and anomalous behavior of each of its dependencies. While Karpenter supports your Kubernetes cluster scheduler, Sedai focuses on cost, performance and availability from a holistic application and infrastructure perspective.
The combination of Karpenter and Sedai can provide superior overall performance, availability and cost. Karpenter's real-time scaling of nodes (adding nodes and picking node types) is a unique function not performed by the Sedai platform, and it can be made more powerful by leveraging Sedai's granular understanding of application needs and its traffic prediction capability. Three use cases for Karpenter with Sedai are:

1. Optimize compute, storage and data
2. Choose copilot or autopilot execution
3. Continuously improve with reinforcement learning
Initial setup of Sedai with Karpenter can be achieved by installing Karpenter through Sedai's agent: when the agent is connected to an AWS account, Karpenter is added via an additional installation step. Alternatively, Karpenter can be added later. Karpenter's compute provisioning for Kubernetes clusters is configured by a custom resource called Provisioner. Ongoing joint operation of Karpenter with Sedai would then be achieved by Sedai sending configuration changes to the Provisioner.
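As a minimal sketch of what that integration point could look like, assuming the v1alpha5 Provisioner API and the official Kubernetes Python client (the resource name and patch values are illustrative; Sedai's actual mechanism may differ), a configuration change can be pushed to the Provisioner like this:

```python
# Sketch: push a configuration change to Karpenter's Provisioner via the
# Kubernetes API, e.g. widening the allowed instance types ahead of a
# predicted traffic spike. Names and values are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# Applied as a merge patch, this replaces the entire requirements list.
patch = {
    "spec": {
        "requirements": [
            {
                "key": "node.kubernetes.io/instance-type",
                "operator": "In",
                "values": ["m5.xlarge", "m5.2xlarge", "c5.2xlarge"],
            }
        ]
    }
}

# Provisioner is a cluster-scoped custom resource in the karpenter.sh group.
api.patch_cluster_custom_object(
    group="karpenter.sh",
    version="v1alpha5",
    plural="provisioners",
    name="default",
    body=patch,
)
```

Patching the existing Provisioner, rather than recreating it, keeps the change incremental: only nodes launched after the update are affected, so Karpenter's real-time provisioning can act on the new constraints immediately.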
The diagram below shows Karpenter integrated into an example architecture.
Please reach out to me or the Sedai team for more details. We look forward to working with you to achieve the best possible cost, performance and availability for your Kubernetes applications.