Frequently Asked Questions

Serverless vs. Kubernetes: Technical & Architectural Considerations

What are the main differences between serverless architectures like AWS Lambda and Kubernetes?

Serverless architectures such as AWS Lambda abstract away infrastructure management, allowing developers to focus on business logic and application code. Kubernetes, on the other hand, provides a container orchestration platform that offers more control and flexibility but requires dedicated engineering resources to manage clusters, state, and scaling. Serverless is generally simpler for stateless applications, while Kubernetes is better suited for complex, stateful, or high-throughput workloads.

Should all applications be built using serverless technologies?

Not all applications are best suited for serverless. While serverless (like AWS Lambda) offers simplicity and autonomy for stateless applications, some workloads may require the flexibility, control, or performance characteristics provided by Kubernetes. The choice depends on the application's requirements, engineering investment, and desired outcomes.

What are the main challenges of managing state in Kubernetes?

Managing state in Kubernetes introduces complexity, as it requires careful orchestration of persistent storage and database connections. Options include managing state outside Kubernetes (maximum choice, maximum effort), using cloud data services (less effort, less choice), or leveraging Kubernetes-native constructs like StatefulSets and DaemonSets. Each approach has trade-offs in terms of operational burden, flexibility, and automation.

Why do some organizations choose Kubernetes over serverless?

Organizations may choose Kubernetes for workloads that require high throughput, low latency, or specific engineering controls. Existing investments in Kubernetes expertise and infrastructure, as well as the need for custom orchestration or hybrid environments, can also influence this decision. However, Kubernetes introduces operational complexity and requires dedicated engineering resources.

What are the benefits of using serverless for stateless applications?

Serverless architectures like AWS Lambda simplify infrastructure management, reduce operational burden, and enable greater autonomy. For stateless applications, serverless can streamline deployment, scale automatically, and allow teams to focus on business logic rather than infrastructure, as highlighted by multiple panelists in the Sedai discussion.

How do panelists view vendor lock-in with serverless platforms?

Panelists generally agree that vendor lock-in is an acceptable trade-off for the benefits provided by serverless platforms like AWS Lambda. The value of offloading infrastructure management and focusing on core business logic often outweighs concerns about being tied to a specific cloud provider.

What factors should influence the choice between serverless and Kubernetes?

The choice depends on the application's requirements, such as the need for high throughput, low latency, engineering investment, and the ability to abstract complexity from developers. Both serverless and Kubernetes can achieve autonomy, but the best fit depends on the specific use case and organizational goals.

How does operational burden compare between Kubernetes and serverless?

Serverless architectures like AWS Lambda significantly reduce operational burden by offloading infrastructure management to the cloud provider. Kubernetes requires dedicated engineering resources to manage clusters, state, and scaling, which increases operational complexity. For startups and ventures focused on product-market fit, minimizing operational burden is often prioritized over cloud cost.

What is the consensus among panelists regarding serverless vs. Kubernetes for startups?

The consensus is that serverless technologies like AWS Lambda provide significant value for startups by reducing operational burden and allowing teams to focus on product development. As businesses scale and complexity increases, Kubernetes may become more viable due to the need for dedicated teams and custom workload management.

How do panelists recommend approaching the transition from Kubernetes to serverless?

Panelists recommend evaluating the specific needs of your applications and considering the trade-offs in operational complexity, engineering investment, and business focus. For stateless applications, transitioning to serverless can simplify operations and increase autonomy. However, existing investments in Kubernetes and the performance of current applications may justify maintaining Kubernetes for some workloads.

What are the main arguments for keeping some workloads on Kubernetes?

Some organizations keep workloads on Kubernetes due to existing engineering investments, satisfactory performance, and the need for specific controls or custom orchestration. Kubernetes can also be made autonomous, though achieving that level of automation requires engineering effort comparable to adopting serverless.

How do panelists view the role of autonomy in both serverless and Kubernetes?

Panelists agree that both serverless and Kubernetes can achieve autonomy, but serverless typically provides it out-of-the-box by offloading infrastructure management. Kubernetes requires additional engineering effort to reach similar levels of autonomy, often through automation and orchestration tools.

What are the risks of managing Kubernetes clusters for startups?

Managing Kubernetes clusters introduces risks such as dependency on specialized engineering talent, increased operational complexity, and the potential for infrastructure management to distract from core business objectives. Losing key engineers with Kubernetes expertise can also pose continuity challenges.

How does offloading infrastructure management to a cloud provider benefit organizations?

Offloading infrastructure management to a cloud provider through serverless architectures allows organizations to focus on core business logic, reduces operational burden, and increases agility. This approach is particularly beneficial for startups and teams with limited engineering resources.

What is the impact of engineering investment on the choice between serverless and Kubernetes?

The level of engineering investment required is a key factor in choosing between serverless and Kubernetes. Serverless reduces the need for specialized infrastructure management, while Kubernetes requires ongoing investment in engineering talent and operational processes.

How do panelists address the complexity of Kubernetes for stateless applications?

Panelists argue that running stateless applications on Kubernetes adds unnecessary complexity. Serverless architectures like Lambda are simpler and more autonomous for these use cases, allowing teams to avoid the operational overhead of managing Kubernetes clusters.

What are the trade-offs of using cloud data services outside Kubernetes?

Using cloud data services outside Kubernetes reduces operational effort but limits choice and flexibility. Organizations may be constrained by the features and configurations offered by the cloud provider, and may face challenges with compliance and performance tuning.

How do StatefulSets and DaemonSets help manage state in Kubernetes?

StatefulSets assign unique IDs to keep application and database containers connected, while DaemonSets ensure a pod runs on every node in the cluster. These Kubernetes-native constructs help manage stateful workloads but add complexity compared to stateless serverless architectures.

What is the value-to-cost ratio in choosing between Kubernetes and serverless?

Panelists emphasize that minimizing operational burden is often more valuable than reducing cloud costs, especially for startups. Serverless architectures provide a favorable value-to-cost ratio by reducing the need for infrastructure management and enabling faster product development.

Features & Capabilities of Sedai

What is Sedai and what does it do?

Sedai is an autonomous cloud management platform that creates the world’s first self-driving cloud. It eliminates manual toil for engineers by autonomously optimizing cloud resources for cost, performance, and availability using patented, safety-first machine learning. Sedai ensures every optimization is safe, validated, and reversible, never causing incidents or breaching SLOs.

What are the core features of Sedai's autonomous cloud optimization platform?

Sedai's platform autonomously optimizes compute, storage, and data across AWS, Azure, GCP, and Kubernetes. Key features include cost reduction (up to 50%), latency reduction (up to 75%), proactive issue resolution, release intelligence, enterprise-grade governance, and multiple modes of operation (Datapilot, Copilot, Autopilot).

How does Sedai ensure safe, autonomous optimizations in production?

Sedai is the only cloud optimization platform patented for making safe, autonomous optimizations in production. It performs slow, gradual changes with continuous validation checks, ensuring no incidents or SLO breaches. Every optimization is constrained, validated, and reversible, prioritizing safety above all else.

What integrations does Sedai support?

Sedai integrates with leading monitoring and APM tools (Cloudwatch, Prometheus, Datadog, Azure Monitor), Kubernetes autoscalers (HPA/VPA, Karpenter), IaC and CI/CD tools (GitLab, GitHub, Bitbucket, Terraform), ITSM platforms (ServiceNow, Jira), notification tools (Slack, Microsoft Teams), and various runbook automation platforms.

What is Sedai for S3 and what does it offer?

Sedai for S3 optimizes Amazon S3 costs by managing Intelligent-Tiering and Archive Access Tier selection. It delivers up to 30% cost efficiency gain and 3X productivity gain by reducing manual effort in S3 management.

How does Sedai's Release Intelligence feature work?

Sedai's Release Intelligence tracks changes in cost, latency, and errors for each deployment, improving release quality and minimizing risks during deployments. This ensures smoother releases and reduces the likelihood of errors impacting production.

What modes of operation does Sedai offer?

Sedai offers three modes: Datapilot (observability), Copilot (one-click optimizations), and Autopilot (fully autonomous execution). This flexibility allows organizations to choose the level of automation that fits their operational needs.

How does Sedai proactively resolve issues before they impact users?

Sedai detects and resolves performance and availability issues before they affect users, reducing failed customer interactions by up to 50% and ensuring seamless operations. This proactive approach enhances reliability and user experience.

What technical documentation and resources are available for Sedai?

Sedai provides detailed technical documentation, case studies, datasheets, and strategic guides. These resources are available at docs.sedai.io/get-started and sedai.io/resources.

Use Cases, Business Impact & Customer Success

Who can benefit from using Sedai?

Sedai is designed for platform engineering, IT/cloud operations, technology leadership, site reliability engineering (SRE), and FinOps professionals in organizations with significant cloud operations. It is ideal for companies in cybersecurity, IT, financial services, healthcare, travel, e-commerce, and SaaS sectors.

What business impact can customers expect from Sedai?

Customers can achieve up to 50% cloud cost savings, 75% latency reduction, 6X productivity gains, and 50% fewer failed customer interactions. Notable results include Palo Alto Networks saving $3.5 million and KnowBe4 achieving 50% cost savings in production. See more at Palo Alto Networks Case Study.

What are some real-world success stories with Sedai?

KnowBe4 saved $1.2 million on AWS and achieved 50% cost savings in production. Palo Alto Networks saved $3.5 million and reduced Kubernetes costs by 46%. Belcorp reduced AWS Lambda latency by 77%. See more at KnowBe4 Case Study and Palo Alto Networks Case Study.

Which industries are represented in Sedai's case studies?

Sedai's case studies cover cybersecurity (Palo Alto Networks), IT (HP), financial services (Experian, CapitalOne), security awareness training (KnowBe4), travel (Expedia), healthcare (GSK), car rental (Avis), retail/e-commerce (Belcorp), SaaS (Freshworks), and digital commerce (Campspot).

Who are some of Sedai's notable customers?

Notable customers include Palo Alto Networks, HP, Experian, KnowBe4, Expedia, CapitalOne Bank, GSK, and Avis. These organizations trust Sedai to optimize their cloud environments and improve operational efficiency.

Implementation, Support & Security

How long does it take to implement Sedai?

Sedai's setup process takes just 5 minutes for general use cases and up to 15 minutes for specific scenarios like AWS Lambda. For complex environments, timelines may vary. Personalized onboarding and a 30-day free trial are available.

How easy is it to get started with Sedai?

Sedai offers plug-and-play implementation, agentless integration via IAM, comprehensive onboarding support, detailed documentation, and a risk-free 30-day free trial. Customers can schedule one-on-one onboarding calls for tailored assistance.

What support resources are available for Sedai customers?

Sedai provides personalized onboarding sessions, a dedicated Customer Success Manager for enterprise customers, detailed documentation, a community Slack channel, and email/phone support for ongoing assistance and troubleshooting.

What security and compliance certifications does Sedai have?

Sedai is SOC 2 certified, demonstrating adherence to stringent security requirements and industry standards for data protection and compliance. Learn more at Sedai Security.

Competition & Differentiation

How does Sedai differ from other cloud optimization platforms?

Sedai is the only platform patented for safe, autonomous optimizations in production. Unlike competitors that make risky, all-at-once changes, Sedai performs slow, gradual optimizations with continuous validation, ensuring no incidents or SLO breaches. It also offers application-aware intelligence, proactive issue resolution, and full-stack cloud coverage.

What unique features set Sedai apart from competitors?

Sedai's unique features include 100% autonomous optimization, patented safety-first execution, proactive issue resolution, application-aware intelligence, release intelligence, full-stack cloud coverage, and plug-and-play implementation. These capabilities address specific use cases and provide a competitive edge.

How does Sedai address the pain points faced by cloud engineering teams?

Sedai eliminates repetitive manual tasks, reduces ticket queues, aligns engineering and cost efficiency goals, and proactively resolves issues. It addresses challenges like operational toil, cost inefficiencies, performance bottlenecks, and complexity in multi-cloud environments.

What advantages does Sedai offer for different user segments?

Platform engineers benefit from reduced toil and IaC consistency; IT/cloud ops teams see lower ticket volumes and safer automation; technology leaders gain measurable ROI and cost savings; FinOps teams get actionable savings and multi-cloud simplicity; SREs experience fewer alerts and automated scaling.


Serverless vs. Kubernetes: Is there a right path?


John Jamie

Content Writer

August 22, 2022



Introduction

This article is a summary of an insightful panel discussion on the topic of Serverless versus Kubernetes at Sedai's autocon conference. You can watch the full video here.

The panel consisted of technical and investment professionals who bring a wealth of experience to the discussion. Salil Deshpande, General Partner of Uncorrelated Ventures, moderated the panel. Rachit Lohani, formerly the CTO and SVP of Engineering at Paylocity, has an impressive track record with leading organizations such as Atlassian, Intuit, and Netflix. Siddharth Ram, the current CTO and SVP of Engineering at Inflection, possesses extensive experience from his roles at Intuit and Qualcomm. Shridhar Pandey, a Senior Product Manager for AWS Lambda, added his expertise to the conversation. Lastly, Kenneth Nguyen, Co-Founder at Tasq, brought insights from both his startup experience and his enterprise background at BP and Shell.

Should All Apps be Serverless?

Salil Deshpande commenced the discussion by emphasizing the chain of logic that suggests most Kubernetes apps should be stateless. This line of reasoning prompts the thought-provoking question: “Why not opt for the simplicity of serverless, specifically Lambda?” The aim is to explore the potential benefits of serverless architectures in terms of streamlining operations and achieving autonomy.

Salil first discussed the intricacies of state management in Kubernetes, drawing from insights from a prominent database company. Three approaches were highlighted:

1. Self-managed state outside of Kubernetes - Maximum Choice, Maximum Effort

It’s simple enough to connect applications and databases through declaration. The benefit of running state outside Kubernetes on a traditional VM is that you maximize choice. The downside is that you create infrastructure redundancy and additional Ops work: rather than relying on automation, you’ll have to manually manage at least five capabilities already available in Kubernetes (monitoring, load balancing, configuration, service discovery, logging).

2. Cloud data services outside of Kubernetes - Less Choice, Less Effort

It’s also possible to leverage cloud services to run your database outside of Kubernetes. Choosing the DBaaS route eliminates the need for Ops to spin up, scale, and manage the database, and they’re not responsible for a redundant infrastructure stack. It’s an external service, but it doesn’t add the redundancy of a full database stack.

The downside is that you’re stuck with the DBaaS as offered by your cloud provider, which makes even less sense for those running things in-house or on-prem. And since you don’t have direct access to the infrastructure running the database, fine-tuning performance and managing compliance can be an issue.
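In practice, the DBaaS route means the application itself stays stateless and carries only connection configuration. A minimal sketch, assuming hypothetical environment variable names and a hypothetical managed-database endpoint:

```python
import os

# Hypothetical stateless service: all state lives in a managed cloud database,
# reached via configuration rather than anything running inside the cluster.
def get_db_url():
    # Variable names and the default endpoint are illustrative, not prescriptive.
    host = os.environ.get("DB_HOST", "mydb.example.rds.amazonaws.com")
    name = os.environ.get("DB_NAME", "appdb")
    return f"postgresql://{host}:5432/{name}"

print(get_db_url())
```

Because the pod holds no persistent state of its own, it can be scheduled, restarted, or scaled freely; only the configuration travels with it.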

3. Asking Kubernetes to manage state - Cloud-Native Simplicity, DevOps Agility

Since its inception, the Kubernetes community has worked tirelessly to solve some of the existential challenges of achieving maximum fluidity in a world that still mostly runs on persistent storage. How do you maintain the seamless flexibility of distributed pods when state demands that an application and database stay connected even as pods are added, subtracted, or restarted? Kubernetes provides two native paths: StatefulSets and DaemonSets. StatefulSets assign a unique ID that keeps application and database containers connected through automation. DaemonSets create a 1:1 relationship in which a pod runs on every node of the cluster; when a node is added to or removed from the cluster, the pod is automatically added or removed with it.
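The StatefulSet mechanics described above can be sketched as a minimal manifest, shown here as a Python dict for illustration. The service name, image, and storage size are hypothetical; what matters is the stable per-pod identity and per-pod storage:

```python
import json

# Hedged sketch of a minimal StatefulSet manifest. The headless service named
# in `serviceName` gives each replica a stable DNS identity (db-0, db-1, db-2),
# which is what keeps application and database containers connected as pods
# are added, removed, or restarted.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",  # headless service providing stable per-pod DNS
        "replicas": 3,        # pods get ordinal identities: db-0, db-1, db-2
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{"name": "db", "image": "postgres:15"}]},
        },
        # Each replica gets its own persistent volume, re-attached on restart.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "1Gi"}},
            },
        }],
    },
}

print(json.dumps(statefulset, indent=2))
```

A DaemonSet manifest looks similar but has no `serviceName`, `replicas`, or `volumeClaimTemplates`: the scheduler simply places one pod on every node.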

A compelling argument is presented in favor of the first two options, asserting that they enable stateless Kubernetes apps. This led to the critical question of why not transition to serverless solutions like Lambda.

The panel's central question concerns the relationship between serverless and Kubernetes, and the logic behind using Kubernetes for stateless applications. Salil Deshpande argues that if most Kubernetes apps are stateless, it would be simpler to use serverless (specifically Lambda) instead: running stateless systems on Kubernetes adds unnecessary complexity, while serverless architectures such as Lambda offer a more streamlined solution.
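The simplicity argument is easy to see in code: a stateless serverless function reduces to the business logic itself, with no cluster, deployment, or scaling configuration around it. A minimal sketch, assuming a hypothetical greeting endpoint (the signature follows Lambda's standard Python handler interface):

```python
import json

# Hypothetical stateless Lambda handler. All durable state would live outside
# the function (e.g. in a database or object store), so the provider can
# create and destroy instances freely.
def handler(event, context):
    # Business logic only -- no servers, clusters, or scaling config to manage.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The equivalent workload on Kubernetes would also need a container image, a Deployment, a Service, and autoscaling configuration, which is the operational overhead the panel is weighing.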


Reconsidering Kubernetes: Advocating for Serverless for Streamlined Business Focus

Siddharth Ram expressed skepticism about taking Kubernetes advice from a database company and questioned its understanding of the intricacies of Kubernetes.

Siddharth highlights his experience at Inflection, where they initially considered moving everything to serverless using Lambda but eventually decided to switch to Kubernetes.

He explained that while they wanted to be cloud-native, they found that using Kubernetes introduced a range of complexities and specific knowledge requirements. They had engineers dedicated to managing the Kubernetes cluster, and the risk of losing an engineer meant finding a replacement with similar expertise. Siddharth didn't want his company to become a container management company.

Based on their needs as an eCommerce company, Siddharth believes that serverless, specifically Lambda, is a better fit for most use cases. He emphasizes the simplicity and value of offloading infrastructure management to a service provider like AWS. By adopting Lambda, they were able to leverage the benefits of serverless architecture and integrate it with their autonomous systems.


Regarding concerns about vendor lock-in, Siddharth doesn't view it as a significant issue. He compares it to other technology choices and sees being locked into AWS as an acceptable trade-off for the advantages gained.

Overall, Siddharth's point is that for most cases, serverless (Lambda) makes sense, and he recommends it to others considering the direction to take. He highlights the value of offloading infrastructure management and focusing on core business logic.

Choosing Between Serverless (Lambda) and Kubernetes: Factors to Consider

Rachit Lohani's main point, on the other hand, is that the choice between serverless (specifically Lambda) and Kubernetes depends on the specific problem being solved and the desired outcomes.


He agrees that most Kubernetes apps should be stateless and that serverless architectures like Lambda are simpler and provide autonomy. However, he argues that Kubernetes can also be autonomous and states that both serverless and Kubernetes require similar efforts to achieve autonomy. According to Lohani, the decision to use Kubernetes or serverless depends on factors such as the need for high throughput or low latency, the level of engineering investment required, and the ability to abstract complexity from developers. While his organization has mostly converted to serverless, there are still applications running on Kubernetes due to the existing engineering investment and the satisfactory performance of those applications.

Serverless vs. Kubernetes: Simplicity, Autonomy, and Infrastructure Management in Focus

Shridhar Pandey, Senior Product Manager for AWS Lambda, shares his perspective on serverless versus Kubernetes. He supports the idea that most Kubernetes applications should be stateless and argues that, if that is the case, it is simpler to use Lambda or serverless functions instead. He highlights that the first two options presented in the popular database company's white paper, where state is managed outside of Kubernetes or via a cloud data service, align with the stateless approach. According to Shridhar, moving to serverless, particularly Lambda, allows for greater autonomy by offloading infrastructure management to a cloud provider like AWS.

Overall, Shridhar Pandey's point in the panel is that for stateless applications, using Serverless options like Lambda can provide simplicity, autonomy, and a reduction in infrastructure management, while acknowledging that there may be exceptions and engineering considerations that influence the choice between Serverless and Kubernetes.


Startups: Benefits, Use Cases, and Vendor Lock-in

Kenneth Nguyen's point in this panel is that he believes most Kubernetes apps should be stateless and that using serverless technologies like Lambda would be simpler and more autonomous. He supports this by referencing a popular database company's white paper, which suggests running databases outside of Kubernetes or using a cloud data service, making the Kubernetes app stateless. Kenneth argues that the complexity of managing stateful applications in Kubernetes can be avoided by adopting serverless architectures. He mentions the experience of moving to serverless with his own company, Tasq, and the benefits it brought, including increased autonomy. 


He also addresses concerns about vendor lock-in, stating that being locked into a specific cloud provider like AWS is acceptable given the other technology choices made in software development.

Architecture Comparison: Kubernetes vs Serverless Containers vs Serverless Functions

[Image: comparison chart of Kubernetes vs. serverless containers vs. serverless functions]

Conclusion

The panelists explored a comparison chart that assessed the operational burden of Kubernetes versus Serverless functions. While there is some disagreement on the specific weights assigned to each factor, the panelists generally agree that minimizing operational burden is crucial. They highlighted the importance of considering the value-to-cost ratio, with operational burden taking precedence over cloud cost. The consensus is that technologies like AWS Lambda provide significant value by reducing operational burden, particularly for startups and ventures focused on product-market fit. As businesses scale, Kubernetes becomes a more viable choice due to increased complexity and the need for dedicated teams to manage workloads.