Frequently Asked Questions

Serverless Architecture & Velocity Global's Approach

What is Velocity Global's approach to serverless development environments?

Velocity Global has adopted a serverless-first architecture to optimize its development environments for efficiency, scalability, and cost-effectiveness. By leveraging platforms like AWS Lambda, they focus on usage-based billing, automatic scaling, and reduced operational overhead, allowing developers to concentrate on building business logic rather than managing infrastructure.

Why did Velocity Global choose a serverless-first architecture over Kubernetes?

Velocity Global selected a serverless-first approach to better align with their fluctuating workload demands. Serverless computing offers cost savings by charging only for actual usage, automatic scaling, and reduced complexity compared to managing Kubernetes clusters. This enables their teams to handle variable demand efficiently and focus on rapid feature delivery.

What are the main benefits of using serverless architecture for development environments?

The main benefits include usage-based billing (paying only for resources consumed), simplified infrastructure management, automatic scaling to handle traffic spikes, and improved developer productivity by abstracting infrastructure concerns. This leads to faster deployment cycles and more predictable costs.

How does serverless architecture help with cost optimization at Velocity Global?

Serverless architecture enables Velocity Global to optimize costs by charging only for actual resource usage, rather than maintaining servers 24/7. This is especially beneficial for fluctuating workloads, as costs are minimal during periods of low activity and scale automatically with demand.

What challenges are associated with serverless development environments?

Common challenges include cold starts (initial latency when a function is invoked after being idle), AWS Lambda's 15-minute execution limit, managing East-West traffic and event-driven orchestration, and difficulties in replicating production environments for local testing.

How does Velocity Global address cold starts in serverless environments?

Velocity Global recognizes cold starts as a source of latency in serverless environments. They monitor performance using tools like DataDog and optimize function configurations to minimize cold start impact, especially for latency-sensitive applications.

What is the "Paved Roads" framework at Velocity Global?

The "Paved Roads" framework is a standardized, pre-built development environment that provides templates, pipelines, and shared libraries for authentication, authorization, and routing. It streamlines setup, testing, and deployment, allowing developers to focus on delivering business value while DevOps maintains the underlying infrastructure.

How does the Paved Roads framework improve developer productivity?

By providing pre-built infrastructure, shared libraries, and automated pipelines, the Paved Roads framework allows developers to start coding immediately and reduces the need for manual setup. This leads to faster onboarding, fewer errors, and more time spent on feature development.

What programming languages are supported by Velocity Global's Paved Roads?

The Paved Roads framework at Velocity Global supports Node.js, TypeScript, Java, and Python. Teams using other technologies must build and maintain their own scaffolding outside the approved framework.

How does Velocity Global monitor and optimize serverless functions in production?

Velocity Global uses tools like DataDog for performance visibility, cost tagging to track resource usage, and real-time dashboards to monitor behavior and performance. Each team sets up their own alarms and on-call rotations to ensure rapid response to issues.

What metrics does Velocity Global use to measure the effectiveness of its development environment?

Key metrics include Developer Net Promoter Score (NPS), DORA metrics (measuring software delivery performance), and the level of automation indicated by minimal DevOps/QA involvement in production releases.

How does Velocity Global handle CI/CD and deployment automation?

Velocity Global's deployment process is fully integrated into their CI/CD pipeline, which automates tasks such as linting, unit testing, static and dynamic scans, and deployment. This ensures only clean, tested code is pushed to production, supporting reliability and speed.

What are the main challenges of testing in serverless environments?

Testing in serverless environments is challenging because it's difficult to replicate all infrastructure dependencies locally. Mocking services can introduce inconsistencies, and testing in the cloud can lead to long deployment cycles, slowing down iterative development.

How does Velocity Global balance cost and performance in serverless environments?

Velocity Global uses cost tagging, utilization dashboards, and performance metrics to monitor and balance cost efficiency with performance. They avoid over-optimizing for performance, which can increase costs, by tracking real-time usage and adjusting configurations as needed.

What tools does Velocity Global use for infrastructure management and deployment?

Velocity Global uses tools such as AWS CloudFormation, AWS CDK, AWS SAM, and the Serverless Framework to manage infrastructure and deployments. Each tool has its own learning curve and is chosen based on team needs and project requirements.

How does Velocity Global ensure consistency and reliability in its development process?

By implementing the Paved Roads framework, Velocity Global standardizes development processes, tools, and supported languages. This ensures consistency, reliability, and efficiency across teams, with DevOps acting as the product owner for the framework.

What is the ultimate goal for Velocity Global's serverless optimization?

The ultimate goal is to achieve hands-free optimization, where autonomous tools automatically manage provisioned concurrency and performance concerns, further reducing the cognitive load on developers and enabling them to focus on business logic.

What lessons can other organizations learn from Velocity Global's serverless journey?

Organizations can learn the value of usage-based billing, the importance of abstracting infrastructure for developer productivity, strategies for overcoming serverless challenges, and the benefits of standardized frameworks like Paved Roads for consistency and efficiency.

How does Sedai's autonomous cloud management platform complement serverless strategies like Velocity Global's?

Sedai's autonomous cloud management platform optimizes cloud resources for cost, performance, and availability using machine learning. It can further enhance serverless strategies by automating routine tasks, proactively resolving issues, and providing full-stack coverage across AWS, Azure, GCP, and Kubernetes environments. Learn more.

Features & Capabilities

What features does Sedai offer for cloud optimization?

Sedai offers autonomous optimization, proactive issue resolution, full-stack cloud coverage, smart SLOs, release intelligence, plug-and-play implementation, multiple modes of operation (Datapilot, Copilot, Autopilot), enhanced productivity, and safety-by-design. These features help reduce costs, improve performance, and ensure reliability. Source.

Does Sedai support integration with monitoring and CI/CD tools?

Yes, Sedai integrates with monitoring tools like CloudWatch, Prometheus, Datadog, and Azure Monitor; CI/CD and IaC tools like GitLab, GitHub, Bitbucket, and Terraform; ITSM tools like ServiceNow and Jira; and notification platforms like Slack and Microsoft Teams. Source.

How does Sedai's autonomous optimization work?

Sedai uses machine learning to optimize cloud resources for cost, performance, and availability without manual intervention. It continuously learns from interactions and outcomes to improve its optimization and decision models over time. Source.

What is Sedai for S3 and what does it do?

Sedai for S3 optimizes Amazon S3 costs by managing Intelligent-Tiering and Archive Access Tier selection. It can achieve up to 30% cost efficiency gain and 3X productivity gain by reducing manual effort in S3 management. Source.

What is Release Intelligence in Sedai?

Release Intelligence is a feature in Sedai that tracks changes in cost, latency, and errors for each deployment. This helps improve release quality and minimize risks during deployments. Source.

What are the modes of operation in Sedai?

Sedai offers three modes of operation: Datapilot (observability), Copilot (one-click optimizations), and Autopilot (fully autonomous execution of optimizations). This provides flexibility to match different operational needs. Source.

How does Sedai ensure safe and auditable changes?

Sedai integrates with Infrastructure as Code (IaC), IT Service Management (ITSM), and compliance workflows to ensure all changes are safe, validated, and auditable. It also provides automatic rollbacks and incremental changes for risk-free automation. Source.

Use Cases & Business Impact

What business impact can Sedai deliver?

Sedai can reduce cloud costs by up to 50%, improve application performance by reducing latency by up to 75%, deliver up to 6X productivity gains, and reduce failed customer interactions by up to 50%. For example, Palo Alto Networks saved $3.5 million and KnowBe4 achieved 50% cost savings in production. Source.

Who can benefit from using Sedai?

Sedai is designed for platform engineers, IT/cloud operations, technology leaders, site reliability engineers (SREs), and FinOps professionals in organizations with significant cloud operations across industries such as cybersecurity, IT, financial services, healthcare, travel, and e-commerce. Source.

What core problems does Sedai solve for cloud teams?

Sedai addresses cost inefficiencies, operational toil, performance and latency issues, lack of proactive issue resolution, complexity in multi-cloud and hybrid environments, and misaligned priorities between engineering and FinOps teams. Source.

What industries are represented in Sedai's case studies?

Sedai's case studies cover industries such as cybersecurity (Palo Alto Networks), IT (HP), financial services (Experian, CapitalOne Bank), security awareness training (KnowBe4), travel and hospitality (Expedia), healthcare (GSK), car rental services (Avis), retail and e-commerce (Belcorp), SaaS (Freshworks), and digital commerce (Campspot). Source.

Can you share specific customer success stories with Sedai?

Yes. KnowBe4 achieved up to 50% cost savings and saved $1.2 million on AWS bills. Palo Alto Networks saved $3.5 million, reduced Kubernetes costs by 46%, and saved 7,500 engineering hours. Belcorp reduced AWS Lambda latency by 77%. KnowBe4 case study, Palo Alto Networks case study.

Implementation, Support & Security

How long does it take to implement Sedai?

Sedai's setup process takes just 5 minutes for general use cases and up to 15 minutes for specific scenarios like AWS Lambda. More complex environments may require additional time. Source.

How easy is it to get started with Sedai?

Sedai offers plug-and-play implementation, agentless integration via IAM, personalized onboarding sessions, a dedicated Customer Success Manager for enterprise customers, detailed documentation, and a 30-day free trial. Source.

What support resources are available for Sedai users?

Sedai provides detailed technical documentation, a community Slack channel, email and phone support, and one-on-one onboarding calls with the engineering team. Documentation.

What security and compliance certifications does Sedai have?

Sedai is SOC 2 certified, demonstrating adherence to stringent security requirements and industry standards for data protection and compliance. Security page.

Competition & Differentiation

How does Sedai differ from other cloud optimization tools?

Sedai offers 100% autonomous optimization, proactive issue resolution, application-aware intelligence, full-stack cloud coverage, unique release intelligence, and a quick setup process. Unlike competitors that rely on static rules or manual adjustments, Sedai operates autonomously and holistically. Source.

What unique features set Sedai apart from competitors?

Unique features include 100% autonomous optimization, proactive issue resolution before user impact, application-aware intelligence, full-stack coverage, release intelligence, and plug-and-play implementation. These features address specific use cases and provide a competitive edge. Source.

What advantages does Sedai provide for different user segments?

Platform engineers benefit from reduced toil and IaC consistency; IT/cloud ops teams see lower ticket volumes and safer automation; technology leaders gain measurable ROI and cost savings; FinOps teams align engineering and cost goals; SREs experience fewer SLO breaches and less pager fatigue. Source.

Serverless Development Environments: Velocity Global's Path to Scalability

Nikhil Gopinath

Content Writer

October 9, 2024

This article is based on insights shared by Kumar Ramanathan of Velocity Global at Sedai’s autocon conference. What follows is an edited version of Kumar’s talk, including a comparison of serverless and Kubernetes approaches.

Introduction

Companies are constantly seeking innovative ways to optimize their development environments for efficiency, scalability, and cost-effectiveness. Velocity Global, a leading employer of record (EOR) platform, has embraced a serverless-first architecture to address these challenges. 

This article explores our unique approach to creating a streamlined, scalable development environment using serverless technologies. We'll delve into how this approach simplifies infrastructure management, enhances developer productivity, and provides the flexibility needed to handle fluctuating workloads.

By examining Velocity Global's strategies, including our "Paved Roads" concept, I hope to uncover valuable lessons for organizations looking to leverage serverless architecture in their own development processes.

Velocity Global’s Serverless Approach: Simplicity and Scalability

Velocity Global, a leading employer of record (EOR) platform that simplifies global hiring and workforce management, has adopted a serverless-first architecture to optimize its infrastructure for simplicity and cost-effectiveness. 

Unlike Kubernetes users (such as BILL, whose development environment we cover elsewhere), Velocity Global has chosen serverless computing to align with its fluctuating workload demands. This approach, where costs are based on actual usage rather than fixed server time, is ideal for a company like Velocity Global, where demand varies throughout the day or month. With platforms like AWS Lambda, the company benefits from automatic scaling, high availability, and real-time resource management, so its developers can focus on building business logic and new features while the cloud provider manages the infrastructure.

For Velocity Global, serverless computing significantly reduces operational overhead, accelerates development, and provides a predictable cost structure: the company pays only for the resources it uses, eliminating the need to maintain servers 24/7.

Advantages of Velocity Global’s Serverless Approach

Velocity Global’s serverless approach delivers key benefits that streamline infrastructure management and improve cost efficiency. Below are some of the most impactful advantages of adopting this model:

  • Usage-Based Billing: One of the most significant benefits of serverless is that businesses are billed based on the actual resources consumed rather than how long the infrastructure has been running. This means that when no activity is occurring, the costs are minimal.
  • Better Time-to-Value: Serverless architecture abstracts infrastructure concerns, enabling developers to focus purely on application logic. This leads to faster deployment cycles and quicker delivery of value.
  • Simplified Infrastructure Management: Since the platform (like AWS) handles scaling and availability, developers don’t need to worry about manually managing containers or orchestrating server clusters.
  • Scalability Without Complexity: Serverless systems scale automatically based on the workload, allowing companies like Velocity Global to handle traffic spikes seamlessly without the need for manual intervention.
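To make the usage-based billing point concrete, here is a back-of-the-envelope comparison of an always-on server versus a usage-billed function. The rates and workload figures below are illustrative assumptions, not Velocity Global's actual numbers:

```python
# Illustrative comparison of always-on vs. usage-based billing.
# All prices and workload figures are hypothetical placeholders.

ALWAYS_ON_MONTHLY = 150.00              # e.g. a mid-size VM running 24/7

# Lambda-style pricing: pay per request and per GB-second of compute.
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $ per invocation
PRICE_PER_GB_SECOND = 0.0000166667      # $ per GB-second

def lambda_monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate monthly cost for a usage-billed function."""
    request_cost = invocations * PRICE_PER_REQUEST
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A spiky workload: 2M invocations/month, 200 ms each, 512 MB memory.
usage_cost = lambda_monthly_cost(2_000_000, 0.2, 0.5)
print(f"Usage-based: ${usage_cost:.2f} vs always-on: ${ALWAYS_ON_MONTHLY:.2f}")
```

For this hypothetical workload the usage-based bill is a small fraction of the always-on cost, and it drops toward zero in quiet periods, which is exactly the property that suits fluctuating demand.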

However, adopting a serverless-first approach is not without its challenges. Developers working in serverless environments often encounter cold starts, which can lead to initial latency when a serverless function is invoked after being idle. Additionally, there are execution limits with AWS Lambda (such as a 15-minute execution limit) that developers must navigate. Deciding on the right granularity for serverless functions is another design-time challenge; for example, figuring out whether to create multiple smaller functions or fewer, larger ones.

Overcoming Challenges with Serverless Development

While serverless-first architecture offers simplicity and cost-efficiency, it also introduces challenges that require careful consideration for smooth development:

  • Cold starts: Idle serverless functions experience latency when invoked for the first time after being dormant. This phenomenon, known as cold starts, can lead to delays, affecting user experience, especially in latency-sensitive applications.
  • Execution limit: AWS Lambda enforces a 15-minute execution limit, compelling developers to reconsider the granularity of their functions. They must decide between creating many small functions or building larger, more complex functions to handle longer processes, balancing simplicity and performance.
  • East-West traffic and event-driven orchestration: Serverless architectures, being event-driven, introduce challenges in managing East-West traffic (traffic between services) and handling event-driven orchestration. Developers must account for distributed systems and eventual consistency models, which differ from traditional systems, to prevent bottlenecks and ensure consistent behavior.

These challenges highlight the complexities that come with adopting a serverless architecture. Developers must carefully weigh the trade-offs between simplicity, performance, and system consistency to ensure an efficient and reliable solution.
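One widely used mitigation for cold starts is a scheduled "keep-warm" ping (for example, from a periodic scheduler rule) that invokes the function so it stays resident. The talk does not detail Velocity Global's exact technique, so treat this as a generic sketch:

```python
# Generic cold-start mitigation sketch: a scheduled keep-warm ping.
# The schedule source (e.g. a periodic scheduler rule) and the event
# shape are assumptions, not Velocity Global's published approach.

def handler(event, context=None):
    # Short-circuit warm-up pings so they don't run business logic
    # (or incur downstream costs).
    if isinstance(event, dict) and event.get("source") == "warmup":
        return {"statusCode": 200, "body": "warm"}

    # ... normal business logic would go here ...
    return {"statusCode": 200, "body": "handled real request"}

# Simulated invocations: one warm-up ping, one real request.
print(handler({"source": "warmup"}))
print(handler({"path": "/invoices", "method": "GET"}))
```

Keeping a function warm trades a small amount of steady invocation cost for lower tail latency, so it is typically reserved for latency-sensitive endpoints.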

Overcoming Development Challenges: Testing and Deployment

Testing in serverless environments presents unique difficulties. Developers often struggle to bring all their application’s infrastructure dependencies (like queues, gateways, and databases) into local development environments. While mocking these services can help, it is not always accurate and can introduce inconsistencies between local tests and production environments.

In response, some developers opt to test in the cloud, deploying their code to a live environment to see how it behaves. However, this approach can lead to long deployment cycles, slowing down development when multiple iterations are needed. Developers often want to "make a change, test it, make another change, and test again," but lengthy deployment times can make this iterative process inefficient.
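One way to ease this testing friction is to have business logic depend on a narrow interface rather than the cloud SDK directly, so local tests can substitute an in-memory fake instead of mocking a whole service. The names below (`QueueClient`, `process_order`) are illustrative, not Velocity Global's code:

```python
# Dependency-injection sketch: business logic depends on a narrow
# interface, so local tests use an in-memory fake instead of the cloud.

from typing import Protocol

class QueueClient(Protocol):
    def send(self, message: str) -> None: ...

class FakeQueue:
    """In-memory stand-in used in local tests."""
    def __init__(self) -> None:
        self.messages: list[str] = []
    def send(self, message: str) -> None:
        self.messages.append(message)

def process_order(order_id: str, queue: QueueClient) -> str:
    # Business logic stays cloud-agnostic; production wiring would pass
    # a thin wrapper around the real queue service instead.
    queue.send(f"order:{order_id}")
    return "queued"

queue = FakeQueue()
assert process_order("42", queue) == "queued"
assert queue.messages == ["order:42"]
print("local test passed without any cloud dependencies")
```

This doesn't remove the need for cloud-based integration tests, but it keeps the fast inner loop ("make a change, test it, make another change") entirely local.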

In terms of deployment, tools such as CloudFormation, CDK, SAM, and the Serverless Framework are available to manage infrastructure, but each comes with its own learning curve. For small, autonomous development teams, these tools can be overwhelming and distract from the core task of solving business problems and delivering features quickly.
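For reference, a minimal function definition in one of these tools (AWS SAM) looks roughly like the sketch below; the resource name, handler, and values are illustrative, not Velocity Global's configuration:

```yaml
# Minimal AWS SAM template sketch (values are illustrative).
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  OrdersFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      MemorySize: 512
      Timeout: 30          # seconds; well under Lambda's 15-minute ceiling
      Events:
        Api:
          Type: Api
          Properties:
            Path: /orders
            Method: post
```

Even this small fragment hints at the learning curve: each tool has its own vocabulary for functions, events, and permissions, which is part of what a Paved Road abstracts away.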

Monitoring and Optimizing in Production

Monitoring serverless functions in production adds complexity due to their ephemeral nature. These functions spin up and down as needed, making it difficult to track long-running issues. Tools like DataDog provide visibility, but latency, cold starts, and performance must still be carefully monitored to ensure smooth operation.

A major concern in serverless environments is managing costs. While billed based on usage, over-optimizing for performance can increase costs. Velocity Global tackles this by using cost tagging, utilization dashboards, and performance metrics to balance cost efficiency and performance.

Key methods include DataDog for performance visibility, cost tagging to track resource usage, and real-time dashboards for monitoring behavior and performance.
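The cost-tagging idea can be sketched as a simple aggregation of per-function spend by team tag. The records below are fabricated sample data, not Velocity Global's figures:

```python
# Sketch of cost-by-tag aggregation like the dashboards described above.
# Function names, team tags, and costs are fabricated sample data.

from collections import defaultdict

# Each record: (function_name, team_tag, monthly_cost_usd)
usage_records = [
    ("invoice-generator", "billing", 42.10),
    ("payroll-sync",      "payroll", 118.55),
    ("invoice-emailer",   "billing", 9.80),
]

def cost_by_tag(records):
    """Roll up per-function spend into per-team totals."""
    totals: dict[str, float] = defaultdict(float)
    for _name, team, cost in records:
        totals[team] += cost
    return dict(totals)

totals = cost_by_tag(usage_records)
for team, cost in sorted(totals.items()):
    print(f"{team}: ${cost:.2f}")
```

In practice the tags would be applied to the cloud resources themselves and the roll-up done by the billing or observability tooling, but the principle is the same: every dollar is attributable to a team.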

Paved Roads: Streamlining Development with Pre-Built Infrastructure

Velocity Global's implementation of the Paved Roads concept provides a streamlined development environment that removes much of the complexity of setting up and managing infrastructure. Inspired by Netflix, this framework allows developers to focus solely on solving business problems through code, as much of the scaffolding, monitoring, and testing environments are pre-built and maintained by the DevOps team.


A deeper look at Velocity Global’s Paved Roads framework reveals how it simplifies development by providing pre-built infrastructure and a structured approach to building, testing, and deploying software.

Paved Roads as a Product: The Paved Road is presented as a "product" that takes care of all scaffolding, enabling engineers to concentrate on delivering business value. DevOps operates as the product owner, and the internal engineering teams act as the customers of this product.

One Approved Way: There is a single approved approach to building, testing, deploying, and monitoring software at Velocity Global, which supports specific programming languages (Node.js, TypeScript, Java, Python). If teams use any other technology, they are "off-road," meaning they must build and maintain their own scaffolding.

Shared Libraries and Components: Libraries for essential features like authentication and authorization (AuthN/AuthZ) are included in the Paved Roads framework, making it easier for developers to implement these crucial aspects of an application.

Pre-Built Templates and Pipelines: Developers are provided with templates and pipelines that allow for automated workflows and task implementation, streamlining the entire development process from infrastructure setup to deployment.

Key Metrics to Measure Effectiveness:

  • Developer NPS (Net Promoter Score): Reflects how likely developers are to recommend the Paved Road approach.
  • DORA Metrics: Measures the performance and success of software delivery processes.
  • No DevOps/QA Involvement in Production Releases: An indicator of the level of automation and reliability, with minimal manual intervention needed during releases.
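Some of the DORA metrics mentioned above (deployment frequency, change failure rate, and lead time) can be computed directly from deployment history. The timestamps and outcomes below are fabricated for illustration:

```python
# Sketch of computing DORA-style metrics from deployment history.
# All timestamps, outcomes, and lead times are fabricated examples.

from datetime import datetime, timedelta

deployments = [
    # (deployed_at, succeeded, lead time from first commit to deploy)
    (datetime(2024, 10, 1), True,  timedelta(hours=6)),
    (datetime(2024, 10, 2), True,  timedelta(hours=3)),
    (datetime(2024, 10, 3), False, timedelta(hours=9)),
    (datetime(2024, 10, 4), True,  timedelta(hours=4)),
]

days_observed = 7
deploy_frequency = len(deployments) / days_observed                # deploys/day
change_failure_rate = sum(1 for _, ok, _ in deployments if not ok) / len(deployments)
avg_lead_time = sum((lt for *_, lt in deployments), timedelta()) / len(deployments)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate:  {change_failure_rate:.0%}")
print(f"Avg lead time:        {avg_lead_time}")
```

Tracking these alongside Developer NPS gives both an objective and a subjective read on whether the Paved Road is actually working.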

Here are the benefits of the Paved Road:

  • Faster Setup: Developers can start writing code immediately, with much of the infrastructure already in place.
  • Pre-Built Components: Common services like authentication and routing are pre-configured.
  • Incentive to Stay On-Track: By limiting supported languages, the Paved Road encourages developers to use approved technologies, streamlining the development process.

Deploying with CI/CD Pipelines

The deployment process at Velocity Global is fully integrated into their CI/CD pipeline, which automates the deployment workflow. The pipeline includes several key tasks, such as linting, unit testing, scans (both static and dynamic), and ultimately deployment. This ensures that only clean, thoroughly tested code is pushed to production.
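The gated pipeline described above can be sketched as a sequence of stages, each of which must pass before deployment proceeds. The stage implementations here are trivial stand-ins for real linters, test runners, and scanners:

```python
# Sketch of a gated CI/CD pipeline: each stage must pass before the
# next runs, and any failure blocks deployment. Stage bodies are
# trivial stand-ins for real linters, test runners, and scanners.

def lint(code: str) -> bool:
    return "TODO" not in code            # stand-in for a real linter

def unit_tests(code: str) -> bool:
    return True                          # stand-in for a real test run

def static_scan(code: str) -> bool:
    return "eval(" not in code           # stand-in for a real SAST scan

PIPELINE = [lint, unit_tests, static_scan]

def run_pipeline(code: str) -> str:
    for stage in PIPELINE:
        if not stage(code):
            return f"failed at {stage.__name__}"
    return "deployed"

print(run_pipeline("def handler(event): return 'ok'"))   # passes all gates
print(run_pipeline("eval(user_input)"))                  # blocked by the scan
```

In a real pipeline these stages would run as jobs in a CI system rather than in-process functions, but the invariant is the same: code only reaches production after every gate has passed.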


Once deployed, teams rely on DataDog to monitor the performance of their services. Each team sets up their own alarms and on-call rotations to ensure they can respond to any issues in real time. Cost tagging and performance dashboards further help teams optimize both cost and performance.

The ultimate goal for Velocity Global is to move towards hands-free optimization, where autonomous tools automatically manage provisioned concurrency and other performance concerns, reducing the cognitive load on developers.

Summary

Velocity Global's adoption of a serverless-first architecture demonstrates the potential of this approach to dramatically simplify infrastructure management while enhancing scalability and cost-efficiency. By leveraging services like AWS Lambda, the company has created a development environment that allows their teams to focus on building business logic and delivering features, rather than managing underlying infrastructure.

Key takeaways from Velocity Global's approach include:

  1. The benefits of usage-based billing in optimizing costs for fluctuating workloads.
  2. The importance of abstracting infrastructure concerns to improve developer productivity and time-to-value.
  3. The challenges of serverless development, including cold starts and execution limits, and strategies to address them.
  4. The value of implementing a "Paved Roads" framework to standardize development processes and tools.
  5. The crucial role of robust monitoring and cost optimization in managing serverless environments effectively.

Velocity Global's implementation of the Paved Roads concept, coupled with their focus on CI/CD integration and performance monitoring, showcases a comprehensive approach to serverless development. This strategy not only streamlines the development process but also ensures consistency, reliability, and efficiency across their platform.

While serverless architecture presents its own set of challenges, particularly in areas like testing and deployment, Velocity Global's experience illustrates how these can be effectively managed through careful planning and the right tools. Their journey towards hands-free optimization points to an exciting future where autonomous tools further reduce the cognitive load on developers.

For organizations considering a shift to serverless architecture or looking to optimize their existing serverless environments, Velocity Global's approach offers valuable insights into balancing simplicity, scalability, and cost-effectiveness in modern cloud development.