Frequently Asked Questions

Panel Insights & Autonomous Operations

What were the main themes discussed in the panel on transforming operations with AI and autonomous systems?

The panel at autocon, hosted by Sedai, focused on infrastructure readiness for automation, the need for adaptable frameworks to assess automation readiness, incremental implementation of autonomy, the business case for AI in operations, risk management strategies, and future opportunities in autonomous technologies such as Generative AI and MLOps. Panelists emphasized that successful adoption requires both technological maturity and cultural alignment within organizations.

How should companies assess their readiness for adopting autonomous technologies?

Companies should evaluate both their technological maturity and organizational culture. Readiness involves having robust Infrastructure as Code (IaC), advanced observability, and a willingness to embrace change. As highlighted by panelists from Geodis and Paylocity, readiness is not just about technology but also about people and processes. Consulting with experts like Sedai can help uncover gaps and prepare for successful automation adoption.

What frameworks exist for assessing automation readiness?

Panelists referenced the automotive industry's L0-L5 maturity model as a useful analogy for automation readiness. While there is no universal framework, companies can use maturity levels (from no automation to full autonomy) to benchmark progress. The consensus was that frameworks must be adaptable to industry and client needs, and that collaboration with solution providers like Sedai is key to tailoring the approach.

How did KnowBe4 approach automation and what role did Sedai play?

KnowBe4's SRE team began by cleaning up their infrastructure and adopting Infrastructure as Code (IaC). This foundation made it easy to implement Sedai, allowing them to move from 0% to 100% automation in months. They started with targeted services, validated results, and then expanded Sedai's use across environments, achieving cost savings and operational improvements without incidents.

What are the risks of implementing autonomous systems and how can they be mitigated?

Risks include potential downtime, configuration errors, and cultural resistance. Panelists recommended starting with low-risk environments, using incremental rollouts, and establishing strong guardrails. Sedai's patented safety-first approach ensures all optimizations are gradual, continuously validated, and reversible, minimizing risk of incidents or SLO breaches.

How can organizations incrementally implement autonomy in their operations?

Organizations should start with pilot projects in non-production or low-risk environments, validate outcomes, and gradually expand. KnowBe4, for example, began with a single service, monitored results, and then scaled Sedai's autonomous optimization across their stack. This approach allows for effective risk management and builds confidence in autonomous systems.

What non-financial gains can be achieved through autonomy and AI in operations?

Non-financial gains include increased innovation, improved employee satisfaction, faster onboarding, and the ability to reallocate resources from maintenance to growth. For example, Wipro reduced onboarding time from 7-10 days to a few hours, and Netflix increased experimentation by 6000% through autonomous systems, driving business agility and customer satisfaction.

How does automation enable business growth beyond cost savings?

Automation allows organizations to redirect resources from routine operations to innovation and growth initiatives. As shared by Sisu, automation enabled rapid deployment for new customers, while Wipro reinvested savings into core AI platform development. This shift accelerates time-to-market and supports strategic expansion.

What is the significance of L6 in automation maturity, and how does Sedai fit in?

L6 represents a stage where AI-driven systems autonomously discover and resolve issues beyond human foresight. Panelists noted that Sedai acts as an accelerant, helping organizations leap from basic automation to advanced, self-optimizing operations. Sedai's continuous learning and safety-first design make it suitable for organizations aiming for L6 maturity.

What are the opportunities for autonomous systems in GenAI and MLOps?

Panelists identified significant opportunities for autonomous systems in data preparation, model training, and inference optimization. Sedai's expansion into data platform optimization can help reduce costs and improve efficiency in GenAI and MLOps workflows, addressing challenges like GPU utilization and model serving costs.

How do industry differences affect automation readiness and adoption?

Industry context matters: for example, manufacturing and supply chain environments may have physical constraints that limit automation options compared to software-centric industries. Panelists stressed the need for adaptable frameworks and tools like Sedai that can address diverse operational realities.

What role does culture play in successful automation and AI adoption?

Cultural alignment is as important as technological readiness. Successful transformation requires buy-in from teams, clear communication, and a willingness to embrace new ways of working. As seen in PayPal and Facebook's journeys, bringing people along is essential for sustainable automation success.

How does Sedai ensure safety in autonomous cloud optimization?

Sedai is the only cloud optimization platform patented for safe, autonomous optimizations in production. It performs gradual, incremental changes with continuous validation checks, ensuring no incidents or SLO breaches. This safety-first approach differentiates Sedai from other optimizers that may risk all-at-once changes.

What is Sedai and what does it do?

Sedai is an autonomous cloud management platform that optimizes cloud resources for cost, performance, and availability using machine learning. It eliminates manual intervention, reduces cloud costs by up to 50%, improves performance by reducing latency up to 75%, and proactively resolves issues before they impact users. Sedai supports AWS, Azure, GCP, and Kubernetes environments. Learn more.

What are Sedai's core features and capabilities?

Sedai offers autonomous optimization, proactive issue resolution, full-stack cloud coverage, smart SLOs, release intelligence, plug-and-play implementation, multiple modes of operation (Datapilot, Copilot, Autopilot), enhanced productivity, and safety-by-design. These features deliver cost savings, performance improvements, and operational efficiency. See full details.

How does Sedai compare to other cloud optimization platforms?

Sedai differentiates itself with patented, safety-first autonomous optimization, proactive issue resolution, application-aware intelligence, and full-stack coverage. Unlike competitors that rely on static rules or manual adjustments, Sedai continuously learns and optimizes based on real application behavior, ensuring safe, incremental changes and measurable business outcomes.

What problems does Sedai solve for cloud operations teams?

Sedai addresses cost inefficiencies, operational toil, performance and latency issues, lack of proactive issue resolution, complexity in multi-cloud environments, and misaligned priorities between engineering and FinOps teams. It automates routine tasks, reduces costs, improves reliability, and aligns business and technical goals. Read more.

Who can benefit from using Sedai?

Sedai is designed for platform engineers, IT/cloud operations, technology leaders (CTO, CIO, VP Engineering), site reliability engineers (SREs), and FinOps professionals. It is ideal for organizations with significant cloud operations across industries such as cybersecurity, IT, financial services, healthcare, travel, and e-commerce. See case studies.

What business impact can Sedai deliver?

Sedai delivers up to 50% cloud cost savings, 75% latency reduction, 6X productivity gains, and up to 50% fewer failed customer interactions. Customers like Palo Alto Networks saved $3.5 million, KnowBe4 achieved 50% cost savings, and Belcorp reduced AWS Lambda latency by 77%. Explore success stories.

How long does it take to implement Sedai?

Sedai's plug-and-play implementation takes just 5 minutes for general use cases and up to 15 minutes for scenarios like AWS Lambda. The platform connects securely via IAM, requires no agents, and offers personalized onboarding support. Get started here.

What integrations does Sedai support?

Sedai integrates with monitoring and APM tools (Cloudwatch, Prometheus, Datadog, Azure Monitor), Kubernetes autoscalers (HPA/VPA, Karpenter), IaC and CI/CD tools (GitLab, GitHub, Bitbucket, Terraform), ITSM (ServiceNow, Jira), notification tools (Slack, Microsoft Teams), and various runbook automation platforms. See all integrations.

What security and compliance certifications does Sedai have?

Sedai is SOC 2 certified, demonstrating adherence to stringent security and compliance standards for data protection. Learn more about Sedai's security.

What technical documentation and resources are available for Sedai?

Sedai provides detailed technical documentation, case studies, datasheets, and strategic guides. Access the documentation at docs.sedai.io/get-started and additional resources at sedai.io/resources.

What feedback have customers given about Sedai's ease of use?

Customers praise Sedai's quick setup (5–15 minutes), agentless integration, personalized onboarding, and extensive support resources. The 30-day free trial and dedicated Customer Success Manager for enterprise clients make adoption smooth and risk-free. See more.

What industries does Sedai serve?

Sedai serves cybersecurity, IT, financial services, security awareness training, travel and hospitality, healthcare, car rental, retail and e-commerce, SaaS, and digital commerce. Customers include Palo Alto Networks, HP, Experian, KnowBe4, Expedia, CapitalOne, GSK, and Avis. See all case studies.

Can you share specific customer success stories with Sedai?

Yes. KnowBe4 achieved 50% cost savings and saved $1.2 million on AWS. Palo Alto Networks saved $3.5 million and reduced Kubernetes costs by 46%. Belcorp reduced AWS Lambda latency by 77%. See more at sedai.io/resources.

What makes Sedai's approach to cloud optimization unique?

Sedai's patented, safety-first, autonomous optimization is unique. It makes gradual, validated changes, never causing incidents or SLO breaches. Sedai also offers application-aware intelligence, proactive issue resolution, and release intelligence, setting it apart from traditional, rule-based optimizers.

How does Sedai support safe and auditable changes in enterprise environments?

Sedai integrates with Infrastructure as Code (IaC), IT Service Management (ITSM), and compliance workflows, ensuring all changes are safe, auditable, and reversible. This supports enterprise governance and regulatory requirements.

What modes of operation does Sedai offer?

Sedai offers three modes: Datapilot (observability), Copilot (one-click optimizations), and Autopilot (fully autonomous execution). This flexibility allows organizations to choose the right level of automation for their needs.

How does Sedai continuously improve its optimization models?

Sedai continuously learns from interactions and outcomes, evolving its optimization and decision models over time. This ensures ongoing improvements in cost efficiency, performance, and reliability for customers.

How does Sedai help with release quality and risk management?

Sedai's Release Intelligence tracks changes in cost, latency, and errors for each deployment, ensuring smoother releases and minimizing risks. This proactive approach improves release quality and reduces the likelihood of incidents.
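The idea of tracking cost, latency, and errors per deployment can be illustrated with a small sketch. This is not Sedai's API; the metric names, baseline values, and thresholds below are assumptions chosen purely for illustration:

```python
# Illustrative sketch of release-intelligence-style checks: compare cost,
# latency, and error metrics after a deployment against a pre-release
# baseline and flag regressions. All names and numbers are assumptions.

BASELINE = {"cost_per_hour": 1.20, "p95_latency_ms": 180.0, "error_rate": 0.002}

# Allowed degradation per metric, as a ratio over the baseline.
THRESHOLDS = {"cost_per_hour": 1.10, "p95_latency_ms": 1.15, "error_rate": 1.50}

def release_regressions(current: dict[str, float]) -> list[str]:
    """Return the metrics that degraded beyond their allowed ratio."""
    return [
        metric
        for metric, baseline in BASELINE.items()
        if current[metric] > baseline * THRESHOLDS[metric]
    ]

new_release = {"cost_per_hour": 1.25, "p95_latency_ms": 240.0, "error_rate": 0.002}
print(release_regressions(new_release))  # ['p95_latency_ms']
```

A check like this, run automatically per deployment, is what turns raw observability data into a release quality signal.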

What pain points does Sedai address for FinOps teams?

FinOps teams benefit from Sedai's actionable savings, multi-cloud complexity management, alignment of engineering and cost efficiency goals, and rapid anomaly detection. Sedai automates manual tuning and bridges the gap between visibility and action.

How does Sedai support hybrid and multi-cloud environments?

Sedai provides full-stack optimization across AWS, Azure, GCP, and Kubernetes, handling hybrid complexity and ensuring consistent guardrails and processes across diverse environments.

How does Sedai help reduce operational toil for engineering teams?

Sedai automates repetitive tasks such as capacity tweaks, scaling policies, and configuration management, delivering up to 6X productivity gains and allowing engineers to focus on innovation.

What is the primary purpose of Sedai's platform?

Sedai's primary purpose is to eliminate toil for engineers by automating cloud management, enabling teams to focus on impactful work and innovation rather than manual optimizations. Learn more about Sedai's mission.


Panel: Transforming Operations with AI & Autonomous Systems


Sedai

Content Writer

October 11, 2024



At autocon, we hosted a panel talk on transforming operations with AI & Autonomous Systems.  The panelists shared insights based on their diverse experiences, emphasizing the necessity for a strategic approach to integrating autonomous systems within existing infrastructures.  Here are the top themes:

  • Infrastructure Readiness for Automation: Companies must evaluate their existing infrastructure to determine readiness for adopting autonomous technologies. Key factors include technological maturity and cultural alignment within the organization.
  • Frameworks for Automation Readiness: There is a call for standardized frameworks to assess automation readiness, though panelists noted that any such framework must be adaptable to specific industry needs and client requirements.
  • Incremental Implementation of Autonomy: A gradual approach to automation was favored, with suggestions to start small and scale up. This allows for effective risk management while gradually enhancing operational capabilities.
  • Business Case for AI in Operations: Panelists highlighted that automation is not just a cost-saving measure but also a driver for innovation, allowing organizations to redirect resources towards growth and enhance employee satisfaction.
  • Risk Management in Autonomous Systems: Implementing autonomous technologies involves inherent risks. The discussion stressed the importance of establishing guardrails and mitigation strategies to ensure safe and effective integration.
  • Future Opportunities in Autonomous Technologies: The panel looked forward to advancements in AI, especially in areas like Generative AI and MLOps, suggesting that these could transform how businesses manage data and resources, presenting new avenues for efficiency and cost optimization.

Panel Introduction and Company Scale:

The panel comprised some of the brightest minds in IT infrastructure, including:

  • Tim Guleri, Managing Director of Sierra Ventures and also the Moderator of the Panel
  • Subha Tatavarti, the global CTO of Wipro. Wipro manages its clients' public and private cloud environments, especially in the retail, IoT, and oil and gas sectors. It serves 1,500+ customers across 25+ verticals, and its fleet touches over a million machines.
  • Rachit Lohani leads product and technology for Paylocity, a payroll company. In terms of their technical footprint, they manage around 600-700 services, powered by 30,000 to 40,000 machines.
  • Shibu Raj is SVP of IT at Geodis, a third-party supply chain company headquartered in France. He is responsible for platform modernization and platform tools for Geodis Americas, from North to South America including LATAM. Shibu was also Sedai's first customer.
  • Jigar Desai is SVP of Product and Engineering at Sisu, a startup that does data insights. Jigar comes from Facebook, so he has seen millions of machines and how to manage them.
  • Matthew Duren (Matt) is Director of Platform Engineering at KnowBe4, provider of the world's largest security awareness training and simulated phishing platform. KnowBe4 has over 60,000 customers and runs 2,000 to 4,000 containers every day and about 350 million Lambda invocations, growing every single month.
  • Mohamed Khalid (Mo) is the Director of Enterprise Architecture at GlaxoSmithKline (GSK), a leading biopharma company that makes general medicines, specialty medicines, and vaccines. GSK operates in more than 80 countries and aims to impact the health of 2.5 billion people by the end of 2030.

Assessing Infrastructure Readiness for Autonomous Technology

Tim (Sierra Ventures): How should anybody who is just coming into the notion of autonomous be thinking about the readiness equation of their infrastructure?

How Geodis Made Itself Ready for Automation:

Shibu (Geodis): A transformation doesn't come from a vacuum. It has to be thought out end to end. When we started our journey of automation, AI processes were there, but our first question was “Are we ready to take it and reap the full benefit of it?”

We also had another equation in the bundle: Where will our money be invested? Our core business is supply chain optimization. In the supply chain, automation is a first class citizen as you have robots picking and packing. Since automation was in our DNA, we knew we had to do automation in this area because only then the customer gets the full value.

We quickly realized that we were not mature enough to adopt and get the benefit. We could invest in it because it's a newer and cooler technology but that dollar spent would be a waste.

Talking with people in Sedai helped us uncover some of the things we are not ready for. We quickly realized that we had to invest here, either by partnering with people who were already into it, or by building it ourselves. That was a quick realization for us to make sure we were ready. 


How to Get Started with Automation:

Rachit (Paylocity): As humans, it's easier for us to think in terms of framework because it helps us think about what is the journey and where do we want to be.

The car industry came up with this beautiful framework: L0 to L5. It tells you where you are in terms of maturity: L0 is where you have no driver assistance; from there you move up through driver assistance, partial assistance, conditional assistance, and full assistance, and then go autonomous, which is your L5.

So what do you do? You: 

  • Invest in technology, and embrace Infrastructure As Code (IaC). 
  • Provide instrumentation with tools like Ansible to run commands at scale
  • Get partial assistance
  • Start observing conditions i.e.,  advanced observability
  • Embrace things like Mesh to automatically control the system
  • Now you have a fully automated system. The next step is going autonomous, where you leave the system and let it decide what’s best for the customers. 

Mo (GSK): GSK adopted cloud. Three things were very important to us: cost, performance, and security. Taking that together and doing the right sizing is the biggest challenge we see today.

People said they can solve the problem for us and that's why we are here.

Infrastructure Readiness Across Different Companies:

Jigar (Sisu): I have a perspective I want to share and this is like my journey over three different companies. 

When PayPal was going through the transformation, we had people staring at screens, worrying about every machine. We had to take some time because it was not just the technology change; it was also the cultural change where we had to get people along with us, and not just abandon them. That was my PayPal journey on how to become ready, not just from a technology perspective, but also from a people and culture perspective.

The mindset in Facebook was completely different. We were doubling in size in terms of machines, and I'm talking about millions of machines every year. So, readiness was not a word in our dictionary. You better be ready because machines are coming.

Then the very first day when we started building the system in Sisu, we had autonomy as a principle because we didn't have enough people to build a system that can be looked upon by folks standing at screens. So the autonomous system was built from the ground up with things like Sedai that you can start using on day one even as a small startup. 


Subha (Wipro): The assessment of readiness is absolutely critical. As an example, one of our clients is a medical device manufacturer based out of Japan whose data centers and infrastructure we manage.

Introducing Sedai would not even be an option for us because they are all bare metal. They are sitting in their own data centers and are not even virtualized in most cases. 

KnowBe4’s Readiness for Automation:

Tim (Sierra Ventures): Where was KnowBe4 on that maturation journey because I was quite impressed with how quickly you made the decision to deploy and start getting end-to-end value with Sedai.

Matt (KnowBe4): It was something that we had to be very deliberate about in order to achieve. I started at KnowBe4 in 2018. At that time, most of our software was running on EC2 instances, including databases and compute. In many cases, a single server processed and provided a lot of what we deliver to our customers. Our job as the SRE team was to clean that up while the Amazon bill was still four or five figures a month.

If we hadn't gone through that journey, it would be significantly harder now because our Amazon commit next year is millions and millions of dollars.

It was easy for us to implement Sedai because we were strong users of IaC and it only took us months to sign up with Sedai and get from 0% to 100% with them.

Framework to Test Automation Readiness: 

Tim (Sierra Ventures): There's no agreed-upon framework for assessing the maturation of a company to determine whether it can start adopting automation. Does the industry need a framework that quickly tests readiness to adopt automation? If yes, whose job is it to define this framework?

Defining the Framework:

Subha (Wipro): It is hard to standardize a framework because of how fragmented the stack, the usage, the implications and the applications are. I think it has to be generic enough, but it won't then be solving the problem. It has to be coming from the customer and in consultation with somebody like Sedai, who has an understanding of how the system works.

Maturity Levels and Automation:

Rachit (Paylocity): In technology, it's less about the "what" and more about the "how". The hows are pretty standard, as we don't have a lot of options.

When you walk into a data center and you're naming your hosts with IPs or specific names, you know where the maturity is. The next step usually is to automate this part. Once you automate that part, you graduate to the next level. That is how the framework would be agnostic to what industry you're from or the outcome you're looking for.

Autonomous System Tools:

  1. As you go up the stack, step one usually is IaC, which is driver assist. 
  2. The next one is partial assistance. For example, if something is broken, I can run a command without understanding the system. This is where you introduce tools like Ansible or Control Tower that become the brain of your infrastructure. 
  3. The next step is around more instrumentation, where you start to gather more input and signals. You introduce tools like Datadog or more observability tool stack, so that you can drive better decisions.
  4. Then come tools like a service mesh, which can route your traffic by being proactive around what could go wrong.
  5. The last step is autonomous, where tools like Sedai become a catalyst and help you jump from L3 to L5 without doing all that hard work. That is the magic of the solution.
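The ladder above can be sketched as a simple readiness check. This is an illustrative sketch only: the capability names and the level mapping are assumptions for the example, not a standard framework or any vendor's API:

```python
# Illustrative sketch of the L0-L5 maturity ladder described above.
# Capability names and level mapping are assumptions for illustration.

# Ordered ladder: each level assumes every capability below it is in place.
MATURITY_LADDER = [
    ("L1", "infrastructure_as_code"),     # e.g. Terraform-managed infra
    ("L2", "remote_execution_at_scale"),  # e.g. Ansible / Control Tower
    ("L3", "advanced_observability"),     # e.g. metrics, signals, tracing
    ("L4", "automated_traffic_control"),  # e.g. service mesh routing
    ("L5", "autonomous_optimization"),    # the system decides and acts
]

def maturity_level(capabilities: set[str]) -> str:
    """Return the highest level whose prerequisites are all present."""
    level = "L0"  # no automation at all
    for name, required in MATURITY_LADDER:
        if required not in capabilities:
            break
        level = name
    return level

print(maturity_level({"infrastructure_as_code", "remote_execution_at_scale"}))  # L2
```

The point of a check like this is benchmarking: it tells you which rung to invest in next, regardless of industry.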

L6 and KnowBe4’s Journey with Sedai:

Matt (KnowBe4): I think the barrier even goes back to the introduction of centralized logging and collection of data from these decentralized systems. I like the point that Sedai is an accelerant because you could get from L0 or L1 to L5 using some carefully tailored bash scripting. I almost want to introduce this idea of L6 where you have an AI-driven system that discovers things engineers or humans may have never even thought of. 

I don't think that KnowBe4 is at L6 yet. In some cases, we're not even at L5. The places where we're using Sedai are much more advanced than the places where we're not. It feels like almost a new tier. That's been a cool journey for us this year, and we're looking forward to how much we can get to L5 and beyond.

Shibu (Geodis): We talk about institutions that are software and technology oriented. But for example, there is no place for Ansible in a PLC or a conveyor system. I cannot bring up a conveyor system by running a script. So it depends upon the industry as well.

In every industry, there is a story for tools like Sedai. So that's where the perspective of maturity comes into play. We cannot just define that maturity or the framework by looking at a technology powerhouse like Google or eBay.

Risk Aversion and Autonomy:

Tim (Sierra Ventures): What is the right approach to implement autonomous systems? We know the benefits but how did you mitigate risk?

Autonomy Risk Management by KnowBe4:

Matt (KnowBe4): If you're completely risk averse, you will be stuck at a lower level of autonomy. It is just a matter of taking a low risk instead of a high risk.

In our case, a lot of the building blocks were already in place. Our infrastructure was well defined in Terraform, with already-centralized modules. We knew 90% of our compute was being delivered by a handful of Terraform modules. That made it really easy for us to plug into that. Services were also pulling the latest version of our modules, so we didn't have to go through hundreds or thousands of repos and update to a new pinned version of each module. We were already taking risks by trying to be closer to the edge.

If you are looking to implement more automation, find places where you can approach the edge and implement Sedai or tools like it: places where, if there were problems, you could roll back quickly, or tolerate a bit of an issue or downtime if it were to happen.

Incremental Autonomy Implementation:

Tim (Sierra Ventures): One way to mitigate risk is by starting with 20% autonomous, and scaling your way up. I don't exactly recall, but KnowBe4 went 100% auto very quickly.

Matt (KnowBe4): We did, once we were ready. And we didn't start at 0 and go to 100 overnight.

We hand-picked a service that we knew would get good utilization in production. Even as a beta feature, we had hundreds of customers testing and using this feature while we had Sedai enabled on it. Sedai was enabled throughout the entire process of building this new feature.

Even the engineers working on that service didn't know that it was happening. We moved on from there and turned it on for all our development environments after we had seen a production service go through an entire release cycle for weeks with only cost savings and no issues.

When nobody asked what happened to the service, we felt pretty confident to open the floodgates.
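The staged expansion Matt describes, from a single pilot service, through development environments, to everything, can be sketched as a simple state machine. The stage names and the validation criteria below are assumptions for illustration, not KnowBe4's or Sedai's actual process:

```python
# Illustrative sketch of the incremental rollout pattern described above:
# enable optimization on one service, validate over a full release cycle,
# then widen the scope. Stage names and checks are assumptions.

STAGES = ["single_pilot_service", "development_environments", "all_environments"]

def expand_rollout(metrics_ok: bool, incidents: int, current_stage: int) -> int:
    """Advance to the next stage only after a clean validation window."""
    if metrics_ok and incidents == 0 and current_stage < len(STAGES) - 1:
        return current_stage + 1
    return current_stage

stage = 0  # start with the pilot service
stage = expand_rollout(metrics_ok=True, incidents=0, current_stage=stage)
print(STAGES[stage])  # development_environments
```

The key property is that scope only widens after a clean validation window; any incident freezes the rollout at its current stage.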

Tim (Sierra Ventures): How is Pharmaland thinking about the journey and the implementation?

Pharmaland's Autonomous Journey:

Mo (GSK): For us, the most important thing was realizing we couldn't achieve our goals while in a data center. So cloud adoption became our highest priority. We adopted Azure and GCP.

We began by adopting API and IaC - everything that’s stack driven - whether it was faster drug discovery or implementing supply chain solutions. The third part was how to sell faster with market data. 

Guardrails and Risk Management:

If you put appropriate guardrails in place, and have the right people who can manage and operate the technology well and understand the business, that's how you mitigate the risk. We have built guardrails. We start off with the dev environment, then move to non-prod, and then to production.

We still haven't gone fully auto but we hope to get there in the next couple of years.

Jigar (Sisu): At Facebook, our systems were autonomous. That meant somebody was able to push a change to the entire network, and we could be disconnected from the internet for several hours. So the blast radius with this level of automation is pretty high. That's why you need to have enough guardrails and treat infrastructure code as "code". If you are developing an application, you will not push your code to production without testing.
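One common guardrail against the blast-radius problem Jigar describes is capping how much of the fleet any single change may touch. The sketch below is illustrative only; the 5% cap is an assumed value, not a recommendation from the panel:

```python
# Illustrative sketch of a blast-radius guardrail: reject any change that
# touches more than a small fraction of the fleet in one step. The cap
# value is an assumption for illustration.

MAX_BLAST_RADIUS = 0.05  # never change more than 5% of the fleet at once

def allow_change(targets: int, fleet_size: int) -> bool:
    """Permit the change only if it stays within the blast-radius cap."""
    return targets / fleet_size <= MAX_BLAST_RADIUS

print(allow_change(targets=40, fleet_size=1000))   # True
print(allow_change(targets=900, fleet_size=1000))  # False
```

A cap like this turns a potential network-wide outage into, at worst, a partial degradation that can be rolled back.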

Non-financial Gains of Autonomy:

Tim (Sierra Ventures): What are some non-financial gains that you were able to capture?

Rachit (Paylocity): When it comes to innovation, especially in the autonomous space, we are at a precipice where we have the right tools and environment; we just need the right actors now. 

Netflix and Autonomy:

We saw a similar story at Netflix around 2013, 2014, and 2015 when the culture in the industry was divided into development and operations. Development handled the build, while operations took care of deployment and infrastructure maintenance. Netflix came along and said, “This does not work for me. I want to move faster.”

So it built systems that helped people deploy more and more artifacts to production. The outcome was a 6000% increase in experimentation. They went from doing two, three, or four experiments a month to over 1,000 experiments a day. As a result, people became hooked on Netflix. They loved Netflix not because someone really smart was sitting behind the screens figuring out what buttons to push or what movies to display, but because an autonomous system was making decisions about what could move forward and what could not.

Similarly, developers felt more comfortable rolling out pull requests (PRs). Every single PR was ready for production. If it was not ready, the system would block it and say, “Nope, you're not ready.” That was an autonomous system making decisions.

If you implement an autonomous system that helps you determine the right things to do, your customers will be happier. Your people will also be happier because they won't have to focus on mundane tasks; they can focus on more intellectually demanding and context-driven work.

On top of that, it frees up time for dependent teams. Companies that start to embrace autonomy now will see more innovation and disruption. They will be able to move faster because this is how R&D allocations work. There are companies out there with over $100 million in R&D, where 80% to 90% goes toward running the business. They are spending almost nothing on innovation. Embracing autonomy helps unlock those dollars and redirect them to actual growth, not just keeping the business alive.

Subha (Wipro): We have 250,000 employees globally, and a substantial portion of our costs goes into this employee base. In addition to the $12 billion we generate from services, we have $450 million in annual recurring revenue from platforms.

To address these challenges, we had to create constraints or "starve the RTB" (Run the Business). This has led to ruthless prioritization of our RTB efforts. The savings we achieve are then reinvested into our internal core, which we refer to as our “core AI platform business”. Essentially, this is a generative AI platform we are orchestrating across multiple models, including some that are being developed by our R&D team. These models are tailored for specific tasks; for instance, some are more effective at text-to-voice conversion, while others excel in image processing.

We recently conducted a beta release with over 5,000 employees, and alongside the RTB reduction, we are also driving additional gains in other use cases, starting with HR. While this discussion may not focus on autonomy and infrastructure, it is related in principle. 

For example, a significant portion of our costs was tied to background checks and hiring processes. Previously, it would take us 7 to 10 days to conduct background checks. Now, thanks to these improvements, it only takes a couple of hours. This increase in productivity not only reduces the time required to onboard new employees but also lowers the overall cost of onboarding. This is just one example of the additional savings we are achieving.

Autonomy as a Business Enabler:

Jigar (Sisu):

Automation can also serve as a business enabler. Complex business tasks can be solved using automation. At my current startup, a customer in Europe said, "We need deployment in London, or we are not onboarding onto your platform." Because we had automation, we could spin up a new instance just for them and serve them there.
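A minimal sketch of what makes this possible: when infrastructure is fully codified, a dedicated region-pinned instance is a parameter change rather than a project. The `deploy_instance` helper, customer name, region, and endpoint scheme below are all hypothetical stand-ins for a team's real IaC tooling.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    customer: str
    region: str
    endpoint: str

def deploy_instance(customer: str, region: str) -> Deployment:
    """Hypothetical wrapper around IaC tooling (Terraform, CloudFormation, etc.).

    In a real system this would apply the region-parameterized templates;
    here it only illustrates that the region is just another input.
    """
    endpoint = f"https://{customer}.{region}.example.com"
    return Deployment(customer=customer, region=region, endpoint=endpoint)

# A customer requires data residency in London: stand up eu-west-2 just for them.
dep = deploy_instance("acme", "eu-west-2")
print(dep.endpoint)  # https://acme.eu-west-2.example.com
```

The design point is that no new engineering is needed per customer; the automation absorbs the variation.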

There are many examples where your investment in automation can grow the business, not just save costs.

The L6 for Automation Technologies like Sedai:

Tim (Sierra Ventures): Give ideas on where you think technology like Sedai can go. What is that L6 you talked about? 

Matt (KnowBe4): As a customer, I would love some solutions for my ever-increasing CloudFront spend, which keeps going up every time we gain more customers. The same goes for our Aurora, RDS, and S3 spend: these unbounded or provisioned environments continue to grow as you acquire more customers.

I've pinned a backend data store to a specific size, and changing that means downtime for my customers. Once you push the limits of compute resources in the cloud, it becomes very difficult to manage. This is going to require more creative solutions, not all of which are immediately obvious. If you sit down and think about how you would address this with RDS, it presents a real challenge.

Shibu (Geodis): The L6 would be to take tools like Sedai to the edge, i.e., a scaled-down Sedai that watches only a few automation signals. It may need to restart some services before a failure happens.

For example, if a conveyor goes down, it takes two or three hours to bring it back. How can we reduce that at the edge? We just need L2, so that someone can spark the battery again and get things going. That would give us a lot of benefit: when a conveyor goes down, the entire employee base stays put, and that's a big cost.
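A minimal sketch of the kind of "L2 at the edge" watchdog described here: monitor a small set of signals and restart a service automatically, escalating to a human only if restarts fail. The signal names and the `restart` hook are hypothetical, assuming the edge node exposes some way to bounce a service (e.g. a `systemctl restart` wrapper).

```python
def check_health(service: str, signals: dict) -> bool:
    """Return True if the service's few automation signals look healthy."""
    return signals.get(service, {}).get("heartbeat", False)

def watchdog(services, signals, restart, max_attempts=3):
    """Restart unhealthy services; escalate to a human after max_attempts."""
    escalations = []
    for svc in services:
        attempts = 0
        while not check_health(svc, signals) and attempts < max_attempts:
            restart(svc)  # stand-in for e.g. `systemctl restart <svc>` on the edge node
            attempts += 1
        if not check_health(svc, signals):
            escalations.append(svc)  # still down: page an operator
    return escalations

# Example: the conveyor controller is down; an automatic restart brings it
# back in seconds instead of the hours a manual recovery would take.
signals = {"conveyor-ctrl": {"heartbeat": False}}

def restart(svc):
    signals[svc]["heartbeat"] = True  # stand-in for a real service restart

print(watchdog(["conveyor-ctrl"], signals, restart))  # [] -> nothing escalated
```

The point of the L2 framing is that the edge logic stays deliberately simple: a handful of signals, a bounded number of retries, and a human in the loop only when automation cannot recover.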

Subha (Wipro): In use cases like Sedai's, infrastructure demands higher precision. Sedai can grow significantly by creating LLM-like or transformer-like models for the infrastructure space, depending on the kinds of data it sees.

Autonomous Opportunities in LLM and GenAI:

Tim (Sierra Ventures): In an LLM and GenAI world, infra stacks are going to be rethought. Workloads are CPU-GPU hybrids. What are the autonomous opportunities in this realm?

Jigar (Sisu): There is a lot to be done. If you look at a typical GenAI lifecycle, there are three phases. One phase is data cleaning and data preparation, and I was so happy to see that Sedai is going to handle data platforms, because it's a significant part of the cost and there are a lot of optimization opportunities in just the data prep space.

The second part is how you train the models. Whether you're using a generative AI model that is available to you or open source, or you're developing your own, training is super expensive: you are working with thousands of GPUs, often in the cloud, which is also super expensive. The way we utilize resources for training models is probably a decade behind how we use production resources; the techniques and optimizations we apply there have not yet been applied to GPU usage.

Then, the last phase is how you serve it. Inference is a massive cost, and the cost difference between models like GPT-3.5 and GPT-4 is significant. There is a big opportunity to trim down models so that you don't have to serve these giant models for inference. This space, which I generally refer to as MLOps, covers everything from data preparation to training the model to serving it. Sedai has the potential to become a billion-dollar business by addressing this new wave of development.
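The three phases outlined above, data preparation, training, and serving, can be treated as stages of one pipeline, each a distinct optimization target. A sketch of that framing follows; the stage costs are placeholder inputs, not real figures, and `optimization_targets` is a hypothetical helper for ranking where autonomous optimization would pay off first.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    monthly_cost: float  # placeholder figure supplied by the operator

def optimization_targets(stages, budget_share=0.8):
    """Rank stages by spend; the top spenders covering `budget_share` of the
    total are where optimization (right-sizing, GPU utilization, model
    trimming) pays off soonest."""
    ranked = sorted(stages, key=lambda s: s.monthly_cost, reverse=True)
    total = sum(s.monthly_cost for s in stages)
    targets, covered = [], 0.0
    for s in ranked:
        targets.append(s.name)
        covered += s.monthly_cost
        if covered / total >= budget_share:
            break
    return targets

pipeline = [
    Stage("data-prep", 10_000.0),
    Stage("training", 50_000.0),   # thousands of GPUs, often under-utilized
    Stage("inference", 40_000.0),  # serving giant models is a massive cost
]
print(optimization_targets(pipeline))  # ['training', 'inference']
```

With these illustrative numbers, training and inference dominate spend, which matches the panel's point that those two phases are where the largest autonomous-optimization opportunities sit.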