Your monitoring shows no alerts. Your SLOs are green. Somewhere in that fleet, a pod has been running without a memory limit for four months, consuming whatever the node offers, one traffic spike away from taking down its neighbors.
That is not a hypothetical. That is Tuesday.
Kubernetes misconfigurations don't announce themselves. Missing limits, oversized requests, and poorly tuned autoscalers accumulate quietly across your fleet. They show up in your cloud bill first and in your incident history second. By the time they're visible, they've been running for months. In fact, 82% of Kubernetes workloads are overprovisioned.
Kubernetes anti-patterns span a wide surface: containers running as root, missing liveness and readiness probes, latest image tags, no PodDisruptionBudgets, flat RBAC, and absent network policies. Most teams know that list.
The anti-patterns covered here are different. They are the resource misconfigurations that look correct at launch, pass review, and accumulate silently across hundreds of workloads.
In this article, we will cover:
- The Anti-Patterns That Don't Page You
- Why Your Team Can't Catch Them in Time
- Closing the Loop Without Manual Audits
- Kubernetes Hygiene Is Not a One-Time Task
The Anti-Patterns That Don't Page You
Missing Resource Limits
A pod without CPU and memory limits will consume whatever the node offers. The problem stays undetected until load hits: a traffic spike, a noisy neighbor, a memory leak in a sidecar. The pod grows, its neighbors get OOM-killed, and the resulting latency spikes get filed as application bugs. The configuration failure underneath never surfaces in the post-mortem because nothing in the error log points back to it.
Kubernetes does not enforce resource limits by default. Most teams skip them at launch and never come back.
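The fix is a few lines of spec. A minimal sketch, with an illustrative workload name and values (your numbers should come from observed usage, not guesses):

```yaml
# Illustrative pod spec: explicit requests and limits so the scheduler can
# place the pod honestly and the kubelet can enforce a memory ceiling.
apiVersion: v1
kind: Pod
metadata:
  name: api-server          # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/api-server:1.4.2   # pinned tag, not :latest
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          memory: 512Mi     # hard ceiling: exceeding it OOM-kills this pod,
                            # not its neighbors
```

Since Kubernetes won't enforce this by default, a namespace-level LimitRange object can apply default requests and limits to any container that omits them, which closes the gap for workloads nobody remembered to configure.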
Oversized Resource Requests
Engineers pad requests as insurance: "We might need 2 CPUs." The scheduler accepts that claim at face value. A node shows 90% allocated while actual CPU use is 18%. New pods can't be scheduled because the cluster looks full. The nodes underneath have capacity the scheduler can't see.
That gap between allocated and actual is where cloud spend disappears. Overprovisioning remains the top source of cloud waste per Flexera's 2025 State of the Cloud report, and Kubernetes resource requests are a primary driver.
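The before-and-after is easy to see in the spec itself. A sketch with hypothetical numbers:

```yaml
# Before: padded "insurance" request. The scheduler reserves 2 full cores
# per replica even though observed p95 usage is ~180m. Ten replicas
# claim 20 cores the workload never touches.
resources:
  requests:
    cpu: "2"
    memory: 4Gi
---
# After: request grounded in observed usage plus headroom. Ten replicas
# now reserve 3 cores, and the node's real free capacity becomes
# visible to the scheduler again.
resources:
  requests:
    cpu: 300m
    memory: 1Gi
```

The request is a scheduling claim, not a performance guarantee; `kubectl top` output compared against `kubectl describe node` allocations is the quickest way to spot where the two have diverged.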
Requests-to-Limits Mismatch
This one gets skipped because it looks like good practice. You set limits. You set requests. The limit is 4x the request "for headroom." The node over-commits. Under memory pressure, the OOM killer uses the Kubernetes QoS class, via the oom_score_adj the kubelet assigns, to decide what dies.
Burstable pods, those with requests set lower than their limits, are killed before Guaranteed pods, whose requests and limits match. You've built a kill order into your cluster without knowing it, and it only activates when you're already in an incident.
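The two QoS classes look nearly identical in a spec, which is why the mismatch survives review. A sketch with illustrative values:

```yaml
# Burstable: limit is 4x the request "for headroom." The node over-commits
# on the gap, and under memory pressure this pod sits higher in the
# kill order.
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
---
# Guaranteed: requests equal limits for every resource in every container.
# The kubelet gives these pods the most protective oom_score_adj;
# they die last.
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 500m
    memory: 1Gi
```

Neither shape is wrong in itself; the anti-pattern is setting the gap by habit rather than deciding, per workload, where it should sit in the eviction order.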
Poorly Tuned Autoscalers
HPA scaling on CPU percentage is the default. It is wrong for most latency-sensitive workloads. A service configured with 80% CPU as the HPA target can show two-second tail latency at 30% CPU: request queues build, users notice, the autoscaler does nothing. By the time it fires, the damage is done. On-call spends the next week blaming the application. The scaling threshold hasn't changed since the day it was deployed.
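For latency-sensitive services, scaling on a traffic or latency signal fires before queues build. A hedged sketch using `autoscaling/v2`, assuming a metrics adapter (e.g. Prometheus Adapter) exposes a hypothetical per-pod metric named `http_requests_per_second`:

```yaml
# Illustrative HPA scaling on request rate rather than CPU percentage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server-hpa        # hypothetical names throughout
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"   # scale when pods average 100 rps,
                                # before CPU ever saturates
```

The threshold itself still decays as the service changes; the point is to scale on the signal users feel, not the one that happens to be the default.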
These four patterns share a structure: they look correct in code review, run without incident under normal load, and fail in ways that implicate everything except the configuration.
Kubernetes Misconfigurations That Look Correct Until They Don’t
See how Sedai identifies hidden Kubernetes misconfigurations and autonomously fixes resource inefficiencies before they impact performance or cost.

Why Your Team Can't Catch Them in Time
Most SREs know these patterns exist. Knowing doesn't translate to fixing at scale.
The Signal Gap
Teams rely on point-in-time metrics to audit resource configuration. A one-time rightsizing recommendation tells you what a pod consumed last week, not how consumption shifts across traffic patterns, deployment changes, and seasonal load.
CNCF's 2024 Annual Survey found resource optimization among the top Kubernetes operational challenges precisely because static snapshots can't track configuration drift across dynamic workloads. A recommendation based on last week's data is not visibility. It is a receipt for what already happened.
The Execution Gap
A fleet with 200 Kubernetes deployments has 200 separate resource configurations to review, test, and update, each with its own risk profile. Teams fix the ones already causing incidents. The rest stay misconfigured indefinitely, accumulating drift with every deployment, traffic shift, and code change. The misconfiguration from last quarter's launch is still running. So is the one from the quarter before that. Each new rollout adds fresh configurations on top of ones already drifting from production reality. The fleet doesn't stabilize between audits. It compounds.
That is not a backlog. It is your fleet's permanent state.
This is how anti-patterns become permanent: not because teams don't care, but because continuous execution at that scope exceeds what any team has capacity for. Autonomous Optimization for Kubernetes Applications & Clusters covers how continuous adjustment replaces periodic review.
Closing the Loop Without Manual Audits
Both gaps close only if you continuously observe golden signals per workload: latency, error rate, saturation, and traffic volume. Model how each service performs under varying conditions, and use that model to drive resource configuration changes grounded in production reality, not deployment-day assumptions. That model catches the requests-to-limits imbalance and the autoscaler thresholds that no longer match how the service actually behaves. Kubernetes Optimization on AWS: Challenges, Strategies, Tools addresses EKS-specific scheduling and scaling constraints.
Sedai's optimization engine is built on this principle. It moves past CPU snapshots to golden signal modeling per workload.
It closes the execution gap not by surfacing recommendations, but by acting autonomously: adjusting limits, rightsizing requests, and retuning HPA configurations across a fleet, with continuous safety verification at each step. Sedai has executed 100,000+ autonomous operations across customers, including Palo Alto Networks, with zero production incidents.
That is not a better audit process. It is the removal of auditing as an ongoing operational cost. Kubernetes Cluster Lifecycle & 10 Optimization Strategies maps where misconfigurations accumulate across upgrades.
Kubernetes Hygiene Is Not a One-Time Task
Manual auditing can find these misconfigurations once. It cannot keep pace with the rate at which workload behavior changes, deployments multiply, & traffic patterns shift. Every configuration that looked right at launch is decaying against production reality right now.
The teams that stay ahead aren't running better audits. They've removed the audit cycle from the operational loop entirely.
The pods misconfigured in last quarter's rollout are still running. Your system finds them, or your pager does.
