Kubernetes was built in the image of Borg, the system that orchestrates Google’s global infrastructure. You are not Google. Terribly sorry.
82% of container users run Kubernetes in production. Most of them shouldn’t.
The Control Plane Tax
Before your application serves a single request, Kubernetes needs etcd chewing through 2-8 GB of RAM on each control-plane node. Then kube-apiserver, kube-scheduler, kube-controller-manager, kubelet (which managed distributions configure to reserve a slice of every node’s memory by default — GKE, for example, holds back 25% of the first 4 GiB), CoreDNS, kube-proxy, and a CNI plugin. All of this before your code runs.
A production high-availability cluster needs 6-12 CPU cores and 12-24 GB RAM. For orchestration. Not for your application.
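If you want to see where the memory goes, here’s a minimal sketch of a KubeletConfiguration carving up a node before a single pod is scheduled. The numbers are illustrative, not recommendations.

```yaml
# KubeletConfiguration fragment: node resources held back before any
# workload runs. Values are illustrative, not tuned recommendations.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:               # set aside for the kubelet and container runtime
  cpu: "500m"
  memory: "1Gi"
systemReserved:             # set aside for the OS and system daemons
  cpu: "500m"
  memory: "1Gi"
evictionHard:               # below this, the kubelet starts evicting your pods
  memory.available: "500Mi"
```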
K3s calls itself the “lightweight” alternative. It still needs 1.6 GB RAM at idle. The lightest Kubernetes is heavier than most applications it runs. Marvellous.
The YAML Labyrinth
Deploy one service on Kubernetes and you’re looking at a Deployment (20 lines), a Service (19 lines), and an Ingress (27 lines). That’s 66 lines across 3 files. Minimum.
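For the curious, here’s roughly what the first two of those files look like for a trivial web app (names and image are placeholders). The Ingress is a third file on top of this.

```yaml
# Deployment plus Service for one trivial web app. Image and names are
# placeholders; the Ingress (and its annotations) would be a third file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```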
The same service on FreeBSD: one rc.conf entry. On Linux: one 10-line systemd service file.
Production services average 200+ lines of YAML. Per service. I’ve seen teams spend more time debugging YAML indentation than fixing bugs.
The Waste Report
Cast.ai surveyed 2,100+ organizations in 2024. The numbers are brutal:
- Average CPU utilization: 10%
- Average memory utilization: 23%
- 87% of provisioned CPU sits idle
You’re paying for 10 servers to do the work of one. Quite the bargain.
I keep thinking about what that means. Teams provision clusters. They set resource requests and limits. They watch the monitoring dashboards. Still, 90% waste. The orchestration layer is so complex that over-provisioning feels safer than under-provisioning. So everyone over-provisions.
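The knobs themselves are trivial. The problem is that nobody trusts the numbers, so the “safe” request is always a multiple of what the service actually uses. An illustrative container-spec fragment:

```yaml
# Per-container requests and limits. When nobody is sure what the service
# really needs, padding the request feels safer than getting paged --
# which is exactly how clusters end up 90% idle.
resources:
  requests:
    cpu: "2"          # the scheduler reserves two full cores...
    memory: "4Gi"     # ...and 4 GiB, used or not
  limits:
    cpu: "4"
    memory: "8Gi"
```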
The Operational Sinkhole
Komodor’s 2025 report found:
- 38% of companies have high-impact outages weekly
- 79% of incidents get triggered by a recent change
- Platform teams lose 34 workdays per year on troubleshooting
Kubernetes doesn’t reduce operational complexity. It adds another layer. You still deploy your application. Now you also maintain the platform that deploys your application.
When something breaks, is it your code? The base image? The CNI plugin? The ingress controller? The storage driver? The control plane? Good luck figuring that out at 3am.
The Invoice
Real companies are walking away. And saving money.
One Amazon Prime Video team cut the cost of its video monitoring service by 90% by moving from distributed microservices back to a monolith. They published a blog post about it. The overhead of distributing the work wasn’t worth it.
37signals went from $3.2M per year to $1.3M per year after leaving the cloud. Over $10M saved in five years. Basecamp runs on their own hardware now.
GEICO spent a decade migrating to the cloud. Result: 2.5x higher costs. They’re not alone.
The pattern is consistent. Adopt Kubernetes. Discover the tax. Quietly move back. Nobody writes blog posts about “Why We Left Kubernetes” because it’s embarrassing. But they’re leaving.
The Staffing Multiplier
Teams have gone from 12 DevOps engineers to three after leaving Kubernetes. Nine engineers were maintaining the platform, not building the product.
When your orchestration layer requires more engineers than your application, something is wrong. That’s not a feature. That’s a tax.
The Certification Economy
CKA certification costs $445 per engineer. With training, you’re looking at up to $1,950. A team of five: nearly $10,000 (5 × $1,950 = $9,750). Before writing a single line of application code.
The complexity isn’t a bug. It’s a business model. Training companies, consulting firms, and cloud providers all benefit from Kubernetes being hard. The harder it is, the more they charge.
Who Actually Needs It
Kelsey Hightower, one of Kubernetes’ most prominent advocates, said this: “There’s a lot of extra concepts, config files, and infrastructure you have to manage to do something basic.”
If its greatest champion calls it challenging, maybe it is.
Kubernetes makes sense if you have 50+ services across multiple regions with a dedicated platform team. That describes maybe 5% of companies running it.
The other 95% need a server, a process manager, and the courage to admit it.
What’s Breaking Right Now (February 2026)
The problems aren’t getting better.
Ingress-NGINX Controller Retirement
The Ingress-NGINX controller is being retired in March 2026. No more fixes, patches, or updates. About 50% of cloud-native setups depend on it. Staying on it means increasing security risk. Migrating means rewriting ingress configs across hundreds of services. Pick your poison.
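Part of why migrating hurts: real-world Ingress manifests lean on controller-specific annotations, so they can’t simply be pointed at a different controller. An illustrative example (names are placeholders):

```yaml
# This Ingress only behaves correctly under ingress-nginx, because the
# body-size and redirect behaviour live in nginx-specific annotations.
# Switching controllers means re-expressing these, per service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```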
Security Vulnerabilities
Multiple CVEs from 2025-2026: config injection, auth bypass, DoS in ingress components. Credential caching flaws. Certificate validation issues. Some earlier flaws allowed unauthenticated remote code execution and full cluster compromise.
The attack surface is huge. Misconfigurations. Overly permissive RBAC. Insecure defaults. These aren’t edge cases. They’re common.
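Here’s what “overly permissive RBAC” tends to look like in the wild — a sketch of a common anti-pattern, not a manifest tied to any specific CVE:

```yaml
# Anti-pattern: binding cluster-admin to every service account in a
# namespace. One compromised pod in "dev" now owns the whole cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dev-can-do-anything
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: Group
    name: system:serviceaccounts:dev
    apiGroup: rbac.authorization.k8s.io
```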
Complexity and Operational Pain
Upgrades break APIs. Operators lag behind. Networking and storage issues appear out of nowhere. Minor releases feel like migrations. Teams I’ve talked to budget a week for every Kubernetes upgrade. A week. For a minor version bump.
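A concrete example of “upgrades break APIs”: Kubernetes 1.22 removed the beta Ingress API, so every manifest still using it had to be rewritten, not just re-applied.

```yaml
# Before: accepted until Kubernetes 1.22 removed networking.k8s.io/v1beta1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 80
---
# After: networking.k8s.io/v1 requires pathType and restructures the backend
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```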
Dependency Risk
Cloud outages affect Kubernetes services. Azure incidents in 2025 took down managed Kubernetes clusters. When your orchestration layer depends on cloud infrastructure, you inherit all of its problems.
The Honest Answer
I genuinely don’t know how to feel about Kubernetes. It solves real problems for companies at Google’s scale. But most companies aren’t at Google’s scale.
The honest answer: if you’re running fewer than 20 services, Kubernetes is probably overkill. If you don’t have a dedicated platform team, Kubernetes will consume your engineering time. If you’re not multi-region, Kubernetes adds complexity without clear benefits.
Use systemd. Use Docker Compose. Use Nomad. Use whatever gets your code running with the least overhead.
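For comparison, the trivial web app from earlier as a Compose file — one file, no control plane (image name is a placeholder):

```yaml
# docker-compose.yml: the entire deployment story for a single-host service.
services:
  web:
    image: example/web:1.0
    ports:
      - "80:8080"
    restart: unless-stopped
```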
The goal is to ship software. Not to operate the most sophisticated orchestration platform. Kubernetes is a tool. It’s not a religion. And for most teams, it’s the wrong tool.