NGINX sits between your users and your application. Before a single request reaches your code, NGINX is parsing configs, terminating SSL, rewriting URLs, and logging everything. All of that is overhead. All of that is complexity.

The Ingress-NGINX controller is being retired in March 2026. About 50% of cloud-native setups depend on it. No more fixes. No more patches. Migrating means rewriting ingress configs across hundreds of services. Staying means increasing security risk. Pick your poison.

The Reverse Proxy Tax

NGINX needs memory. A basic setup uses 50-100 MB of RAM. A production config with caching, rate limiting, and complex rewrites: 200-500 MB. Per instance. On top of whatever your application uses.

CPU overhead isn’t trivial either. SSL/TLS termination chews through cycles. Every request gets parsed, matched against location blocks, rewritten, logged. High-traffic sites run multiple NGINX instances. Each one consuming resources.

Teams run NGINX in front of applications that could serve traffic directly. The reverse proxy adds latency. It adds failure points. It adds operational burden. For what?

The Configuration Maze

nginx.conf starts simple. Then you add SSL. Then rate limiting. Then URL rewrites. Then custom headers. Then logging. Then caching. Before you know it, you’re maintaining 500+ lines of nginx.conf across multiple files.

The location block matching rules are arcane. Regex patterns that worked in testing break in production. Order matters. Context matters. One misplaced semicolon breaks everything.

Production configs average 300-800 lines. Teams spend more time debugging nginx.conf than fixing application bugs. I’ve watched engineers spend days tracing why a rewrite rule wasn’t matching. Days. For a reverse proxy config.

The SSL Termination Cost

NGINX terminates SSL/TLS. That means decrypting every incoming request and encrypting every response. CPU intensive. Memory intensive. For high-traffic sites, this is expensive.

You could terminate SSL at the load balancer. Or at the application. But teams default to NGINX because it’s “the standard way.” Standard doesn’t mean optimal.

Modern applications can handle TLS directly. Go, Rust, and other languages have excellent TLS libraries. The overhead is minimal. But teams still route through NGINX “just in case.”
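Here’s roughly what that looks like in Go, a minimal sketch using only the standard library. The cert.pem and key.pem paths are placeholders, not from any real setup:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "served directly, no proxy")
        })

        // Terminate TLS in the application itself. cert.pem and key.pem
        // are placeholder paths to your certificate and private key.
        // Binding :443 typically needs elevated privileges or capabilities.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }

That’s the whole thing. No location blocks, no proxy_pass, no second process to monitor.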

The Retirement Problem

The retirement is official: the Ingress-NGINX controller stops being maintained in March 2026 as the Kubernetes project moves to Gateway API. No more fixes, no more patches, no more security updates.

About 50% of cloud-native setups depend on Ingress-NGINX. That’s thousands of clusters. Migrating means:

  • Rewriting ingress configs across hundreds of services
  • Testing new Gateway API implementations
  • Dealing with breaking changes
  • Potential downtime during migration

Or you stay on Ingress-NGINX and accept increasing security risk. No patches means vulnerabilities accumulate. CVEs get published. Your cluster stays vulnerable.

This isn’t a theoretical problem. It’s happening now. Teams are scrambling to migrate or accept the risk.

The Observability Overhead

NGINX logs everything. Access logs. Error logs. Custom logs. High-traffic sites generate gigabytes of logs per day. Storage costs add up. Parsing costs add up.

Teams add log aggregation. ELK stacks. Loki. CloudWatch. More infrastructure. More complexity. More cost. All to handle logs from a reverse proxy.

Your application probably logs too. So you’re parsing and storing logs twice. Once from NGINX. Once from your app. Duplicate effort. Duplicate cost.

Real Companies Removing Layers

I know teams that removed NGINX entirely. Their applications serve traffic directly. TLS termination happens at the application level. Simpler. Faster. Cheaper.

One team cut infrastructure costs by 30% after removing NGINX. Fewer instances to run. Less memory. Less CPU. Less operational overhead.

Another team moved from NGINX to Caddy. Automatic HTTPS. Simpler config. Lower resource usage. They’re happy with the switch.

The pattern: add NGINX “because everyone does it.” Discover the overhead. Remove it when possible. But nobody writes blog posts about removing NGINX because it’s embarrassing to admit you didn’t need it.

Who Actually Needs It

NGINX makes sense if you need:

  • Complex URL rewriting across multiple backends
  • Advanced rate limiting and DDoS protection
  • Static file serving with caching
  • Load balancing across multiple application instances
  • Legacy application support (apps that can’t handle TLS directly)

Most applications don’t need these. Modern applications can handle TLS. They can serve static files. They can implement rate limiting. They don’t need a reverse proxy layer.
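To make “they can implement rate limiting” concrete, here’s a hedged sketch in Go using golang.org/x/time/rate: one global limiter plus static file serving, no proxy in front. The limits (100 requests/second, burst of 20) and the ./static directory are illustrative assumptions; a real deployment would likely keep one limiter per client IP:

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/time/rate"
    )

    // rateLimit is a hypothetical middleware: a single global limiter
    // allowing 100 requests/second with bursts of 20.
    func rateLimit(next http.Handler) http.Handler {
        limiter := rate.NewLimiter(100, 20)
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !limiter.Allow() {
                http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        // Static files without NGINX. ./static is a placeholder directory.
        mux.Handle("/static/", http.StripPrefix("/static/",
            http.FileServer(http.Dir("./static"))))

        log.Fatal(http.ListenAndServe(":8080", rateLimit(mux)))
    }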

If you’re running a simple API or web application, serve it directly. Use your application framework’s built-in TLS support. Skip the reverse proxy.
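If automatic HTTPS is what keeps you on a proxy, Go can do that directly too. A minimal sketch with golang.org/x/crypto/acme/autocert; example.com and the certs cache directory are placeholders:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "automatic HTTPS, no proxy")
        })

        // autocert obtains and renews Let's Encrypt certificates.
        // example.com is a placeholder for your real domain.
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"),
            Cache:      autocert.DirCache("certs"),
        }

        srv := &http.Server{
            Addr:      ":443",
            Handler:   mux,
            TLSConfig: m.TLSConfig(),
        }

        // Port 80 answers the ACME HTTP-01 challenge and redirects to HTTPS.
        go http.ListenAndServe(":80", m.HTTPHandler(nil))

        log.Fatal(srv.ListenAndServeTLS("", ""))
    }

That’s much of what Caddy’s automatic HTTPS gives you, minus the extra process.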

If you need load balancing, use a cloud load balancer or a simpler proxy like Caddy or Traefik. They’re easier to configure. They use fewer resources.

The Honest Answer

I genuinely don’t know how to feel about NGINX. It solves real problems for complex setups. But most setups aren’t complex.

The honest answer: if your application can handle TLS and serve traffic directly, skip NGINX. If you need simple load balancing, use a cloud load balancer or Caddy. If you need complex routing, then NGINX might make sense. But most teams don’t need complex routing.

The goal is to serve requests, not to operate the most sophisticated reverse proxy configuration. NGINX is a tool. It’s not a requirement. And for most applications, it’s unnecessary overhead.

With Ingress-NGINX being retired, now is a good time to ask: do you actually need a reverse proxy? Or can your application serve traffic directly?

Most applications can. Most teams just don’t realize it.