HTTP/3 Winner: Caddy - A Modern Take on Reverse Proxying

December 2025

Retiring the King: Why I Switched from Nginx to Caddy


After years of running production services with Nginx, I decided to migrate to Caddy. This isn't another "Nginx vs Caddy" comparison post—there are plenty of those. Instead, this is a real-world experience report from someone who's dealt with upstream routing, SSL/TLS configurations, protocol upgrades, and the headaches that come with maintaining complex reverse proxy setups.

Nginx is great. It's battle-tested, performant, and powers a significant portion of the internet. But sometimes "great" comes with friction—manual certificate management, configuration sprawl, cryptic debugging, and the occasional routing mishap that takes hours to track down.
This is my journey to something... simpler.

The Breaking Point: Routing Mismatches and SSL Chaos

Running multiple containerized services behind a reverse proxy should be straightforward. In practice? Not always.
I was managing 14+ domains, each proxying to a different Docker container—everything from collaboration tools to password managers. The Nginx setup worked, but it had quirks:

Routing mismatches: Some domains occasionally served content from the wrong container
SSL complexity: Certbot was doing its job, but the dance between Nginx configs and Let's Encrypt renewals felt fragile
Cloudflare conflicts: Trying to run Nginx behind Cloudflare's proxy led to redirect loops and SSL handshake failures
Configuration sprawl: Files scattered across sites-available/, sites-enabled/, manual symlinks, and the perpetual question: "Which config is actually being used?"

The final straw? Spending an hour debugging why a domain was showing a 403 error, only to discover it was a missing IP whitelist entry buried in one of seventeen configuration files.
Time for a change.

Enter Caddy: The Modern Approach

Caddy markets itself as "the web server with automatic HTTPS." That's nice, but the real win is simplicity without sacrificing power.
Here's what caught my attention:
Automatic HTTPS
No more Certbot. No more cron jobs for renewals. Caddy gets certificates from Let's Encrypt automatically and renews them before they expire. One less thing to worry about.
HTTP/3 by default
HTTP/3 in Nginx means a recent build with the QUIC module enabled, plus manual listen directives. Caddy? Enabled out of the box.
Single configuration file
No more sites-available/sites-enabled confusion. Everything in one Caddyfile. Want to see all your services? Open one file.
Sane defaults
Security headers, modern TLS settings, compression—all configured sensibly without manual tuning.
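
The feature list above boils down to remarkably little configuration. A complete Caddyfile for one proxied service can look like this (domain and port are placeholders):

```
# Minimal Caddyfile: Caddy obtains and renews the certificate
# for this domain on its own, redirects HTTP to HTTPS, and
# serves HTTP/3 — none of it configured explicitly.
app.example.com {
    reverse_proxy localhost:8080
}
```

For issuance to work, DNS must point at this server and ports 80/443 must be reachable from the internet for the ACME challenge.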

Migration Strategy: One Service at a Time

I'm not reckless. Switching your entire production stack in one go is asking for trouble. Here's the approach I took:

Install Caddy alongside Nginx (different ports for testing)
Migrate critical services first (password manager, file storage, internal tools)
Test with IP restrictions to ensure access control works
Verify SSL certificate provisioning for each domain
Monitor logs for any routing issues
Disable Nginx only after everything checks out
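
Step 1—running Caddy next to Nginx on alternate ports—can be sketched with Caddy's global port options (the ports, domain, and backend here are examples, not my exact setup):

```
# /etc/caddy/Caddyfile — test instance while Nginx still owns 80/443
{
    http_port  8081
    https_port 8443
}

app.example.com {
    # ACME challenges need ports 80/443, so for side-by-side
    # testing use Caddy's internal CA instead of Let's Encrypt.
    tls internal
    reverse_proxy localhost:8080
}
```

Once Nginx is disabled, drop the global block and the tls internal line, and Caddy takes over 80/443 with real certificates.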

Configuration Comparison: The Difference is Real

# Nginx: Proxying a service with SSL, IP restrictions, and WebSockets

server {
    server_name app.example.com;
    
    location / {
        allow 1.2.3.4;
        allow 192.168.1.0/24;
        deny all;
        
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = app.example.com) {
        return 301 https://$host$request_uri;
    }
    
    listen 80;
    server_name app.example.com;
    return 404;
}
# Caddy: Same service, same features

app.example.com {
    @allowed {
        remote_ip 1.2.3.4 192.168.1.0/24
    }
    
    handle @allowed {
        reverse_proxy localhost:8080
    }
    
    handle {
        respond "Access Denied" 403
    }
}

That's it. Caddy handles:
- ✅ Automatic HTTPS (certificate acquisition & renewal)
- ✅ HTTP to HTTPS redirect
- ✅ WebSocket upgrades (automatic)
- ✅ Proper headers (automatic)
- ✅ HTTP/3 support

Real-World Wins: DRY Principle in Action

One pattern I had in Nginx: repeated IP allow/deny blocks across multiple services. Same IPs, copy-pasted everywhere. Maintenance nightmare.

Caddy Snippets

# Define once
(allowed_ips) {
    @allowed {
        remote_ip 1.2.3.4 192.168.1.0/24 172.16.0.0/12
    }
}

# Reuse everywhere
app1.example.com {
    import allowed_ips
    handle @allowed {
        reverse_proxy localhost:8001
    }
}

app2.example.com {
    import allowed_ips
    handle @allowed {
        reverse_proxy localhost:8002
    }
}

Change your IP? Update it once. Reload. Done.

Cloudflare Integration: The Surprise Pain Point

Here's something I didn't expect: Cloudflare + IP restrictions = complexity.
When you enable Cloudflare's proxy (orange cloud), every request arrives from Cloudflare's IPs rather than the visitor's actual address. For public services, this is fine. For IP-restricted internal tools? Problem.
The solution ended up being pragmatic:

Public services: Keep Cloudflare proxy enabled (DDoS protection, caching, WAF)
Internal tools: DNS only mode (direct connection to my server)
VPN services: Must be DNS only (VPN traffic can't pass through an HTTP proxy)

This hybrid approach gives me the best of both worlds.
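
For the services that stay behind the orange cloud, Caddy can be told which upstream proxies to trust so it recovers the real client address from X-Forwarded-For. A sketch using the servers global option (the CIDR ranges below are examples—pull the current list from Cloudflare, it changes):

```
{
    servers {
        # Treat connections from these ranges as trusted proxies,
        # so forwarded client IPs are honored in logs and matchers.
        trusted_proxies static 173.245.48.0/20 103.21.244.0/22 2400:cb00::/32
    }
}
```

Note that with trusted proxies configured, the client_ip matcher sees the forwarded address, while remote_ip still matches the direct peer—for Cloudflare-proxied sites you'd restrict on client_ip.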

graph LR
    A[User Request] --> B{Service Type}
    B -->|Public Services| C[Cloudflare Proxy]
    B -->|Internal Tool| D[Direct to Server]
    B -->|VPN| D
    C --> E[Caddy Reverse Proxy]
    D --> E
    E --> F[Containers]

Performance & Reliability

After running Caddy in production for several weeks:

Uptime: Solid. Zero unexpected downtime.
SSL renewals: Automatic, flawless. I don't even think about certificates anymore.
Memory usage: Comparable to Nginx, slightly lower in my setup.
Response times: No noticeable difference.
Configuration reloads: Instant, no dropped connections.

The killer feature? Zero-downtime config reloads. Update your Caddyfile, run systemctl reload caddy, and it applies changes without dropping active connections.

Lessons Learned

What went smoothly:
- Installation and initial setup
- Certificate provisioning
- Basic reverse proxying
- Configuration validation

What required attention:
- IPv6 support (my connection uses IPv6, had to adjust IP matchers)
- Cloudflare integration nuances
- Understanding trusted proxy configuration
- Log file permissions for custom access logs
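
The IPv6 adjustment was only a matter of widening the matchers—remote_ip accepts IPv4 and IPv6 ranges side by side (the addresses below are documentation examples):

```
# Inside a site block
@allowed {
    remote_ip 1.2.3.4 192.168.1.0/24 2001:db8::/32
}
```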

Would I do it again? Absolutely.

Useful Aliases

# Make your life easier

# Edit Caddyfile
alias caddyedit='sudo nano /etc/caddy/Caddyfile'

# Validate configuration
alias caddyvalidate='sudo caddy validate --config /etc/caddy/Caddyfile'

# Reload Caddy
alias caddyreload='sudo systemctl reload caddy'

# View logs
alias caddylogs='sudo journalctl -u caddy -f'

# Check status
alias caddystatus='sudo systemctl status caddy'

Resources:

Official Caddy docs: https://caddyserver.com/docs/
Caddyfile syntax: https://caddyserver.com/docs/caddyfile