Let’s be honest: if you’re running more than three self-hosted services, you’ve probably hit the “subdomain spaghetti” problem. You’re juggling nginx configs, fighting Let’s Encrypt renewal scripts, and praying your location blocks don’t silently break a service after a reboot. I hit that wall hard—running Home Assistant, Vaultwarden, Ombi, and a custom LLM API on the same VPS—and spent two weekends debugging an nginx misconfiguration that was stripping X-Forwarded-For headers. That’s when I switched to Traefik v3.1—and haven’t touched nginx for routing since.

Traefik isn’t just “another reverse proxy.” It’s an auto-discovering, TLS-first, API-driven reverse proxy built for containers—and it treats your infrastructure like a living thing, not a static config file. As of May 2024, the Traefik GitHub repo has 63.2k stars, 12.4k forks, and over 600 open PRs—proof it’s actively maintained, not just another abandoned Docker-compose side project. More importantly: it just works when you’re running Docker, Docker Swarm, or Kubernetes. No more certbot renew && nginx -s reload cron jobs. No more ssl_certificate paths buried four layers deep in sites-enabled.

Here’s what this post covers—straight from my homelab, not the docs:

  • Why Traefik beats nginx for container-heavy self-hosting
  • A lean, production-ready Docker Compose setup (no bloat, no unused labels)
  • Real-world config gotchas—like why your /api routes 404 without traefik.http.routers.myapp.rule=Host(`app.example.com`) && PathPrefix(`/api`)
  • How it stacks up against Caddy, nginx, and HAProxy
  • Who actually needs this—and who’s better off sticking with nginx
  • My brutally honest verdict after 117 days of daily use

Let’s go.

Why Traefik Is the Reverse Proxy for Modern Self-Hosting

The core idea is simple: Traefik watches your orchestrator (Docker, in our case) and automatically generates routing rules, TLS certificates, and load-balancing logic based on container labels—not config files. You declare intent, not infrastructure.

For example, this single label on a container:

labels:
  - "traefik.http.routers.homeassistant.rule=Host(`ha.example.com`)"
  - "traefik.http.routers.homeassistant.tls=true"

…is enough for Traefik to:
✅ Create an HTTPS router
✅ Issue and renew a valid Let’s Encrypt certificate (via ACME)
✅ Redirect HTTP → HTTPS
✅ Route traffic only to that container’s port 8123
✅ Auto-reload configuration—no traefik reload command needed

No server {} blocks. No ssl_certificate paths. No proxy_pass http://127.0.0.1:8123;. None of it.

That’s the magic—and why it’s so compelling for self-hosters. You’re not managing a proxy. You’re annotating your services.
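To make that concrete, here's the kind of rule the /api gotcha above calls for. This is a sketch on a hypothetical myapp container—the router name and hostname are placeholders—showing that a Host rule alone doesn't scope a router to a sub-path; you have to combine matchers explicitly:

```yaml
labels:
  - "traefik.enable=true"
  # Only requests to app.example.com/api/* reach this container;
  # without the PathPrefix matcher, /api would 404 if another router owns the host
  - "traefik.http.routers.myapp-api.rule=Host(`app.example.com`) && PathPrefix(`/api`)"
  - "traefik.http.routers.myapp-api.tls=true"
  - "traefik.http.routers.myapp-api.tls.certresolver=letsencrypt"
```

Matchers combine with && and ||, so you can keep one router per path slice without touching a config file.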

Compare that to nginx: even with jwilder/nginx-proxy, you still need separate nginx-proxy-letsencrypt containers, custom vhost configs for non-standard paths, and zero native support for dynamic cert SANs. Caddy is close—but its automatic HTTPS only works on public domains by default, and its Docker integration is more “best effort” than “first-class.” HAProxy? Powerful, but it’s a network engineer’s tool—not something you casually tweak at 2 AM because your RSS reader stopped loading.

Traefik v3.1 also ships with a built-in dashboard (/dashboard/), Prometheus metrics out of the box, and a mature, label-driven Docker provider.

Installing Traefik with Docker Compose (The Minimal, Secure Way)

I run Traefik on a 2GB RAM, 2 vCPU Hetzner Cloud instance (CX21). It uses ~180MB RAM idle, peaks at ~320MB under full load (7 services + dashboard + metrics). CPU is negligible—0.1–0.3% on average. It’s light, especially compared to nginx + certbot + companion containers.

Here’s my production docker-compose.yml—stripped of comments, optimized, and battle-tested:

# docker-compose.traefik.yml
version: "3.8"

services:
  traefik:
    image: "traefik:v3.1"
    container_name: "traefik"
    command:
      - "--api.insecure=false"
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.email=you@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./letsencrypt:/letsencrypt"
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik-dashboard.rule=Host(`traefik.example.com`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))"
      - "traefik.http.routers.traefik-dashboard.service=api@internal"
      - "traefik.http.routers.traefik-dashboard.tls=true"
      - "traefik.http.routers.traefik-dashboard.tls.certresolver=letsencrypt"

Key notes:

  • --providers.docker.exposedbydefault=false is critical. Without it, every container gets a public route. Don’t skip this.
  • --api.insecure=false disables the insecure /api endpoint—so /dashboard only works over TLS.
  • Dashboard is exposed on traefik.example.com/dashboard, not :8080. Much safer.
  • ACME cert storage is a local bind mount (./letsencrypt/acme.json)—not a named volume, not /tmp. The file must be chmod 600; Traefik refuses to use an ACME store with more permissive modes.

Then run it:

mkdir -p letsencrypt && touch letsencrypt/acme.json && chmod 600 letsencrypt/acme.json && docker compose -f docker-compose.traefik.yml up -d

That’s it. Traefik is live. Check https://traefik.example.com/dashboard (after setting DNS) to see the live router list.
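Before assuming certs will issue, it's worth sanity-checking the ACME store, because Traefik rejects a world-readable acme.json. A quick check (assuming GNU coreutils `stat`; the path matches the compose file above):

```shell
# Ensure the ACME store exists and is private—Traefik rejects permissive modes
touch letsencrypt/acme.json
chmod 600 letsencrypt/acme.json
stat -c '%a' letsencrypt/acme.json   # prints 600
```

If that prints anything other than 600, fix it before Traefik tries its first ACME order.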

Configuring Your First Self-Hosted Service (e.g., Vaultwarden)

Let’s hook up Vaultwarden. Here’s the exact docker-compose.yml snippet I use—no fluff, no extra labels:

services:
  vaultwarden:
    image: "vaultwarden/server:1.31.0"
    container_name: "vaultwarden"
    environment:
      - WEBSOCKET_ENABLED=true
      - DOMAIN=https://vault.example.com
    volumes:
      - "./vw-data:/data"
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vaultwarden.rule=Host(`vault.example.com`)"
      - "traefik.http.routers.vaultwarden.tls=true"
      - "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"
      - "traefik.http.services.vaultwarden.loadbalancer.server.port=80"
      # No WebSocket middleware needed—Traefik proxies Upgrade requests natively

Notice:

  • traefik.http.services.vaultwarden.loadbalancer.server.port=80 tells Traefik to talk to Vaultwarden’s internal port 80—the port the official image listens on.
  • No WebSocket labels are required: Traefik forwards Upgrade and Connection headers out of the box, so real-time sync works as long as WEBSOCKET_ENABLED=true is set. (Don’t force Upgrade/Connection via a customrequestheaders middleware—that stamps the headers onto every request and breaks normal page loads.) If push notifications silently fail anyway, Traefik won’t warn you—you’ll just wonder why your iOS app doesn’t get updates.

Test it:

curl -I https://vault.example.com
# Should return 200 OK, not 502

If it fails, check docker logs traefik—90% of issues are DNS + Let’s Encrypt rate limits (use --certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory during testing).
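Switching to the staging CA is just one more flag in the traefik service's command list in docker-compose.traefik.yml:

```yaml
command:
  # Staging CA issues untrusted certs but has far higher rate limits—
  # remove this flag (and delete letsencrypt/acme.json) before going to production
  - "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
```

One trap: Traefik caches issued certs in acme.json, so if you forget to delete it after switching back, you'll keep serving the untrusted staging certificates.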

Traefik vs. nginx vs. Caddy: The Real-World Trade-Offs

Tool                     | Auto-TLS (Let’s Encrypt)      | Docker Auto-Discovery                  | WebSocket Support            | Learning Curve             | RAM Use (idle) | Best For
-------------------------|-------------------------------|----------------------------------------|------------------------------|----------------------------|----------------|-------------------------------------------
Traefik v3.1             | ✅ Built-in, zero-config      | ✅ Native, label-driven                | ✅ Native                    | Medium (labels + concepts) | ~180MB         | Docker-heavy self-hosters
nginx (with nginx-proxy) | ❌ Needs companion container  | ⚠️ Works, but brittle                  | ⚠️ Manual config per service | High (syntax + gotchas)    | ~15MB          | Static sites, legacy apps, low-resource VPS
Caddy v2.7               | ✅ Automatic, public or local | ⚠️ Requires caddy-docker-proxy plugin  | ✅ Native                    | Low (Caddyfile is clean)   | ~25MB          | Beginners, small setups, local dev
HAProxy                  | ❌ Manual cert injection      | ❌ None                                | ✅ Excellent                 | Very high                  | ~10MB          | High-traffic, non-containerized infra

Here’s the kicker: if you’re running mostly containers, Traefik’s consistency pays off fast. I migrated 8 services in under 90 minutes—each needed only 3–5 labels. With nginx, I’d have written 8 separate server {} blocks, debugged 3 proxy_set_header issues, and lost an hour to ssl_trusted_certificate path typos.

But—Traefik sucks for serving static files or complex regex routing. Don’t try to replicate nginx’s try_files logic with Traefik middlewares. It’s possible, but ugly. And its logging is… minimal by default. Access logging is opt-in (--accesslog), and request headers like X-Forwarded-For are redacted unless you switch the access log to JSON (--accesslog.format=json) and explicitly keep the header (--accesslog.fields.headers.names.X-Forwarded-For=keep)—and even then, it’s not the combined log format you’re used to from nginx.

Who Is Traefik Actually For? (Spoiler: Not Everyone)

Traefik shines if you:

  • Run 5+ Docker containers, especially with dynamic ports or frequent redeployments
  • Want zero-touch TLS renewal, no cron jobs, no certbot errors at 3 AM
  • Use Docker Compose or Swarm (Kubernetes users should use the Traefik CRDs)
  • Are comfortable with declarative config (labels > config files)
  • Need observability: /dashboard, /metrics, and built-in health checks

It’s overkill if you:

  • Run only 1–2 services (just use Caddy—5 minutes to set up, zero maintenance)
  • Host mostly PHP/Python apps not in containers (nginx or Caddy are simpler)
  • Need fine-grained request rewriting (e.g., rewrite ^/old/(.*)$ /new/$1 permanent;)
  • Are on a 512MB RAM VPS (Traefik will run, but it’s tight; Caddy or nginx are safer)

Also: Traefik doesn’t replace your firewall. The setup above binds 0.0.0.0:80/443, so every router is reachable from anywhere. If you want to restrict dashboard access, attach an IP allow-list middleware to the dashboard router (which adds a few more labels)—or bind the entrypoint to 127.0.0.1 and reach it over an SSH tunnel.
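For reference, here's what restricting the dashboard router by source IP might look like. Note that v3 renamed the middleware from ipWhiteList to ipAllowList, and the CIDR ranges below are placeholders for your own LAN or VPN:

```yaml
labels:
  # Placeholder ranges—replace with your actual LAN/VPN subnets
  - "traefik.http.middlewares.dash-allow.ipallowlist.sourcerange=192.168.1.0/24,127.0.0.1/32"
  - "traefik.http.routers.traefik-dashboard.middlewares=dash-allow"
```

Requests from outside those ranges get a 403 before they ever reach api@internal.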

The Verdict: Is Traefik Worth It in 2024?

Yes—but with caveats.

After 117 days of running Traefik v3.1 across 12 services (including a local Llama-3 API with 40GB models behind /llm/), here’s my unfiltered take:

Pros:

  • TLS renewal just works. Zero failures in 4 months.
  • The dashboard is genuinely useful—not just a gimmick. You see live request rates, cert expiration, and router status.
  • Middlewares are powerful: I use rateLimit, basicAuth, buffering, and compression across services—no per-app config.
  • It’s fast. Average request latency overhead: ~0.8ms (measured with ab -n 1000 -c 100 https://vault.example.com).

Cons:

  • Documentation is dense. The “Getting Started” guide assumes you know what a certresolver is.
  • No built-in Let’s Encrypt staging toggle in the CLI—you must change the CA server URL manually.
  • Debugging failed ACME challenges is painful. Logs say acme: error: 429 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-order :: too many certificates already issued—but won’t tell you which domain hit the limit.
  • The traefik.http.middlewares system encourages copy-paste config. I have 7 near-identical basicAuth middlewares—no templating, no inheritance.
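One partial workaround for that duplication, since middlewares defined on any container are addressable across the whole Docker provider: declare the middleware once and reference it by name (optionally qualified with @docker) from every other router. The names and hash below are placeholders:

```yaml
# On any one container—define the shared middleware once
# (note: $ in the htpasswd hash must be escaped as $$ in Compose)
labels:
  - "traefik.http.middlewares.shared-auth.basicauth.users=admin:$$apr1$$placeholder$$hash"

# On every other container—just reference it
labels:
  - "traefik.http.routers.ombi.middlewares=shared-auth@docker"
```

It's not templating, but it collapses seven copy-pasted basicAuth blocks into one definition plus one-line references.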

Hardware note: On my Raspberry Pi 5 (8GB RAM), Traefik uses ~220MB RAM and 5–8% CPU under load. It’s fine—but don’t expect it to run smoothly on a Pi Zero.

So—should you deploy it?
→ If you’re already using Docker Compose and have >3 services: yes, switch now. The time saved on TLS and config drift pays for itself in <1 week.
→ If you’re just starting out with self-hosting: try Caddy first. It’s simpler, lighter, and teaches the same core concepts. Move to Traefik when you add your 5th service.
→ If you’re running bare-metal Python/PHP apps: stick with nginx. Traefik’s Docker-centric model adds friction, not value.

Traefik isn’t perfect. But for the self-hosting crowd drowning in nginx configs and certbot cron jobs? It’s the most humane reverse proxy we’ve got. And after 117 days, I’m not going back.