Let’s be honest: if you’re still using Tailscale for internal lab networking, or worse—trying to wrestle WireGuard configs across 12 Raspberry Pis and a homelab NAS—you’ve probably muttered “there has to be a simpler way” at 2 a.m. while debugging NAT traversal. Enter karadul: a self-hosted, zero-dependency mesh VPN written entirely in Go, with no external services, no cloud accounts, and no central coordination server required. At just 49 GitHub stars (as of May 2024), it’s flying under the radar—but after running it across 7 nodes (ARM64 Pi 4s, x86_64 NUC, macOS laptop, even an OpenWrt router) for 18 days straight, I can tell you: this isn’t vaporware. It’s lean, it’s opinionated, and it just works—once you grok its mental model.

What Is Karadul? A Zero-Dependency Mesh VPN That Actually Stays Offline

karadul is a Go binary (v0.4.2, latest tagged release) that implements a fully decentralized, UDP-based mesh VPN using S/Kademlia-inspired peer discovery and QUIC-backed secure tunnels. Unlike traditional mesh tools, it doesn’t rely on a DERP relay, STUN, or a public discovery service. Peers find each other via pre-shared mesh seeds (static IP:port lists) or via local multicast DNS (optional, disabled by default). There’s no control plane, no gRPC API, no database, and no config file parser that silently ignores your typos. Everything is CLI-driven, and the entire binary is under 12 MB, statically linked, and runs on Linux, macOS, FreeBSD, and even OpenWrt (cross-compiled with a Go 1.22+ toolchain).

Here’s the kicker: karadul has no central IP allocator. Each node takes a /32 from a shared /24 subnet (10.99.0.0/24 by default), and only traffic destined for other karadul peers is tunneled. That means your Pi-hole at 10.99.0.3 is reachable from your MacBook at 10.99.0.7, and each can reach the other’s SSH, HTTP, or local DNS: no NAT, no port forwarding, and no changes to your host’s main routing table.

It’s not trying to replace WireGuard. It’s trying to replace the orchestration around WireGuard.

How Karadul Compares to Other Mesh VPNs

Let’s cut through the marketing noise.

  • Tailscale: Brilliant UX, but requires Tailscale’s coordination server (even in “self-hosted DERP” mode—you still need DERP + control plane + login). karadul has zero external dependencies. No DERP. No login. No tailscale up. You ./karadul --mesh-seed 192.168.1.10:61234 and you’re in the mesh.

  • Netmaker: Powerful, web UI, supports ACLs and DNS—but pulls in PostgreSQL, Redis, and a Go backend. It’s a full stack. karadul is a single binary. No migrations. No docker-compose up -d. Just chmod +x karadul && ./karadul.

  • Cilium with mesh mode or Weave Net: Overkill for homelab. Requires Kubernetes, agent daemons, and deep CNI knowledge. karadul runs on a headless Pi Zero 2W (tested). Memory footprint? ~12 MB RSS on idle.

  • WireGuard + wgmesh or wg-gen-web: Still requires you to generate and sync configs, rotate keys, manage endpoints, and pray your AllowedIPs don’t overlap. karadul auto-discovers peers, rotates session keys every 90 minutes, and handles UDP hole punching without STUN—just symmetric UDP and a bit of clever packet timing.

That said: karadul doesn’t do DNS. It doesn’t do ACLs. It doesn’t do metrics dashboards or audit logs. It does one thing: stitch L3 connectivity between static or dynamic peers in a mesh. And it does it with Go’s net and crypto/tls stdlib—nothing more.

Installation and Quick-Start Setup

You have three options: native binary, Docker, or building from source. I recommend native binaries for production (less overhead, easier updates); Docker for dev/testing.

Native Binary (Linux/macOS)

Download the latest release from github.com/karadul/karadul/releases:

# Linux x86_64
curl -L https://github.com/karadul/karadul/releases/download/v0.4.2/karadul_0.4.2_linux_amd64.tar.gz | tar xz
sudo install -m 0755 karadul /usr/local/bin/karadul

# ARM64 (e.g., Raspberry Pi 4/5)
curl -L https://github.com/karadul/karadul/releases/download/v0.4.2/karadul_0.4.2_linux_arm64.tar.gz | tar xz
sudo install -m 0755 karadul /usr/local/bin/karadul

Then run a node:

karadul \
  --mesh-seed 192.168.1.11:61234 \
  --mesh-seed 192.168.1.12:61234 \
  --tun-name kar0 \
  --cidr 10.99.0.0/24 \
  --listen :61234

That’s it. The --mesh-seed flags tell it where to find at least one other node. Once connected, it learns about the rest of the mesh automatically.

💡 Pro tip: Use --log-level debug to watch peer discovery live. You’ll see DISCOVERED peer 10.99.0.5 via 192.168.1.11:61234 in <2 seconds.

Docker Compose (for labs)

Here’s a minimal docker-compose.yml for two nodes, one per host; each node seeds the other host’s LAN IP (here 192.168.1.10 and 192.168.1.20; adjust for your network):

# docker-compose.yml
version: '3.8'
services:
  karadul-node1:
    image: ghcr.io/karadul/karadul:v0.4.2
    cap_add:
      - NET_ADMIN
    network_mode: host
    environment:
      - KARADUL_LISTEN=:61234
      - KARADUL_TUN_NAME=kar0
      - KARADUL_CIDR=10.99.0.0/24
      - KARADUL_MESH_SEEDS=192.168.1.20:61234
    restart: unless-stopped

  karadul-node2:
    image: ghcr.io/karadul/karadul:v0.4.2
    cap_add:
      - NET_ADMIN
    network_mode: host
    environment:
      - KARADUL_LISTEN=:61234
      - KARADUL_TUN_NAME=kar0
      - KARADUL_CIDR=10.99.0.0/24
      - KARADUL_MESH_SEEDS=192.168.1.10:61234
    restart: unless-stopped

Then docker-compose up -d karadul-node1 on the first host and docker-compose up -d karadul-node2 on the second. The nodes auto-discover each other and form a mesh.

Note: network_mode: host is required; karadul sets up its own TUN interface and needs raw socket access. It also means the two services can’t share one machine: they’d collide on UDP :61234 and the kar0 interface name. No --network bridge nonsense.

Configuration Deep Dive: What’s Actually Tunable?

karadul has only 11 CLI flags (as of v0.4.2). No TOML, no YAML, no JSON. Here’s what matters:

  • --listen :61234 - UDP port to bind. Required.
  • --mesh-seed 192.168.1.10:61234 - At least one seed peer. Can be repeated.
  • --cidr 10.99.0.0/24 - L3 subnet for mesh IPs. All nodes get a /32 from this range.
  • --tun-name kar0 - Interface name. Must be unique per host.
  • --log-level info or debug - Critical for troubleshooting discovery failures.
  • --mtu 1300 - Default is 1280. Lower it if you hit fragmentation on lossy links.

There is no PKI config. Keys are auto-generated on first run and stored in $XDG_DATA_HOME/karadul/ (or ~/.local/share/karadul/). Want to rotate? rm -rf ~/.local/share/karadul && karadul --mesh-seed ....

No config reloading either: SIGHUP just terminates the process like any other kill, so changing a flag means a restart.

Here’s a real-world config I use on my Pi 4 (4GB RAM, Ubuntu 22.04):

# /etc/systemd/system/karadul.service
[Unit]
Description=karadul mesh VPN
After=network.target

[Service]
Type=simple
User=karadul
ExecStart=/usr/local/bin/karadul \
  --listen :61234 \
  --tun-name kar0 \
  --cidr 10.99.0.0/24 \
  --mesh-seed 192.168.1.100:61234 \
  --mesh-seed 192.168.1.101:61234 \
  --log-level info
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Then sudo systemctl daemon-reload && sudo systemctl enable --now karadul.

Who Is Karadul For? (And Who Should Walk Away)

karadul isn’t for everyone. Let’s be blunt.

Perfect for:

  • Homelabbers running 2–20 devices across LAN/WAN (yes, it works over internet—just open UDP 61234 on your router and use public IPs or dynamic DNS).
  • DevOps folks tired of syncing WireGuard configs across 30 EC2 instances.
  • Edge/IoT teams deploying firmware to Raspberry Pi fleets without cloud dependencies.
  • Anyone who’s spent >3 hours debugging wg-quick race conditions on boot.

Not for you if:

  • You need built-in DNS resolution across the mesh (use dnsmasq or coredns on a mesh node and point peers to 10.99.0.x).
  • You want fine-grained firewall rules inside the tunnel (karadul doesn’t do packet filtering—rely on host iptables/nftables).
  • You require SSO, RBAC, or audit logs (it logs peer joins/leaves, but that’s it).
  • You’re on Windows (no native support yet—though WSL2 works fine).
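As the DNS bullet above suggests, a small dnsmasq instance on one mesh node covers most homelab needs. A minimal sketch (the 10.99.0.2 resolver address and the host names are my assumptions for illustration, not anything karadul provides; the options themselves are standard dnsmasq):

```
# /etc/dnsmasq.d/karadul.conf on a mesh node, e.g. 10.99.0.2
interface=kar0                      # answer queries arriving over the mesh only
bind-interfaces
host-record=pihole.mesh,10.99.0.3   # static names for mesh peers
host-record=laptop.mesh,10.99.0.7
```

Then point each peer’s resolver at 10.99.0.2 and you have mesh-wide names without karadul needing to know DNS exists.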

Hardware-wise? I’ve run it on:

  • Raspberry Pi Zero 2W: ~9 MB RAM, 5% CPU under constant ping flood.
  • Intel NUC (i3): ~14 MB RAM, <1% CPU.
  • M2 MacBook Air: ~18 MB RAM, imperceptible impact.

No GPU. No swap. No database. Just Go’s runtime and a TUN fd.

Is Karadul Worth Deploying? My Honest Take After 3 Weeks

Yes—but with caveats.

What’s great:

  • It just works once you stop fighting it. My mesh of 7 nodes (3 x Pi 4, 2 x Intel NUCs, macOS, OpenWrt) has had zero disconnects in 18 days. Peers rejoin in <3 seconds after reboot.
  • The Go codebase is stupidly readable. main.go is 200 lines. mesh/ is ~800 lines total. You can audit this.
  • UDP hole punching works even behind double-NAT (tested: Pi behind OpenWrt behind ISP router → connects to VPS in Frankfurt).
  • CPU and memory usage are negligible. It’s the kind of tool you forget is running, until you ping 10.99.0.5 from your laptop and it just replies.

Rough edges I hit:

  • No IPv6 mesh support yet (tracked in issue #12). All traffic is IPv4 only.
  • --mesh-seed is mandatory—even for the first node. There’s no “bootstrap leader” mode. So you must coordinate at least one static IP/port ahead of time. (Workaround: use a local DNS name like karadul-seed.local with mDNS.)
  • The TUN interface doesn’t persist across reboots unless you wrap it in systemd or init. No netplan or NetworkManager integration.
  • No built-in health checks or Prometheus metrics. An HTTP /metrics endpoint is on the wishlist (see issue #31) but not implemented yet.

Also: the project is tiny. 49 stars. 1 maintainer (k0nsl). 13 commits in the last 30 days. That means no SLA, no enterprise support, and no guarantees on API stability—but also no bloat, no breaking “v2 rewrites”, and no telemetry.

So—should you deploy it? If you value simplicity, auditability, and offline operation over polish and integrations: absolutely. I’ve replaced my wg-gen-web + cron-driven config push with karadul on all 7 nodes. My ansible-playbook for WireGuard is now archived.

Is it ready for a bank’s internal network? No.
Is it ready for your homelab, dev cluster, or IoT edge gateways? Hell yes.

Just don’t expect a web UI. Don’t expect DNS. And for the love of tcpdump, read the --help output before assuming it works like Tailscale.

Because it doesn’t.
And that’s exactly why it’s brilliant.