Let’s be honest: if you’re still using Readarr for book automation, you’re probably running into the same wall I hit six months ago — sluggish performance, brittle indexer integrations, and that sinking feeling every time the UI freezes while scanning 10,000+ books. Then I found bindery: a lean, Go-built, Usenet-first book downloader that doesn’t pretend to be a media server — it just downloads and organizes. No transcodes. No duplicate metadata bloat. No Electron renderer chewing 1.2 GB of RAM. Just Go, HTTP, and a clean API. At 161 stars (and climbing) on GitHub, vavallee/bindery isn’t just another fork — it’s a deliberate reboot of book automation for people who actually run Usenet stacks.
What Is Bindery — and Why Does It Feel Like a Breath of Fresh Air?
Bindery is an automated book download manager for Usenet, written in Go and designed from day one to plug into the existing self-hosted stack: SABnzbd for downloading, your indexer(s) (like NZBGeek, Drunken Slug, or NZBPlanet), and your existing library layout (calibre-style or just author/title folders). It monitors authors, tracks series, triggers searches when new releases drop, sends NZBs to SABnzbd via API, and moves completed downloads into your library — all with zero database bloat or background worker sprawl.
Unlike Readarr (v0.3.12.2491, last stable release as of May 2024), bindery doesn’t maintain its own SQLite DB and a full media library indexer. It doesn’t parse EPUBs or generate thumbnails. It doesn’t do “book matching” via fuzzy author name heuristics — instead, it leans on NZB title parsing + indexer-provided metadata, which is more reliable for Usenet (where filenames are the metadata). That means fewer false positives, faster scans, and — critically — no 15-minute startup lag because it’s not rebuilding a 30k-book index on boot.
Here’s the kicker: bindery is stateless by design. Your config is YAML. Your history lives in SABnzbd. Your library stays on disk exactly how you want it. There’s no “bindery library scanner” daemon chewing CPU at 3 a.m. — just a lightweight HTTP server (~12MB binary, ~35MB RSS on idle) and a cron-triggered or webhook-driven sync loop.
Installation & Docker Deployment (The Practical Way)
I’ve run bindery on three different setups: an old NUC (i5-6200U, 16GB RAM), a Hetzner CPX21 VM (2 vCPU, 4GB RAM), and a Raspberry Pi 4 (8GB, USB 3.0 SSD). It works fine on all — but the Pi needs GOGC=20 in the env to avoid GC pauses during large author scans.
The cleanest install is Docker Compose, especially since bindery expects SABnzbd to be reachable over HTTP (no bundled downloader). Here’s a minimal, production-ready docker-compose.yml I’ve been running for 3 weeks:
```yaml
version: "3.8"

services:
  bindery:
    image: ghcr.io/vavallee/bindery:v0.4.2
    container_name: bindery
    restart: unless-stopped
    environment:
      - BINDERY_CONFIG_PATH=/config/bindery.yml
      - TZ=America/Los_Angeles
    volumes:
      - ./config:/config
      - /path/to/your/books:/books
      - /path/to/your/downloads:/downloads
    ports:
      - "7878:7878"
    depends_on:
      - sabnzbd
    networks:
      - media

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:4.3.2
    container_name: sabnzbd
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=America/Los_Angeles
    volumes:
      - ./sab-config:/config
      - /path/to/your/downloads:/downloads
      - /path/to/your/incomplete:/incomplete
    ports:
      - "8080:8080"
    networks:
      - media

# Compose requires the shared network to be declared at the top level,
# or both services will fail to start with an "undefined network" error.
networks:
  media:
    driver: bridge
```
Note: bindery does not include SABnzbd — and that’s intentional. You must point it at your existing SAB instance (or a fresh one, as above). That’s a hard dependency — no “bundled downloader” mode.
Once up, hit http://localhost:7878 and you’ll get a lightweight, no-JS settings page (yes, it renders in Lynx). No React. No WebSockets. Just HTTP forms and YAML config reloads.
Configuring Bindery: Authors, Indexers, and Real-World Tweaks
Your bindery.yml lives in the mounted /config volume. Here’s the config I use — stripped down, battle-tested, and annotated with what actually matters:
```yaml
# bindery.yml
server:
  port: 7878
  host: "0.0.0.0"
  base_url: "http://bindery.local:7878"

sabnzbd:
  url: "http://sabnzbd:8080"
  api_key: "your-sab-api-key-here"
  category: "books"

indexers:
  - name: "nzbgeek"
    type: "nzbgeek"
    url: "https://api.nzbgeek.info"
    api_key: "your-nzbgeek-key"
    enabled: true

library:
  root: "/books"
  # Bindery does *not* rename files — it moves them. So extension must match what SAB extracts.
  format: "{{ .Author }}/{{ .Title }} ({{ .Year }})/{{ .Title }} - {{ .Author }}{{ .Extension }}"

monitors:
  - author: "Robin Hobb"
    search_on_startup: true
    minimum_age_days: 1
  - author: "N.K. Jemisin"
    search_on_startup: false
    minimum_age_days: 3

# Optional: webhook-triggered search (e.g., from a cron or ntfy)
webhooks:
  - name: "daily-author-scan"
    endpoint: "/api/v1/search/authors"
    method: "POST"
    enabled: true
```
A few real-world notes:
- `minimum_age_days`: prevents bindery from grabbing pre-releases or mis-tagged test posts. I run this at 1 for big authors, 3 for niche ones.
- `library.format`: this is only used for the final move — bindery does not rename files inside the archive. If your SAB extracts to `book.epub`, that’s what lands in `/books/Author/Title/`. No transcoding. No re-encoding.
- Indexer auth: bindery supports NZBGeek, Drunken Slug, NZBPlanet, and Binsearch out of the box. It won’t work with private trackers that require cookies or login pages — it’s NZB-first, not torrent-first.
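The `library.format` string is Go `text/template` syntax. Whether bindery renders it exactly this way internally is my assumption, but you can preview what a given format string produces with a few lines of Go (the `Book` struct fields simply mirror the placeholders used in the config):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Book mirrors the fields referenced by library.format above.
// The struct bindery uses internally is an assumption on my part.
type Book struct {
	Author, Title, Extension string
	Year                     int
}

// renderPath executes a library.format-style template against a Book.
func renderPath(format string, b Book) (string, error) {
	t, err := template.New("path").Parse(format)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := t.Execute(&sb, b); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	format := "{{ .Author }}/{{ .Title }} ({{ .Year }})/{{ .Title }} - {{ .Author }}{{ .Extension }}"
	p, _ := renderPath(format, Book{
		Author: "Robin Hobb", Title: "Assassin's Apprentice",
		Year: 1995, Extension: ".epub",
	})
	fmt.Println(p)
	// Robin Hobb/Assassin's Apprentice (1995)/Assassin's Apprentice - Robin Hobb.epub
}
```

Handy for sanity-checking a new format string before pointing it at a 10,000-book library.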
Bindery vs. Readarr vs. Manual Scripts: Where Does It Fit?
If you’re asking “why not just use Readarr?”, let’s be real: Readarr (v0.3.12.2491) is functional, but it runs on .NET 6, its UI is slow, and its Usenet support feels like a bolt-on afterthought. It tries to be Sonarr for books — but Sonarr was built for TV, not Usenet’s filename-driven reality. Bindery doesn’t try to be anything else.
Here’s how they compare on real metrics (measured on my NUC, idle + scanning 32 authors):
| Metric | bindery v0.4.2 | Readarr v0.3.12.2491 |
|---|---|---|
| RAM usage (idle) | ~35 MB | ~480 MB |
| CPU (scan 32 authors) | <2% avg, 1.2s max | 22% avg, 48s max |
| First boot time | <800ms | 18.4s |
| Config reload | `curl -X POST http://localhost:7878/api/v1/config/reload` | Requires full restart |
| Indexer timeout handling | Graceful retry + log warn | Often hangs UI; logs show `Indexer timeout` then silence |
And if you’ve been rolling your own Python/bash + nzbget + inotifywait scripts? Bindery replaces all of that — but with proper concurrency, retry logic, and a real API. I killed my 3-year-old bookwatch.sh the day I got bindery’s /api/v1/monitor/author endpoint working.
That said: bindery does not do RSS polling. It’s webhook- or cron-driven. No built-in scheduler. You will need a curl cron job or systemd timer to trigger /api/v1/search/authors daily. I use this:
```
# /etc/cron.d/bindery-daily
0 4 * * * root curl -s -X POST http://localhost:7878/api/v1/search/authors > /dev/null 2>&1
```
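If you’d rather use the systemd timer route mentioned above, a sketch of what that could look like (the unit names are my own; adjust paths and schedule to taste):

```ini
# /etc/systemd/system/bindery-search.service
[Unit]
Description=Trigger bindery author search
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -s -X POST http://localhost:7878/api/v1/search/authors

# /etc/systemd/system/bindery-search.timer
[Unit]
Description=Daily bindery author search

[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now bindery-search.timer`; `Persistent=true` catches up on a missed run if the box was asleep at 4 a.m., which plain cron won’t do.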
Who Is Bindery For? (Spoiler: Not Everyone)
Let’s cut the ambiguity: bindery is for Usenet-first, SABnzbd-native, CLI-comfortable self-hosters. It is not for:
- People who want automatic Calibre integration (no Calibre API support — yet).
- Those relying on torrent indexers (no torrent support — `type: "torznab"` is not implemented).
- Users who need OPDS feeds or web-based book readers (bindery serves zero HTML pages beyond its config UI).
- Anyone expecting “one-click setup” — you will edit YAML and restart containers.
But if you:
- Already run SABnzbd + NZB indexers,
- Keep your books in flat or author-nested folders (not a database),
- Prefer `curl` and `grep` over clicking through 7 UI tabs,
- Run your stack on low-RAM hardware (Pi, old NUC, LXC),
- And actually care about startup time and memory footprint…
…then bindery is the tool you’ve been waiting for.
Hardware-wise: it runs fine on 2GB RAM, but I recommend 4GB+ if you’re also running SABnzbd, Jackett, and maybe a lightweight reverse proxy. CPU is irrelevant — it’s I/O-bound on indexer HTTP calls and SAB API pings. Disk? Zero overhead — it doesn’t store copies or thumbnails.
The Honest Verdict: Is It Ready? What’s Rough?
I’ve run bindery v0.4.2 in production for 22 days. I monitor 47 authors across 3 indexers. It’s downloaded 112 books, with exactly 2 false positives (both because an indexer mislabeled “Robin Hobb” as “Robyn Hobb” — the kind of mismatch a config-level author alias would catch, though bindery doesn’t support aliases yet).
Pros:
- Blazing fast. Seriously — you’ll notice the difference the first time you hit “Scan Authors” and it’s done before your browser finishes loading the spinner.
- Config-driven, not UI-driven. Version-control your `bindery.yml`. Diff changes. Roll back in seconds.
- Actively developed: 17 commits in the last 14 days (as of June 2024), all in Go, all with tests.
- Logs are clean, structured, and actually helpful: `level=info msg="Found 3 results for 'N.K. Jemisin'"`.
Cons:
- No UI for managing monitors — it’s all YAML or `/api/v1/monitor/*` endpoints. Not terrible, but not beginner-friendly.
- No built-in health checks or metrics endpoint (`/metrics` would be nice for Prometheus).
- No series tracking beyond “author + title match” — if you want “The Broken Earth Trilogy” as a unit, you’ll need to monitor each book individually (or write a wrapper).
- The API docs live in the GitHub README only — no Swagger, no interactive explorer. You will `curl -v` your way around at first.
So — is it worth deploying? Yes, if you’re in the target audience. It’s not polished like Readarr’s UI, but it’s more reliable for Usenet. It’s not feature-rich like a full library manager — but it doesn’t need to be. It does one thing, extremely well: watch authors, search indexers, send NZBs, move files.
The TL;DR:
✅ Use bindery if you want Usenet book automation that’s fast, lean, and fits your existing stack.
❌ Skip it if you need torrent support, Calibre sync, or click-to-run.
I’m keeping it. And I’ve already started writing a small CLI tool (binderyctl) to wrap the API — because when something’s this clean, the next step isn’t adding features… it’s making it easier to script.
You can grab the source, star the repo, and file issues at https://github.com/vavallee/bindery — v0.4.2 is current as of June 12, 2024. No corporate backing. No telemetry. Just Go, Git, and a very clear README. That’s the kind of self-hosted tool I want in my stack.