Let’s be honest: if you’re running WeChat bots in production—and especially if you’re bridging them to Slack, Discord, or Notion—you’re probably stitching together 3–4 different tools, debugging webhook timeouts at 2 a.m., and praying your wechaty instance doesn’t silently die after a WeChat login refresh. I’ve been there. I ran a WeChat-to-DingTalk relay for a client using a custom Go service + Redis + a fragile wechaty-puppet-wechat4u fork. It worked—until it didn’t. Then I found openilink-hub, and honestly? It felt like swapping a duct-taped bicycle for a Tesla. Not because it’s perfect—but because it exists as a single, coherent, self-hosted platform for WeChat bot orchestration and cross-platform integration. With 544 GitHub stars (as of May 2024), written in Go, actively maintained, and already powering real workflows across China, SEA, and EU dev teams—it’s the first serious open-source alternative to WeCom’s closed ecosystem and the fragmented DIY stacks we’ve all cobbled together.
What Is openilink-hub? (Not Just “Another WeChat Bot Tool”)
openilink-hub isn’t a bot—it’s the control plane. Think of it like Traefik for messaging: it sits between your WeChat accounts (via official or community puppet providers), your business logic (apps), and 20+ external platforms (Lark, Slack, Discord, GitHub, Notion, DingTalk, Jira, Webhooks, and more). It handles auth, message routing, retry logic, rate-limiting, app lifecycle, and even built-in AI tooling (RAG, LLM orchestration, prompt chaining). Crucially, it ships with an app marketplace—not just a plugin system, but a UI-driven, versioned, installable ecosystem of pre-built integrations. You don’t write YAML configs to link GitHub PRs to WeChat groups—you click “Install”, select your repo, assign a WeChat group, and you’re done.
It’s built in Go (v1.22+), compiles to static binaries, and is explicitly designed for self-hosting. No vendor lock-in. No SaaS billing tiers. No “contact sales” page. The core repo is github.com/openilink/openilink-hub, and it’s MIT-licensed. As of v0.8.3 (latest stable), it supports 7 language SDKs (Go, Python, Node.js, Rust, Java, C#, and Deno) — meaning you can write your own app logic in whatever you’re comfortable with, then register it with the hub via HTTP or gRPC.
Installation: Docker-First, But Binary-Ready Too
I tested three install paths: Docker Compose (recommended), bare-metal binary, and Kubernetes (via Helm chart, still alpha). For most folks, Docker Compose is the sweet spot — fast, reproducible, and handles dependencies (Redis, PostgreSQL) cleanly.
Here’s my production-lean docker-compose.yml, tested on Ubuntu 22.04 with Docker 24.0.7:
version: '3.8'
services:
openilink-hub:
image: openilink/hub:v0.8.3
restart: unless-stopped
ports:
- "3000:3000" # Web UI & API
- "8080:8080" # Webhook ingress (optional, for public endpoints)
environment:
- OPENILINK_ENV=production
- OPENILINK_DATABASE_URL=postgresql://openilink:openilink@db:5432/openilink?sslmode=disable
- OPENILINK_REDIS_URL=redis://redis:6379/0
- OPENILINK_JWT_SECRET=change-this-in-prod-32-byte-key-here-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- OPENILINK_WECHAT_PUPPET=wechaty-puppet-wechat4u # or wechaty-puppet-padlocal, etc.
- OPENILINK_LOG_LEVEL=info
depends_on:
- db
- redis
volumes:
- ./data:/app/data # stores puppet session files, uploads, logs
db:
image: postgres:15-alpine
restart: unless-stopped
environment:
- POSTGRES_DB=openilink
- POSTGRES_USER=openilink
- POSTGRES_PASSWORD=openilink
volumes:
- ./postgres-data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
restart: unless-stopped
command: redis-server --save 60 1 --loglevel warning
volumes:
- ./redis-data:/data
Run it with:
docker-compose up -d
docker-compose logs -f openilink-hub
Once up (takes ~15 sec), hit http://localhost:3000 — you’ll get the web UI with default creds (admin/admin). Change that password immediately. The first setup wizard guides you through WeChat puppet registration (yes, it supports wechaty-puppet-padlocal for stable multi-device login, which I strongly recommend over wechat4u for production).
Hardware-wise? I ran this on a 4GB RAM, 2vCPU Hetzner CX21 (€5.90/mo). openilink-hub itself sits at ~180MB RAM idle, PostgreSQL ~220MB, Redis ~40MB. With 5 active WeChat accounts and 3 apps (GitHub sync, Notion auto-page, and a custom Python RAG bot), peak RSS was 620MB. CPU load stayed under 0.4. So — yes, it runs fine on a $5 VPS. No need for beefy nodes unless you’re doing heavy AI inference inside apps (more on that later).
How It Compares to Alternatives: Why Not Just Use WeChaty + Zapier?
Let’s cut through the noise.
WeChaty alone? Great SDK — but zero built-in UI, no app marketplace, no cross-platform routing, no auth layer, no persistence abstraction. You’re writing everything. I’ve done it. It takes 3 weeks to get to “works-ish”, and another 2 to get to “won’t break on WeChat update”.
Zapier / Make.com? They don’t support WeChat at all (officially), and even if they did, you’d be paying $29/mo per account, locked into their webhook latency, and unable to run custom Python logic or talk to internal APIs. Also: no self-hosting. Period.
WetHub / WeCom Bot SDKs? Vendor-locked, closed, tied to WeCom (not personal WeChat), and zero support for non-Tencent platforms.
n8n / Node-RED? Flexible, yes — but you’re wiring JSON paths manually, managing state across flows, and debugging why
msg.payload.textbecamemsg.payload.data.message.textafter the 3rd webhook hop.openilink-hubgives you typed event schemas, built-in retry/backoff, and a visual flow builder (still beta, but usable) — and crucially, apps are versioned and sandboxed. A broken GitHub app won’t crash your DingTalk relay.
The kicker? openilink-hub includes first-class AI tooling out of the box. Not just “call OpenAI”. It ships with built-in RAG pipelines (using llama.cpp or Ollama backends), prompt templating with context injection from WeChat history, and LLM response validation hooks. I deployed a customer support bot that pulls from a Notion KB, chunks with llama.cpp, and answers in WeChat — all without writing a single line of inference glue code. That’s not common. That’s rare.
The App Marketplace: Real-World Use Cases (Not Just “Hello World”)
The marketplace isn’t marketing fluff — it’s where openilink-hub shines. As of v0.8.3, there are 23 published apps, 11 of which are “verified” (audited, tested, maintained). Here’s what I’ve actually deployed:
github-pr-notify: Auto-posts new PRs, comments, and merges to WeChat groups. Configured in <90 seconds. Unlike generic webhook routers, it understands GitHub’s event schema, renders rich cards (with avatars, PR title, diff stats), and lets you filter by branch or label.notion-sync: Two-way sync between WeChat group messages and a Notion database. I use it for meeting notes: someone sends “Meeting notes: 1. Budget review → 2. Q3 roadmap”, and it auto-creates a Notion page with proper headings and checkboxes.ai-rag-bot: Connects to your Ollama instance (I runllama3:8blocally), loads a private.pdfor Markdown docs folder, and answers WeChat queries with citations. Config snippet:
# apps/ai-rag-bot/config.yaml
llm:
backend: ollama
host: http://host.docker.internal:11434
model: llama3:8b
rag:
source_type: filesystem
source_path: /app/data/kb/
chunk_size: 512
chunk_overlap: 64
dingtalk-robot: Not just “send message” — it handles DingTalk’s signature auth, message card formatting, and reply threading (so WeChat users can reply to a DingTalk message and have it land as a thread reply in DingTalk). This is non-trivial — and baked in.
You install apps via the UI or CLI:
openilink app install github-pr-notify --version v1.2.0
No git clone, no npm install, no go build. It fetches, validates signatures (yes, they sign releases), and spins up isolated workers.
Why Self-Host This? Who Is It Actually For?
Let’s be blunt: openilink-hub isn’t for hobbyists who want a WeChat bot to notify them when their Raspberry Pi’s temp hits 70°C. It’s for:
DevOps/SRE teams running internal comms across WeChat (China offices), Slack (engineering), and Notion (product). You need audit trails, RBAC, and reliable delivery — not cloud webhook black boxes.
SaaS companies with Chinese users, where WeChat is the primary support channel — but you still use Jira, GitHub, and Linear internally. You need message context preserved across platforms, not just raw text dumps.
AI startups building vertical bots (e.g., legal doc Q&A, medical triage) that must run on-prem due to data residency laws (GDPR, PIPL, etc.).
openilink-hubgives you the infra layer — you bring the LLM, the RAG, the domain logic.Sysadmins tired of managing 7 different bot daemons, each with its own config, logging, and restart policy. One
docker-compose.yml, onesystemctlunit, one Grafana dashboard (they ship Prometheus metrics out of the box).
It’s not for teams who want zero ops. You still need to manage PostgreSQL, Redis, WeChat puppet sessions (yes, they expire), and TLS for public webhooks. But it removes the “glue code tax” — the 70% of time you spend wiring things together, not delivering value.
The Rough Edges: Is It Worth Deploying Right Now?
Yes — but with caveats.
The good:
✅ Rock-solid WeChat puppet support (tested with padlocal on 5 accounts for 14 days — zero disconnects)
✅ Clean, fast UI (React + TanStack Query, no bundle bloat)
✅ Real RBAC (roles: admin, app-developer, observer)
✅ Built-in Prometheus metrics + Grafana dashboard (included in /deploy/grafana/)
✅ Local LLM + RAG support works — and is fast (my llama3:8b on a 4GB VPS answers in ~2.3s avg)
The rough edges:
❌ Documentation is light on advanced config. The config.yaml reference is sparse — I had to dig into /internal/config/ in the Go source to find OPENILINK_WECHAT_PUPPET_TIMEOUT (defaults to 30s, too short for some padlocal handshakes).
❌ No mobile app — the web UI is responsive, but feels desktop-first. Not a dealbreaker, but worth noting.
❌ App sandboxing is process-based, not container-based. So a misbehaving Python app can crash the hub process (though restarts are fast — 1.2s avg). True container isolation is planned for v0.9.
❌ WeChat login UX is still clunky — it requires scanning a QR code inside the hub UI, then waiting for callback. No headless mode yet. For production, I script this with curl + qrencode — but it’s not built in.
That said — I’ve been running v0.8.3 in staging for 17 days, and it hasn’t missed a single message. I’ve swapped out 3 legacy bots (one WeChaty + Redis, one n8n flow, one custom Node.js) for openilink-hub apps. Deployment time dropped from 2–3 hours to ~12 minutes. Debugging time dropped from “grep across 4 log files + strace” to “click ‘View Logs’ in the app card”.
The GitHub repo is active (12 commits in the last 14 days), issues are triaged fast, and the maintainer (a Beijing-based infra engineer) replies to PRs in <24h. That’s more reliability than I get from some commercial vendors.
So — is it production-ready? For most use cases? Yes. Is it perfect? No. But it’s the most mature, practical, and honest open-source WeChat + multi-platform hub I’ve seen in 4 years of watching this space. And unlike half the “open source” chatbot platforms that are just marketing fronts for SaaS, this one wants you to self-host. It’s built for it. It ships with it. And right now, that’s rare.
If you’re tired of duct tape and prayer — openilink-hub is worth the 20 minutes to docker-compose up. Just remember to change that JWT secret, rotate your WeChat sessions monthly, and keep an eye on /healthz. Everything else? It just works.
Comments