Last Tuesday at 3:17am, my LLM agent cluster started hallucinating facts about my home router’s MAC address — returning 02:42:ac:11:00:03 instead of the real one — because I’d clobbered the working context cache by restarting the whole stack without persisting state. My “agent memory” was just a map[string]interface{} in Go, living and dying with each go run. I’d been paying $12/mo for a managed Redis tier just to hold session fragments, and still had zero audit trail, no versioning, and no way to ask “what did Agent-Beluga think about the firewall config three hours ago?” Yeah I know, classic mistake.

Stash showed up in my GitHub “You might like” sidebar while I was debugging a broken MCP client. I’d never heard of it, but the README said “persistent memory layer for AI agents” and “Postgres, not Redis” — and it was written in Go, single binary, no npm, no Python deps. No “cloud sync”, no telemetry, no “sign up” page. Just a 12MB Linux binary and a config.yaml. I downloaded it, ran ./stash --help, and stared at it for 47 seconds wondering if this was a prank.

I ran it on the same 2GB RAM VPS where my agents live. No Docker. Just:

curl -L https://github.com/alash3al/stash/releases/download/v0.4.0/stash_0.4.0_linux_amd64.tar.gz | tar xz
./stash --config config.yaml

My config.yaml was stupid simple:

server:
  addr: ":8080"
  tls: false

storage:
  driver: "postgres"
  dsn: "host=localhost user=stash dbname=stash password=supersecret sslmode=disable"

mcp:
  enabled: true
  server:
    addr: ":8081"

I already had Postgres running (same box, same pg_hba.conf), so I ran createdb stash, started ./stash, and hit curl http://localhost:8080/health. Got {"status":"ok"}. Then:

curl -X POST http://localhost:8080/episodes \
  -d '{"id":"ep-2024-05-11-001","agent":"router-auditor","facts":[{"key":"router.mac","value":"02:42:ac:11:00:02","source":"snmp"}]}'

Response: {"id":"ep-2024-05-11-001","created_at":"2024-05-11T03:22:18Z"}.

That’s it. No migrations. No CLI init wizard. No “first run” prompts. Just POST and go.

I wired it into my Python agent with requests.post("http://stash:8080/episodes", json=episode) — and suddenly, every time an agent finished scanning the network, it dumped facts + episode ID + timestamp into Postgres. I could now SELECT * FROM episodes ORDER BY created_at DESC LIMIT 5, or SELECT * FROM facts WHERE key = 'router.mac' ORDER BY created_at DESC. I stopped losing context across reboots. No more “Agent forgot the firewall rule it added 22 minutes ago”.
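If you're wiring an agent up the same way, the whole client fits in a dozen lines of stdlib Python. This is a sketch assuming the /episodes payload shape shown above; STASH_URL, build_episode, and post_episode are my names, not anything Stash ships:

```python
import datetime
import json
import urllib.request

STASH_URL = "http://stash:8080"  # wherever your Stash instance listens

def build_episode(agent, facts, episode_id=None):
    """Assemble an episode payload in the shape the /episodes endpoint
    accepts; facts is a list of (key, value, source) tuples."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "id": episode_id or f"ep-{now:%Y-%m-%d}-{now:%H%M%S}",
        "agent": agent,
        "facts": [{"key": k, "value": v, "source": s} for k, v, s in facts],
    }

def post_episode(episode):
    """POST the episode to Stash and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{STASH_URL}/episodes",
        data=json.dumps(episode).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

My agents call build_episode at the end of each scan and fire post_episode; everything else is Stash's problem.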

Here’s what broke, and how I fixed it:

  • The docs lie about --config: the flag doesn't fall back to a default config.yaml — omit it and the binary just panics with open config.yaml: no such file. Took me three tries and an strace to confirm. Fixed by always passing --config ./config.yaml.

  • facts table has a value column TEXT NOT NULL, but my agent sometimes sent null values (e.g., "value": null for unknown firmware version). Stash accepted the POST, but Postgres threw null value in column "value" violates not-null constraint on insert. I had to wrap inserts in a value = value or "" preprocessor. Annoying, but easy.

  • MCP server mode does not auto-reload config changes. You have to kill -SIGHUP the process — which isn’t documented anywhere. I ran kill -SIGHUP $(pgrep -f 'stash --config') in a wrapper script. Works fine, but yeah, you’ll restart it manually if you tweak MCP settings.

  • The /episodes/:id endpoint returns facts but not the episode’s full JSON payload — just the facts array. If you want the original request body, you need to store it separately (or use the notes field, which I missed for two days).
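The fix for the null-value gotcha above is a tiny preprocessor. A sketch — note I test is not None rather than the lazier value or "", so empty strings and other falsy-but-valid values pass through unchanged:

```python
def sanitize_facts(facts):
    """Replace null fact values with "" so Postgres's NOT NULL constraint
    on facts.value doesn't reject the insert."""
    return [
        {**fact, "value": fact["value"] if fact["value"] is not None else ""}
        for fact in facts
    ]
```

Run every fact list through it before POSTing and the constraint violation disappears.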

I compared it to two things I’d already tried and abandoned:

LangChain’s Postgres-backed ChatMessageHistory: It only stores messages — no facts, no episodes, no schema for “this fact was verified by SNMP, not user input”. And it needs sqlalchemy, psycopg2, and 6 layers of abstractions. I gave up after 4 hours trying to override the message table to add a verified_by column. Stash gave me that column out of the box — it’s literally called source.

RedisJSON + custom TTL logic: Worked fine until my Redis instance OOM’d during a network scan that dumped 14K facts in 90 seconds. Redis doesn’t do joins or history queries. Want to know “what facts changed between episode ep-2024-05-10-012 and ep-2024-05-10-013?” You’re writing Lua scripts or dumping JSON and diffing in Python. With Stash? SELECT f1.key, f1.value AS old, f2.value AS new FROM facts f1 JOIN facts f2 ON f1.key = f2.key WHERE f1.episode_id = 'ep-2024-05-10-012' AND f2.episode_id = 'ep-2024-05-10-013' AND f1.value != f2.value. Done.
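If you'd rather diff in application code than in SQL, the same comparison is a few lines of Python. A sketch over plain fact dicts, mirroring the self-join above; diff_facts is my helper, not a Stash API:

```python
def diff_facts(old_facts, new_facts):
    """Return {key: (old_value, new_value)} for every key whose value
    changed between two episodes' fact lists -- the in-memory twin of
    the SQL self-join."""
    old = {f["key"]: f["value"] for f in old_facts}
    new = {f["key"]: f["value"] for f in new_facts}
    return {
        key: (old[key], new[key])
        for key in old.keys() & new.keys()
        if old[key] != new[key]
    }
```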

Stash isn’t for you if you need vector search, full-text search, or real-time WebSocket sync. It’s not a replacement for Weaviate or Qdrant. It’s also not for teams that want built-in auth, multi-tenancy, or audit logs beyond what Postgres’s pg_stat_statements gives you. And if you’re still on SQLite and think “why not just use that?”, don’t — I tried. With >5K facts, the /episodes endpoint started taking 1.8 seconds just to list IDs. Postgres? 12ms. The difference is not academic.

I ran Stash for 4 months straight on that same VPS. CPU stayed under 12% (even during bulk inserts), memory hovered at 42MB, and the Postgres table grew to 32K rows with zero index bloat. I added a CREATE INDEX CONCURRENTLY ON facts(key) after week two — query latency dropped from 18ms to 2ms on hot keys like router.firmware.

The /mcp endpoint worked — but only if your client speaks raw MCP 0.1.0 JSON-RPC over HTTP. I used it with mcp-server-go and mcp-client-py, but got tripped up when my client sent {"method":"list_tools"} instead of the required {"method":"mcp.list_tools"} — Stash’s MCP server doesn’t auto-prefix. The error message was method not found, and it took me an hour to spot the missing mcp. prefix. Not a bug — just strict spec compliance. Fair.
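A cheap guard against that mistake is to build every request through one helper that enforces the prefix. A sketch: the JSON-RPC 2.0 envelope here is the standard one, and whether Stash wants exactly these fields beyond the mcp.-prefixed method is my assumption:

```python
import itertools
import json

# Monotonically increasing JSON-RPC request ids for this process.
_ids = itertools.count(1)

def mcp_request(method, params=None):
    """Build a JSON-RPC request body, adding the "mcp." method prefix
    if the caller forgot it -- my exact hour-long mistake."""
    if not method.startswith("mcp."):
        method = "mcp." + method
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params or {},
    })
```

One choke point, and "method not found" stops being a mystery.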

One thing I love: Stash logs every insert, every query, and every error to stdout, with timestamps and episode IDs. No log levels to toggle. No “enable debug mode” flag. Just plain lines like:

2024-05-11T03:22:18Z INFO episode.created id=ep-2024-05-11-001 agent=router-auditor facts=3
2024-05-11T03:24:01Z ERROR fact.insert_failed key=router.mac error="null value in column \"value\" violates not-null constraint"

I pipe it through systemd-cat --identifier=stash and read it back with journalctl --identifier=stash, and it’s right there when things go sideways.
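The format is regular enough to parse with two regexes if you ever want to alert on ERROR lines. A sketch — the field grammar is my inference from the two example lines above, not a documented spec:

```python
import re

# One line looks like: "2024-05-11T03:22:18Z INFO episode.created id=... agent=..."
LOG_LINE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>INFO|ERROR)\s+(?P<event>\S+)\s*(?P<rest>.*)$"
)
# key=value pairs; values are bare tokens or double-quoted strings with escapes.
FIELD = re.compile(r'([\w.]+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_log_line(line):
    """Split one Stash log line into (timestamp, level, event, fields dict);
    return None for lines that don't match the pattern."""
    m = LOG_LINE.match(line)
    if m is None:
        return None
    fields = {key: value.strip('"') for key, value in FIELD.findall(m.group("rest"))}
    return m.group("ts"), m.group("level"), m.group("event"), fields
```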

It’s not magic. It doesn’t auto-tag facts or infer relationships. It won’t suggest next steps or generate summaries. It just stores what you give it, lets you query it, and stays the hell out of your way. Which, after 6 years of fighting over-engineered AI infra, feels like coming home.

Stash is the Postgres-backed memory layer you didn’t know you needed — until your agents start lying about MAC addresses at 3am.