You've got a site that's ranking lower than it should. The content looks fine from the outside. Pages load fast. But something is off. A traditional SEO tool scans the surface: meta tags, headings, internal links. It never touches your actual code. Meanwhile, Google Search Console is sitting there with data about how your pages are performing in real search results, and nobody is combining that with what's actually in the repo.

serpiq exists to bridge that gap. It's a CLI tool built in TypeScript that audits SEO by first reading your codebase, then pulling real Google Search Console data to ground the audit in actual search performance. The project is open source, runs locally, and requires you to bring your own LLM.

The approach

Most SEO tools work from the browser side. They crawl your site, check meta tags, flag missing alt text, count headings. serpiq starts elsewhere. It reads the codebase first. That means it can see your templates, routes, content files, and the structure your team actually wrote. Then it connects to Google Search Console to pull metrics like impressions, clicks, and average position for the pages it found.
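
The repository doesn't document its scanning internals, but the general shape of a code-first pass is easy to picture. The sketch below is illustrative only, not serpiq's actual code; it assumes a Next.js-style layout and uses the fast-glob package to enumerate the files an audit would care about.

// Illustrative sketch of a code-first scan (not serpiq's implementation)
import fg from "fast-glob";
import { readFile } from "node:fs/promises";

// Gather the route, component, and content files an audit would read.
const files = await fg([
  "app/**/page.tsx",        // Next.js-style routes (assumed layout)
  "src/**/*.tsx",
  "content/**/*.{md,mdx}",
]);

// Crude example of code-level context: pull <title> tags straight from source.
for (const path of files) {
  const source = await readFile(path, "utf8");
  const title = source.match(/<title>(.*?)<\/title>/s)?.[1];
  if (title) console.log(`${path}: ${title}`);
}

The point is that the raw material is your source files, not a rendered page.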

The idea is that code-level context plus real search data produces a more grounded audit than either alone. If a page has high impressions but low clicks, the tool can cross-reference that with what the code is doing and flag structural or content issues specific to that page.
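
To make the Search Console half concrete, this is roughly what pulling per-page metrics looks like with Google's googleapis client for Node.js. It's a minimal sketch, not serpiq's implementation, and it assumes a service-account key with read access to the property.

// Minimal Search Console query (assumes a service-account key; not serpiq's code)
import { google } from "googleapis";

const auth = new google.auth.GoogleAuth({
  keyFile: "./gsc.json",
  scopes: ["https://www.googleapis.com/auth/webmasters.readonly"],
});

const searchconsole = google.searchconsole({ version: "v1", auth });

// Per-page clicks, impressions, and average position for one month.
const res = await searchconsole.searchanalytics.query({
  siteUrl: "https://example.com/",
  requestBody: {
    startDate: "2024-01-01",
    endDate: "2024-01-31",
    dimensions: ["page"],
    rowLimit: 100,
  },
});

for (const row of res.data.rows ?? []) {
  console.log(row.keys?.[0], row.clicks, row.impressions, row.position);
}

A page showing high impressions and few clicks in that output is exactly the signal serpiq is meant to pair with the template or content file that produces it.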

It's also designed as a CLI, not a web dashboard. Output goes to your terminal. You feed it your repo, your Search Console credentials, and an LLM endpoint. It doesn't ship with a built-in model. You point it at whatever LLM you prefer, which keeps the tool lightweight and keeps your data flowing through your own infrastructure rather than someone else's API.

Trying it out

The project is on GitHub with 25 stars at the time of writing. Since it's written in TypeScript, it likely expects a Node.js environment. The project page at manojahi.github.io/serpiq has the setup details. A typical workflow would involve installing the CLI, configuring your Search Console OAuth credentials, and pointing it at your LLM provider.

# Example installation (check the repo for current commands)
npm install -g serpiq

Then run it against a directory:

serpiq audit ./my-site --gsc-credentials ./gsc.json --llm-endpoint https://api.my-llm.com

The exact flags and config file format should be checked in the repository documentation. The project page lists the expected environment variables and a sample config file.

What it doesn't do

serpiq is not a site crawler. It doesn't simulate Googlebot or check rendering. It works from the code you already have, not from what a browser sees. If your site uses server-side rendering with heavy JavaScript hydration, serpiq won't verify that the final, hydrated DOM matches what your templates say it should be.

It also doesn't include an LLM. You need to provide one. That could be an OpenAI-compatible endpoint, a local model, or anything else that fits the interface. If you don't have an LLM available or don't want to manage one, this tool won't work out of the box.
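
In practice, "OpenAI-compatible" covers a lot of ground: local servers such as Ollama and vLLM expose the same chat-completions interface, so any client that accepts a base URL can talk to them. A sketch of that pattern, assuming Ollama's default local endpoint and a model you've already pulled:

// Point an OpenAI-compatible client at a locally hosted model (illustrative only)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's default OpenAI-compatible endpoint
  apiKey: "ollama",                     // ignored by local servers, but the SDK requires a value
});

const completion = await client.chat.completions.create({
  model: "llama3.1",                    // assumed model name; use whatever you run locally
  messages: [{ role: "user", content: "Summarize the SEO issues on this page." }],
});

console.log(completion.choices[0].message.content);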

It's also early. With 25 GitHub stars, the project is small and the maintainer appears to be a solo developer. The feature set is focused: codebase reading plus Search Console integration plus LLM analysis. There's no built-in reporting dashboard, no scheduled re-audits, and no plugin ecosystem.

Who it's for

If you run a site where you have direct access to the codebase and you already use Google Search Console, serpiq gives you an audit that's tied to real search data rather than generic best practices. It's a narrow tool. It won't replace a full SEO platform. But for developers or small teams that want CLI-driven audits grounded in their own code and actual Google metrics, it's worth a look. The source is on GitHub and the project site is at manojahi.github.io/serpiq.