Bug-Bounty-Agents provides a set of 43 specialized AI agent prompts designed for bug bounty hunting, penetration testing, and red-teaming. Hosted on GitHub under matty69v/Bug-Bounty-Agents, the project has earned 105 stars and uses Shell as its primary language. These prompts turn general-purpose large language models (LLMs) into focused specialists for offensive security tasks, covering six engagement phases from reconnaissance to reporting.
The repository emphasizes simplicity: no frameworks or dependencies required. Users copy prompts from Markdown files into LLM clients like Claude Code, GitHub Copilot Chat, Cursor, or ChatGPT. Each agent enforces a strict scope, acting as a "drop-in persona" for tasks such as asset discovery or exploit chaining. A key note from the README: these are prompts that guide an LLM's behavior, not automated scanners. Operators still handle the tools and drive the process.
Core components
The agents organize into phases matching offensive security workflows. The full list appears in AGENTS.md, a machine-readable index.
Reconnaissance & Intelligence includes:
recon-advisor.md: attack surface enumeration and asset discovery.
osint-collector.md: open-source intelligence gathering.
subdomain-takeover.md: dangling DNS and subdomain takeover checks.
threat-modeler.md: STRIDE-based attack surface modeling.
engagement-planner.md: defining scope and rules of engagement.
attack-planner.md: planning multi-stage attack paths.
Web, API & Application starts with agents such as web-hunter.md; the source excerpt cuts off before listing the rest. The remaining phases cover exploitation, chaining, and reporting, bringing the total to 43 agents across the six phases.
Badges highlight 43 agents, 6 engagement phases, 4 supported clients, and 0 dependencies. Additional sections cover workflows, examples in the examples/ directory, Burp Suite MCP integration, and a one-line installer script.
Getting it running
Setup takes minutes via a shell script that auto-detects supported clients.
Clone the repository:
git clone https://github.com/matty69v/Bug-Bounty-Agents.git
cd Bug-Bounty-Agents
Run the installer:
./install.sh
This detects and configures clients automatically. Target specifics with flags:
./install.sh --target claude # Claude Code (global install)
./install.sh --target claude-local # Claude Code (project-local)
./install.sh --target copilot # GitHub Copilot Chat in VS Code
./install.sh --target cursor # Cursor (project-local)
./install.sh --target all # All detected clients
./install.sh --dry-run --target claude # Preview changes
./install.sh --uninstall --target claude # Remove setup
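The README does not show how install.sh performs its auto-detection, but it likely amounts to probing for each client's CLI binary or config directory. A minimal sketch, assuming those probe targets (the real script's checks may differ):

```shell
#!/bin/sh
# Hypothetical client auto-detection in the spirit of install.sh.
# The specific checks below are assumptions, not the script's actual logic.

detect_clients() {
  found=""
  # Claude Code ships a `claude` CLI binary.
  command -v claude >/dev/null 2>&1 && found="$found claude"
  # VS Code (host for Copilot Chat) ships a `code` CLI.
  command -v code >/dev/null 2>&1 && found="$found copilot"
  # Cursor keeps a per-user config directory (assumed location).
  [ -d "$HOME/.cursor" ] && found="$found cursor"
  echo "${found:-none detected}"
}

detect_clients
```

A `--target` flag then just narrows installation to one of the detected names instead of all of them.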
Manual per-tool instructions follow in the README. For Claude Code, paste the prompt into the agent interface after a global or project-local install. GitHub Copilot Chat in VS Code uses @workspace slash commands after installation. Cursor loads prompts project-locally. Generic clients such as ChatGPT or Gemini require manually copying prompts from the .md files.
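For clients that read agent definitions from a directory, a manual install is just a file copy. A self-contained sketch, assuming agent .md files sit at the repo root and that the client (here, project-local Claude Code) reads from a .claude/agents directory; both paths are assumptions, so check the README for your client:

```shell
# Hypothetical manual install: copy agent prompts into the directory a
# client reads agents from. Uses temp dirs so the sketch is runnable.

set -eu
repo=$(mktemp -d)                     # stand-in for a cloned Bug-Bounty-Agents checkout
printf '%s\n' '# Recon Advisor' > "$repo/recon-advisor.md"

target=$(mktemp -d)/.claude/agents    # assumed project-local agent directory
mkdir -p "$target"
cp "$repo"/*.md "$target/"

ls "$target"                          # → recon-advisor.md
```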
Prerequisites are minimal: Git and a supported LLM client. CI passes via .github/workflows/lint.yml. Updates pull latest changes with ./install.sh --update.
How to use an agent
Once installed, select an agent .md file, such as recon-advisor.md. Copy its content into your LLM client. Provide context like a target domain or scope, and the agent responds within its defined role—enforcing discipline to avoid scope drift.
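For generic clients without an install path, the copy-paste step can be scripted. A minimal helper, assuming agent files live at the repo root (the `use_agent` function and the trailing scope block are illustrative, not part of the project):

```shell
# Print an agent prompt plus engagement context, ready to paste into any
# LLM client. Demonstrated with a mock agent file so it runs standalone.

use_agent() {
  agent_file=$1; scope=$2
  cat "$agent_file"
  printf '\n---\nTarget scope: %s\n' "$scope"
}

tmp=$(mktemp -d)
printf '%s\n' 'You are a reconnaissance advisor. Stay in scope.' > "$tmp/recon-advisor.md"
use_agent "$tmp/recon-advisor.md" "*.example.com"
```

Piping the output into a clipboard tool (pbcopy, xclip) gets it into a browser-based client in one step.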
Workflows chain agents across phases: start with engagement-planner for test plans, move to recon-advisor for assets, then web-hunter for vulnerabilities. Examples in the examples/ folder demonstrate interactions. Burp Suite integration uses MCP for prompt injection into the proxy.
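The chaining above can be sketched as a simple ordered loop. The phase ordering comes from the README's workflow; the loop itself is illustrative, since carrying findings between agents is a manual copy-paste step:

```shell
# Walk the engagement phases in order. Each step's output becomes
# context for the next agent's prompt (done by hand, not by this loop).

phases="engagement-planner recon-advisor web-hunter"
for agent in $phases; do
  echo "Phase step: $agent.md"
done
```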
The README stresses operator control: "You still drive the engagement." The prompts guide analysis; they do not execute scans.
Who this fits
Red-teamers, bug bounty hunters, and penetration testers who already use LLM clients benefit most. If you run Claude Code or Cursor during engagements, these prompts add offensive security personas without setup overhead. Solo hunters planning attacks or teams modeling threats find value in the phased structure.
It's suited for disciplined users who pair prompts with tools like Burp Suite or Nuclei. Contributors can add agents via the CONTRIBUTING.md process, under an MIT license.
Comparisons and scope
Unlike full frameworks such as Auto-GPT or LangChain agents, Bug-Bounty-Agents avoids dependencies entirely—pure prompts keep it lightweight at zero runtime cost beyond the LLM. It targets specific clients like Copilot Chat, differing from general prompt libraries.
For automated scanning, look elsewhere: projects like Nuclei or ffuf handle that. Bug-Bounty-Agents excels at reasoning and planning, where LLMs shine, but leaves execution to the operator. Heavier alternatives with tool-calling (e.g., CrewAI) add complexity this project sidesteps.
Project details
Files include agent .md prompts, install.sh, AGENTS.md index, and examples/. Security notes and a disclaimer cover responsible use: prompts aid ethical testing only.
Operators needing quick LLM boosts for pentesting phases get 43 ready agents here. Source at https://github.com/matty69v/Bug-Bounty-Agents; website https://m-sec.tech. Those seeking turnkey scanners or non-LLM tools should pass.