AiSOC provides an open-source, self-hostable alternative to proprietary Security Operations Center (SOC) tools. Built in Python with an MIT license, the project from GitHub user beenuar (103 stars) handles alert fusion, purple-team drills, agent-assisted triage, and investigations mapped to MITRE ATT&CK tactics. It ingests security events from sources like Sysmon, Windows Security logs, M365 audits, Azure sign-ins, CloudTrail, Linux auditd, journald, EDR, DNS, web access, Kubernetes audits, GitHub audits, VPN, and database audits. The stack correlates events, runs AI-driven analysis via a LangGraph orchestrator, and displays results in a web-based SOC console.

Users get full visibility into AI processes: every prompt, response, cited evidence, tool call, and rationale appears in the Investigation Ledger, which supports replays for audits. Version 6.1.0 includes a public evaluation harness that gates pull requests with five test suites, including a 200-incident synthetic dataset (generated from 55 templates) that checks MITRE tactic coverage, investigation completeness, and response quality; a 1,000-alert stream that measures alert reduction; and a schema validation suite covering the ~360 backing events in synthetic_telemetry.jsonl. These run deterministically in CI, keeping the substrate consistent without vendor dependencies.
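
To make the gating idea concrete, a tactic-coverage check over a synthetic incident set can be sketched roughly as below. This is a hedged illustration, not the project's actual harness: the `tactics` field, the required-tactic list, and the `gate` threshold are all invented for the example.

```python
# Hypothetical sketch of a deterministic CI gate on MITRE tactic coverage.
# The "tactics" field and REQUIRED_TACTICS are illustrative assumptions,
# not taken from AiSOC's eval harness.

REQUIRED_TACTICS = {"initial-access", "execution", "persistence", "exfiltration"}

def tactic_coverage(incidents: list[dict]) -> float:
    """Fraction of required MITRE ATT&CK tactics seen across all incidents."""
    seen: set[str] = set()
    for incident in incidents:
        seen.update(incident.get("tactics", []))
    return len(seen & REQUIRED_TACTICS) / len(REQUIRED_TACTICS)

def gate(incidents: list[dict], threshold: float = 0.9) -> bool:
    """Deterministic pass/fail: block the PR if coverage drops below threshold."""
    return tactic_coverage(incidents) >= threshold

incidents = [
    {"id": 1, "tactics": ["initial-access", "execution"]},
    {"id": 2, "tactics": ["persistence", "exfiltration"]},
]
print(tactic_coverage(incidents))  # 1.0 — all four required tactics covered
print(gate(incidents))             # True
```

Because the dataset is fixed, the same incidents always yield the same score, which is what makes such a check usable as a CI gate.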

Core features

AiSOC stands out for its fully logged agent actions and user-controlled infrastructure. Key elements include:

  • Event ingestion and correlation: Pulls from 14 log sources via connectors and fuses alerts into incidents using Sigma rules.
  • AI investigation workflow: A ~600-line LangGraph script in services/agents/ drives autonomous triage, with steps like MITRE ATT&CK mapping and purple-team simulations.
  • Investigation Ledger: Stores full traces (prompts, LLM outputs, tool calls) for replay, enabling post-incident review.
  • Eval harness: CI-gated benchmarks on synthetic data measure alert reduction, tactic coverage, and rubric scores—details in apps/docs/docs/benchmark.md.
  • Self-contained deployment: No external callbacks; everything runs on local hardware, with seeded data for quick testing.
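
The fusion step in the first bullet can be pictured as grouping related alerts by shared attributes within a time window. The sketch below is our own illustration under assumptions (the `host`/`ts` fields and the 10-minute window are invented), not AiSOC's actual correlation logic:

```python
# Illustrative alert fusion: alerts on the same host within a short time
# window collapse into one incident. Field names and the window size are
# assumptions, not AiSOC's real schema.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)

def fuse_alerts(alerts: list[dict]) -> list[list[dict]]:
    """Return incidents: lists of related alerts, built in time order."""
    incidents: list[list[dict]] = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        for incident in incidents:
            last = incident[-1]
            if last["host"] == alert["host"] and alert["ts"] - last["ts"] <= WINDOW:
                incident.append(alert)  # same host, close in time: fuse
                break
        else:
            incidents.append([alert])   # no match: open a new incident
    return incidents

t0 = datetime(2024, 5, 1, 12, 0)
alerts = [
    {"host": "web-01", "rule": "susp_proc", "ts": t0},
    {"host": "web-01", "rule": "outbound_dns", "ts": t0 + timedelta(minutes=3)},
    {"host": "db-02", "rule": "brute_force", "ts": t0 + timedelta(minutes=1)},
]
incidents = fuse_alerts(alerts)
print(len(incidents))  # 2 — web-01's two alerts fuse into one incident
```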

The project tags itself under GitHub topics like soc, siem, ai-security, and mitre-attack.

Getting it running

AiSOC supports multiple deployment paths, all self-hosted. A one-shot demo launches the full stack with seeded data in 3.5 minutes; the exact steps are in the README. For public access, users can put their instance behind a Cloudflare Tunnel, as the maintainer does for the live demo at https://tryaisoc.com (which may go offline, since it runs on personal hardware).

The repo links to deployment options covering Docker-based setups or direct service runs. Core files live in apps/, services/, and apps/docs/. PRs targeting main or develop trigger the eval harness automatically. Full docs reside in apps/docs/, including architecture diagrams and the benchmark explanation. New connectors and Sigma rules land alongside updates to synthetic_telemetry.jsonl.

To expose a personal demo:

  1. Run the one-shot setup.
  2. Point a Cloudflare Tunnel at the local web app.
  3. Access via a custom domain.
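
Step 2 might look like the following cloudflared ingress config. This is a sketch under assumptions: the tunnel name, hostname, credentials path, and local port are placeholders for your own setup, not values from the AiSOC docs.

```yaml
# Hypothetical ~/.cloudflared/config.yml — all values are placeholders.
tunnel: aisoc-demo
credentials-file: /home/user/.cloudflared/aisoc-demo.json
ingress:
  - hostname: soc.example.com
    service: http://localhost:3000   # wherever the AiSOC web console listens
  - service: http_status:404         # catch-all rule required by cloudflared
```

After `cloudflared tunnel create aisoc-demo` and routing DNS for the hostname, `cloudflared tunnel run aisoc-demo` serves the console at the custom domain.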

The stack uses Python throughout; the only external model requirement is a user-chosen LLM.

Who this is for

Security teams that need AI assistance without cloud lock-in will find AiSOC useful. Small ops centers and purple teams benefit from its MITRE-focused drills and triage, especially where audit trails matter: every agent step is logged for compliance or debugging. It suits self-hosters best, since the absence of data exfiltration and vendor telemetry appeals to privacy-focused admins.

Real-world fits include hybrid environments mixing Windows, Linux, cloud audits (Azure, AWS CloudTrail), and Kubernetes. Drill runs simulate attacks across the 55 incident templates, letting teams practice without real threats. Contributors can wire in new log sources via PRs, since the eval harness validates changes reproducibly.

It's less ideal for enterprises requiring Splunk-scale indexing or teams without Python/Docker familiarity—the substrate demands some infra tweaking.

How it compares

AiSOC differentiates on openness and auditability versus traditional SIEMs and AI SOCs. Its comparison table highlights:

| Capability | AiSOC | Wazuh | Splunk ES | Closed-source AI SOC |
| --- | --- | --- | --- | --- |
| Open-source license | MIT | GPL-2 | proprietary | proprietary |
| Self-hostable | yes | yes | enterprise-only | cloud-only |
| Autonomous AI investigation | LangGraph | no | partial (Splunk AI) | yes |
| Agent decision audit trail | public Investigation Ledger | n/a | n/a | not published |
| Public substrate eval harness | CI-gated, reproducible, with synthetic telemetry | n/a | n/a | n/a |

Wazuh offers agent-based monitoring but skips AI orchestration. Splunk ES handles enterprise searches with limited AI, tied to licensing. Closed vendors provide autonomy at the cost of data egress and opacity.

AiSOC's ~600-line orchestrator invites forking, unlike black-box alternatives. Its eval uses fixed datasets (200 incidents, 1,000 alerts) for verifiable gains in alert reduction and MITRE coverage.
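
The alert-reduction figure is simple to formulate. The sketch below is our own formulation of such a metric, not necessarily the one defined in apps/docs/docs/benchmark.md:

```python
# Rough sketch of an alert-reduction metric: the share of raw alerts that
# do NOT surface as separate analyst-facing incidents. Our own formulation,
# not necessarily the repo's.
def alert_reduction(raw_alerts: int, surfaced_incidents: int) -> float:
    if raw_alerts == 0:
        return 0.0
    return 1.0 - surfaced_incidents / raw_alerts

# Against the fixed 1,000-alert benchmark stream, fusing it into (say)
# 120 incidents would score an 88% reduction.
print(f"{alert_reduction(1000, 120):.0%}")  # 88%
```

Holding the input stream fixed is what makes the number comparable across PRs: any change in the score is attributable to the change under review.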

Architecture overview

The stack splits into ingestion (connectors for 14 sources), correlation (alert fusion), orchestration (services/agents/ LangGraph), and UI (web console in apps/web/). Synthetic data in synthetic_telemetry.jsonl (~360 events) backs benchmarks. Logs feed the Ledger for replays. PR guidelines in CONTRIBUTING.md encourage expansions.
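
The Ledger's replayable traces can be pictured as append-only JSONL records, one per agent step. The schema below is invented for illustration and is not AiSOC's actual format:

```python
# Sketch of an append-only investigation ledger as JSONL: one record per
# agent step (prompt, response, tool calls, rationale), so a full run can
# be replayed later. The record schema is invented, not AiSOC's format.
import json
from pathlib import Path

def log_step(path: Path, step: dict) -> None:
    """Append one agent step as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(step) + "\n")

def replay(path: Path) -> list[dict]:
    """Read the full trace back in order for audit or post-incident review."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f]

ledger = Path("ledger.jsonl")
ledger.unlink(missing_ok=True)  # start a fresh trace for the example
log_step(ledger, {"step": 1, "prompt": "triage alert 42", "tool_calls": [],
                  "response": "map to TA0002", "rationale": "process tree"})
log_step(ledger, {"step": 2, "prompt": "enrich host", "tool_calls": ["dns_lookup"],
                  "response": "benign", "rationale": "known resolver"})
print([s["step"] for s in replay(ledger)])  # [1, 2]
```

Append-only storage keeps the trace tamper-evident in ordering: steps are replayed exactly as they were written.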

Source code sits at https://github.com/beenuar/AiSOC; try the demo at https://tryaisoc.com or deploy your own via the README.