I built 7 MCP servers connecting AI agents to security tools. Here's what I learned.
The servers cover Wazuh (SIEM alerts, agent management, vulnerability scans), Zeek (network connection logs, DNS, SSL), Suricata (IDS alerts, flow data), TheHive (case management), Cortex (observable analysis), MISP (threat intel), and MITRE ATT&CK (offline technique/tactic lookups).
The protocol layer was straightforward. TypeScript SDK, Zod validation, stdio transport. Getting a model to call get_alerts with the right parameters and get data back took maybe a day per server.
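A minimal sketch of what that parameter contract looks like. To keep it dependency-free this uses a hand-rolled validator in place of the real Zod schema, and the field names (minSeverity, limit) are illustrative, not the servers' actual API:

```typescript
// Illustrative parameter shape for a get_alerts tool call.
// The real servers define this as a Zod schema; this stand-in
// keeps the sketch self-contained.
interface GetAlertsParams {
  minSeverity: number; // Wazuh rule level, 0-15
  limit: number;       // max alerts to return
}

function parseGetAlertsParams(input: unknown): GetAlertsParams {
  const o = (input ?? {}) as Record<string, unknown>;
  const { minSeverity, limit } = o;
  if (
    typeof minSeverity !== "number" ||
    !Number.isInteger(minSeverity) ||
    minSeverity < 0 ||
    minSeverity > 15
  ) {
    throw new Error("minSeverity must be an integer in 0-15");
  }
  if (typeof limit !== "number" || !Number.isInteger(limit) || limit < 1) {
    throw new Error("limit must be a positive integer");
  }
  return { minSeverity, limit };
}

// A malformed call from the model is rejected before it ever
// reaches the security tool's API:
// parseGetAlertsParams({ minSeverity: "high" })  -> throws
const params = parseGetAlertsParams({ minSeverity: 8, limit: 50 });
```

Validating at the boundary matters more here than in a typical API client: the caller is a model, and it will occasionally produce parameters that are plausible-looking but wrong.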
The hard part was everything after the connection worked.
Security telemetry has 40+ fields per alert. Dump all of that into a model, and you get vague summaries that touch everything and understand nothing. My first versions returned raw API responses. The model would pick whatever was easiest to talk about, rather than what actually mattered.
So the real work became context design. What does the model see first? What time window makes sense? Which fields help with triage, and which are metadata noise? For Wazuh, I filter to severity 8+ and return a focused subset. For Zeek, I pre-aggregate by source/dest pair and surface unusual patterns first. For Suricata, I separate IDS alerts from flow metadata.
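The Wazuh case can be sketched as a filter-and-project step between the raw API response and the model. The field names here are a hypothetical subset chosen for illustration, not the servers' exact output:

```typescript
// Hypothetical shape of a raw Wazuh alert; real alerts carry 40+ fields.
interface RawAlert {
  timestamp: string;
  rule: { level: number; description: string; mitre?: { id: string[] } };
  agent: { name: string };
  [extra: string]: unknown; // metadata noise the model never needs to see
}

// The focused subset the model actually sees (illustrative field choice).
interface TriageAlert {
  timestamp: string;
  severity: number;
  description: string;
  agent: string;
  mitreIds: string[];
}

function shapeForTriage(alerts: RawAlert[], minLevel = 8): TriageAlert[] {
  return alerts
    .filter((a) => a.rule.level >= minLevel) // drop low-severity noise
    .sort((a, b) => b.rule.level - a.rule.level) // worst first
    .map((a) => ({
      timestamp: a.timestamp,
      severity: a.rule.level,
      description: a.rule.description,
      agent: a.agent.name,
      mitreIds: a.rule.mitre?.id ?? [],
    }));
}
```

The ordering is part of the context design: because the model reads top-down, putting the highest-severity alerts first biases it toward discussing what matters instead of what is easiest.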
The payoff: a Wazuh alert fires, the model checks Zeek for that host's network activity, queries ATT&CK for technique mapping, and checks MISP for threat intel on the destination IP. That correlation chain used to take 15 minutes of clicking through interfaces. Now it takes one question.
Not replacing analysts. Killing the mechanical evidence-gathering that burns time before a human reaches the real decisions.
Project page: https://solomonneas.dev/projects/security-mcp-servers
Full write-up: https://solomonneas.dev/blog/building-security-mcp-servers
Code: github.com/solomonneas