this post was submitted on 23 Mar 2026

I built 7 MCP servers connecting AI agents to security tools. Here's what I learned.

The servers cover Wazuh (SIEM alerts, agent management, vulnerability scans), Zeek (network connection logs, DNS, SSL), Suricata (IDS alerts, flow data), TheHive (case management), Cortex (observable analysis), MISP (threat intel), and MITRE ATT&CK (offline technique/tactic lookups).

The protocol layer was straightforward. TypeScript SDK, Zod validation, stdio transport. Getting a model to call get_alerts with the right parameters and get data back took maybe a day per server.
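To make that tool-call layer concrete, here is a minimal sketch of what a `get_alerts` call looks like. The parameter names (`minSeverity`, `limit`) and alert shape are hypothetical, and a tiny hand-rolled validator stands in for the Zod schema the real servers use, to keep the example dependency-free:

```typescript
// Hypothetical parameter shape for a get_alerts tool call.
interface GetAlertsParams {
  minSeverity?: number; // assumed default: 8 (Wazuh rule levels run 0-15)
  limit?: number;       // assumed cap on returned alerts
}

// Hypothetical, heavily reduced alert shape.
interface Alert {
  id: string;
  severity: number;
  rule: string;
}

// Stand-in for a Zod schema: reject anything that isn't an object with
// optional, in-range numeric fields.
function parseGetAlertsParams(input: unknown): GetAlertsParams {
  if (typeof input !== "object" || input === null) {
    throw new Error("params must be an object");
  }
  const raw = input as Record<string, unknown>;
  const params: GetAlertsParams = {};
  if (raw.minSeverity !== undefined) {
    if (typeof raw.minSeverity !== "number" || raw.minSeverity < 0 || raw.minSeverity > 15) {
      throw new Error("minSeverity must be a number between 0 and 15");
    }
    params.minSeverity = raw.minSeverity;
  }
  if (raw.limit !== undefined) {
    if (typeof raw.limit !== "number" || raw.limit < 1) {
      throw new Error("limit must be a positive number");
    }
    params.limit = raw.limit;
  }
  return params;
}

// Tool handler: validate the model's arguments, fetch, filter, and return
// MCP-style text content.
function getAlerts(input: unknown, fetchAlerts: () => Alert[]) {
  const { minSeverity = 8, limit = 20 } = parseGetAlertsParams(input);
  const alerts = fetchAlerts()
    .filter((a) => a.severity >= minSeverity)
    .slice(0, limit);
  return { content: [{ type: "text", text: JSON.stringify(alerts) }] };
}
```

The validation step matters more than it looks: it is what turns a model's loosely typed tool call into something safe to hand to a real API.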

The hard part was everything after the connection worked.

Security telemetry has 40+ fields per alert. Dump all of that into a model, and you get vague summaries that touch everything and understand nothing. My first versions returned raw API responses. The model would pick whatever was easiest to talk about, rather than what actually mattered.

So the real work became context design. What does the model see first? What time window makes sense? Which fields help with triage, and which are metadata noise? For Wazuh, I filter to severity 8+ and return a focused subset. For Zeek, I pre-aggregate by source/dest pair and surface unusual patterns first. For Suricata, I separate IDS alerts from flow metadata.
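The Zeek pre-aggregation step can be sketched as follows. Field names loosely mirror Zeek's `conn.log` columns but are assumptions here, and "unusual patterns first" is approximated crudely by sorting pairs that touch the most distinct destination ports to the top (a fan-out signal), not by whatever scoring the real servers use:

```typescript
// Hypothetical subset of a Zeek conn.log entry.
interface ConnEntry {
  id_orig_h: string;  // source host
  id_resp_h: string;  // destination host
  id_resp_p: number;  // destination port
  orig_bytes: number; // bytes sent by the source
}

interface PairSummary {
  pair: string;
  connections: number;
  distinctPorts: number;
  bytesOut: number;
}

// Collapse raw connection logs into one summary row per source/dest pair,
// so the model sees a handful of aggregates instead of thousands of rows.
function aggregateByPair(entries: ConnEntry[]): PairSummary[] {
  const groups = new Map<string, { count: number; ports: Set<number>; bytes: number }>();
  for (const e of entries) {
    const key = `${e.id_orig_h} -> ${e.id_resp_h}`;
    const g = groups.get(key) ?? { count: 0, ports: new Set<number>(), bytes: 0 };
    g.count += 1;
    g.ports.add(e.id_resp_p);
    g.bytes += e.orig_bytes;
    groups.set(key, g);
  }
  return [...groups.entries()]
    .map(([pair, g]) => ({
      pair,
      connections: g.count,
      distinctPorts: g.ports.size,
      bytesOut: g.bytes,
    }))
    .sort((a, b) => b.distinctPorts - a.distinctPorts); // fan-out first
}
```

The point isn't the specific heuristic; it's that the aggregation and ordering happen before the model ever sees the data, so the interesting rows land at the top of its context.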

The payoff: a Wazuh alert fires, the model checks Zeek for that host's network activity, queries ATT&CK for technique mapping, and checks MISP for threat intel on the destination IP. That correlation chain used to take 15 minutes of clicking through interfaces. Now it takes one question.
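That correlation chain can be sketched as plain function composition. In the real setup each step is a separate MCP tool call (Wazuh, Zeek, ATT&CK, MISP) that the model chooses to make; the names and shapes here are illustrative stand-ins:

```typescript
// Hypothetical, minimal shape of a triggering Wazuh alert.
interface WazuhAlert {
  agentIp: string;     // host the alert fired on
  destIp: string;      // destination IP from the alert
  techniqueId: string; // ATT&CK technique ID attached to the rule
}

interface TriageReport {
  alert: WazuhAlert;
  hostActivity: string[]; // Zeek connection summaries for the host
  technique: string;      // ATT&CK technique name
  threatIntel: string;    // MISP verdict on the destination IP
}

// Each lookup parameter stands in for one MCP tool call.
function correlate(
  alert: WazuhAlert,
  zeekLookup: (host: string) => string[],
  attackLookup: (id: string) => string,
  mispLookup: (ip: string) => string,
): TriageReport {
  return {
    alert,
    hostActivity: zeekLookup(alert.agentIp),
    technique: attackLookup(alert.techniqueId),
    threatIntel: mispLookup(alert.destIp),
  };
}
```

The value isn't in any single lookup; it's that the model drives all four from one question, where an analyst would be clicking through four interfaces.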

Not replacing analysts. Killing the mechanical evidence-gathering that burns time before a human reaches the real decisions.

Project page: https://solomonneas.dev/projects/security-mcp-servers
Full write-up: https://solomonneas.dev/blog/building-security-mcp-servers
Code: github.com/solomonneas

solomonneas@lemmy.world 1 point 20 hours ago

In this particular instance I mean you, the user, but the servers can also be triggered by agents and run as cron jobs as needed. They work in tandem.

mrnobody@reddthat.com 2 points 20 hours ago

Nice, that's a lot of work, and I can definitely appreciate the time and effort. Did you do this for your homelab or at work? Or just because, as it might help out others too?

solomonneas@lemmy.world 2 points 20 hours ago

Originally it started for work. I run a network lab for Network Engineering students, and wanted a SOC stack for our network that cybersecurity students could also get hands-on practice with. I threw the stack together there, then replicated it at home on Proxmox so I could keep building on it. Just want to share what I've made for others.

It can definitely run as a homelab setup. The main constraint is RAM since TheHive and Cortex are Java-based and hungry. Wazuh's Indexer (OpenSearch) is another one. But you wouldn't need the full stack for home. Wazuh + Suricata + the MITRE ATT&CK server alone would cover most of what a home network needs, and that's way lighter.