Dark LLMs: When Your AI Traffic Is C2

The good news: your “AI traffic” dashboard looks clean.

The bad news: some of that “normal” model API noise can already be C2.

Dark LLMs aren’t just writing better phishing copy anymore. We’re seeing self-rewriting droppers that call out to model APIs for fresh obfuscation, and backdoors quietly relaying commands through legit assistant endpoints — all tucked inside the same JSON your AI team says is “just inference.”
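
For the blue-team side of that picture, here’s a minimal sketch of what spotting that relay traffic can look like: flagging hosts whose calls to model API endpoints beacon on a suspiciously regular interval. The log fields, domain list, and thresholds are all assumptions, not a drop-in detection.

```python
# Hypothetical sketch: flag beacon-like cadence to model API endpoints in
# egress proxy logs. Field names, domains, and thresholds are assumptions.
from collections import defaultdict
from statistics import mean, stdev

WATCHED_MODEL_DOMAINS = {"api.openai.com", "api.anthropic.com"}  # illustrative

def beacon_candidates(events, min_calls=10, max_jitter_s=5.0):
    """events: dicts like {"src": "10.0.0.5", "dst": "api.openai.com", "ts": 1715000000.0}"""
    by_pair = defaultdict(list)
    for e in events:
        if e["dst"] in WATCHED_MODEL_DOMAINS:
            by_pair[(e["src"], e["dst"])].append(e["ts"])

    flagged = []
    for (src, dst), times in by_pair.items():
        times.sort()
        if len(times) < min_calls:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Very regular call intervals (low jitter) look more like a scheduled
        # implant check-in than a human chatting or a bursty app workload.
        if stdev(gaps) <= max_jitter_s:
            flagged.append({"src": src, "dst": dst, "interval_s": round(mean(gaps), 1)})
    return flagged
```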

Meanwhile, BEC crews get WormGPT-tier language skills, threading into inboxes with localized, “compliance-friendly” lures that read exactly like they came from your real finance team.

If you’re not enforcing AI egress allowlists, baselining model API traffic patterns, and requiring admin consent for every app registration, you’re basically letting adversaries rent space inside your AI stack and call it “innovation.”
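
And if “AI egress allowlist” sounds abstract, here’s a hedged sketch of the core enforcement decision: approved model endpoints get out, everything else gets blocked and logged. The endpoint list and policy shape are illustrative assumptions, not your proxy’s actual config.

```python
# Hypothetical sketch of an AI egress allowlist check at a forward proxy.
# The approved-endpoint list and deny behavior are illustrative assumptions.
from urllib.parse import urlparse

APPROVED_MODEL_ENDPOINTS = {
    "api.openai.com",      # sanctioned via the enterprise tenant
    "api.anthropic.com",   # sanctioned via the enterprise tenant
}

def allow_ai_egress(url: str) -> bool:
    """Return True only if the destination host is an explicitly approved model endpoint."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_MODEL_ENDPOINTS

# Everything off the list gets blocked and logged for review, which is what
# turns "unknown AI traffic" into a decision you actually made.
for url in ("https://api.openai.com/v1/chat/completions",
            "https://rogue-llm-proxy.example/v1/chat"):
    print(url, "->", "ALLOW" if allow_ai_egress(url) else "BLOCK")
```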

What’s the one control you don’t have today that would actually make you sleep better about AI egress?

Full breakdown + concrete metrics (MTTR, egress allowlists, OAuth guardrails):

👉 https://blog.alphahunt.io/dark-llms-when-your-ai-traffic-is-c2

#AlphaHunt #CyberSecurity #ThreatIntel #AIsecurity #BlueTeam

Did you learn something new?