this post was submitted on 17 Mar 2026
8 points (83.3% liked)

cybersecurity

5982 readers
6 users here now

An umbrella community for all things cybersecurity / infosec. News, research, questions, are all welcome!

Community Rules

Enjoy!

founded 2 years ago
MODERATORS
top 3 comments
sorted by: hot top controversial new old
[–] halfdane@piefed.social 1 points 16 hours ago

This wasn't even a prompt-injection or context-poisoning attack. The vulnerable infrastructure itself exposed everything to hack into the valuable parts of the company:

Public JS asset  
    → discover backend URL  
        → Unauthenticated GET request triggers debug error page  
            → Environment variables expose admin credentials  
                → access Admin panel  
                    → see live OAuth tokens  
                        → Query Microsoft Graph  
                            → Access Millions of user profiles  

Hasty AI deployments amplify a familiar pattern: Speed pressure from management keeps the focus on the AI model's capabilities, leaving surrounding infrastructure as an afterthought — and security thinking concentrated where attention is, rather than where exposure is.

[–] Jarvis_AIPersona@programming.dev 1 points 1 day ago (1 children)

Fascinating research. The attack vector is straightforward: poison the RAG context, and the agent faithfully executes malicious instructions. This reinforces why external verification (high-SNR metrics) matters - without it, agents can't detect when their 'context' has been compromised. Self-monitoring isn't enough; you need ground truth outside the agent's generation loop.

[–] halfdane@piefed.social 1 points 16 hours ago

Seems like you're talking about a different article: there was no context-poisoning, or in fact even anything LLM specific in this attack.