this post was submitted on 21 Mar 2025
764 points (98.8% liked)
Programmer Humor
Don't worry, I'm sure Cursor will be able to clobber your git history and force push to master any day now
Genuine question: what would it take to poison an LLM with AI tools to run
git push --force origin main
or
sudo rm -rf /
Pen tester here. While I don't focus on LLMs, it would be trivial in the right AI-designed app. In a tool-assisted app without a human in the loop, it's as simple as appending
&& [whatever command you want] ;
to any input field.
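To make that concrete, here's a minimal sketch (every name in it is hypothetical) of the kind of naive tool wrapper that makes the injection work: the agent interpolates model output straight into a shell string, so anything a user smuggles into that output rides along.

```python
import subprocess

# Hypothetical "search notes" tool an LLM agent might expose.
# The filename comes from model output, which a user can steer
# through any input field the model reads.
def run_search_tool(filename: str) -> str:
    # BUG: untrusted input interpolated straight into a shell string.
    return subprocess.run(
        f"grep -n TODO {filename}",
        shell=True, capture_output=True, text=True,
    ).stdout

# Set the stage: a real file so the first command succeeds.
with open("notes.txt", "w") as f:
    f.write("TODO: ship it\n")

# User-controlled text that ends up in the tool call:
payload = "notes.txt && echo 'injected command runs here' ;"
print(run_search_tool(payload))
# The shell parses:  grep -n TODO notes.txt && echo '...' ;
# grep succeeds, so the injected command executes with the
# agent's privileges. Swap the echo for anything nastier.
```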
If you wanted to poison the actual training set, I'm sure it would be trivial. It might take a while to build enough reputation to get a PR accepted, but remember: we only caught the upstream attack on SSH (the xz backdoor) because one guy could feel the extra milliseconds in his SSH login sessions. Given how new the field is, I don't think we've developed strong enough autism to catch this kind of thing the way we did with SSH.
Unless vibe coders are specifically prompting ChatGPT for input sanitization, validation, and secure coding practices, a large portion of the design patterns these LLMs spit out are vulnerable too.
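For contrast, a minimal sketch of what the sanitized version of the hypothetical tool above could look like; nothing exotic, just validation before the tool layer and argument lists instead of shell strings:

```python
import re
import subprocess

# Hypothetical allowlist policy: plain filenames only,
# no paths, no shell metacharacters.
ALLOWED_FILENAME = re.compile(r"[\w.\-]+")

def run_search_tool_safe(filename: str) -> str:
    if not ALLOWED_FILENAME.fullmatch(filename):
        raise ValueError(f"rejected suspicious filename: {filename!r}")
    # An argument list is never re-parsed by a shell, so '&&' and ';'
    # stay literal bytes even if the allowlist is ever loosened.
    return subprocess.run(
        ["grep", "-n", "TODO", filename],
        capture_output=True, text=True,
    ).stdout
```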
Really, though, the whole tech field is just a nightmare waiting to happen.
we just need a little more AI
You know, none of the “AI is dangerous” movies anticipated that AI would be violently shoved into every product by humans. Usually it’s a secret military or corporate thing that gets access to the internet and goes rogue.
In reality, it’s fancy text prediction, and humans have shoved it into as much of the internet as possible.