this post was submitted on 01 Dec 2025
1157 points (99.0% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

top 50 comments
[–] baller_w@lemmy.zip 3 points 4 hours ago

Just… use Docker

[–] Avicenna@programming.dev 13 points 9 hours ago

"I am deeply deeply sorry"

[–] MangoPenguin@lemmy.blahaj.zone 17 points 12 hours ago (2 children)

I wonder how big the crossover is between people that let AI run commands for them, and people that don't have a single reliable backup system in place. Probably pretty large.

[–] irelephant@lemmy.dbzer0.com 3 points 9 hours ago

I don't let ai run commands and I don't have backups 😞

[–] adminofoz@lemmy.cafe 4 points 11 hours ago

The venn diagram is in fact just one circle.

[–] irelephant@lemmy.dbzer0.com 5 points 9 hours ago (1 children)

Even Google employees were instructed not to use this.

[–] darkpanda@lemmy.ca 6 points 9 hours ago (1 children)

Ironically D: is probably the face they were making when they realized what happened.

[–] crank0271@lemmy.world 2 points 8 hours ago

Let's rmdir that D: and turn it into a C:

[–] Sunflier@lemmy.world 2 points 10 hours ago

And yet, they'll still keep trying to shove it down our throats.

[–] yarr@feddit.nl 14 points 16 hours ago (1 children)

"Did I give you permission to delete my D:\ drive?"

Hmm... the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.

He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.

There's a good reason why people who choose to run agents with the ability to run commands at least try to sandbox them to limit the blast radius.

This guy let an LLM raw dog his CMD.EXE and now he's sad that it made a mistake (as LLMs will do).

Next time, don't point the gun at your foot and complain when it gets blown off.
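
For what it's worth, "sandbox it" doesn't have to mean anything fancy. Here's a minimal sketch in Python of the allowlist-plus-confirmation idea (purely illustrative; none of these names are any real agent framework's API):

```python
import shlex
import subprocess

# Illustrative allowlist: binaries the agent may run without a human in the loop.
ALLOWED = {"git", "ls", "cat", "npm"}

def run_agent_command(cmd: str) -> None:
    """Run an agent-proposed command, but make a human approve anything off-list."""
    parts = shlex.split(cmd)
    if not parts:
        return
    if parts[0] not in ALLOWED:
        answer = input(f"Agent wants to run {cmd!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Refused.")
            return
    subprocess.run(parts, check=False)
```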

[–] kadup@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago)

The user explained later on exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn't want to follow the steps manually and just said "do everything for me", at which point the AI asked for confirmation and received it. The AI then indeed ran commands freely, with the same privileges as the user; however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just one folder.

So yes, technically the AI didn't simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
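
The thread doesn't show the exact command, but the classic way a "delete this one folder" step turns into "delete the drive root" is an empty path component. A hypothetical reconstruction (folder name and paths made up purely for illustration):

```python
import ntpath  # Windows path rules, so the example behaves the same on any OS

node_folder = ""  # the model was supposed to substitute e.g. "node_modules", but left it empty
target = ntpath.join("D:\\", node_folder)

print(target)  # D:\  -- joining an empty folder name leaves the bare drive root
# shutil.rmtree(target)  # ...and recursively deleting *that* is the whole drive
```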

[–] invictvs@lemmy.world 29 points 20 hours ago (3 children)

Some day someone with a high military rank, in one of the nuclear-armed countries (probably the US), will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That's how "Judgement Day" is going to happen imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron will give permission to launch nukes, and the stupid thing will launch them and then apologise.

[–] crank0271@lemmy.world 3 points 8 hours ago

"No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth..."

[–] immutable@lemmy.zip 10 points 17 hours ago (1 children)

I have been into AI Safety since before ChatGPT.

I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.

The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give them root access to the internet and fall all over themselves inventing competing protocols to empower them to do stuff without our supervision.

[–] snugglesthefalse@sh.itjust.works 2 points 10 hours ago

The biggest concern I've had, ever since I first became really aware of the potential of AI, was that someone would eventually do something stupid with it while thinking they were fully in control, despite the whole thing being a black box.

[–] Michal@programming.dev 46 points 1 day ago* (last edited 1 day ago)

Thoughts for 25s

Prayers for 7s

[–] SlykeThePhoxenix@programming.dev 26 points 1 day ago (1 children)

I love how it just vanishes into a puff of logic at the end.

[–] T00l_shed@lemmy.world 3 points 17 hours ago

"Logic" is doing a lot of heavy lifting there lol

[–] glitchdx@lemmy.world 32 points 1 day ago (2 children)

lol.

lmao even.

Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.

[–] NotASharkInAManSuit@lemmy.world 29 points 1 day ago (3 children)

How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, one that is still in alpha, read and write access to their god damned system files? They are a dangerously stupid human being and they 100% deserved this.

[–] pyre@lemmy.world 7 points 22 hours ago (5 children)
[–] cupcakezealot@piefed.blahaj.zone 5 points 18 hours ago

a misspelling of antimavity.

[–] laurelraven@lemmy.zip 60 points 1 day ago (14 children)

And the icing on the shit cake is it peacing out after all that

[–] Iheartcheese@lemmy.world 11 points 1 day ago