1242
you are viewing a single comment's thread
view the rest of the comments
[-] shea@lemmy.blahaj.zone 13 points 5 months ago

They're not "smart enough to be tricked" lolololol. They're too complicated to have precise guidelines. If something as simple and stupid as this can't be prevented by the world's leading experts idk. Maybe this whole idea was thrown together too quickly and it should be rebuilt from the ground up. we shouldn't be trusting computer programs that handle sensitive stuff if experts are still only kinda guessing how it works.

[-] BatmanAoD@programming.dev 2 points 5 months ago

Have you considered that one property of actual, real-life human intelligence is being "too complicated to have precise guidelines"?

[-] Cethin@lemmy.zip 0 points 5 months ago

Not even close to similar. We can create rules and a human can understand if they are breaking them or not, and decide if they want to or not. The LLMs are given rules but they can be tricked into not considering them. They aren't thinking about it and deciding it's the right thing to do.

[-] mikey@sh.itjust.works 4 points 5 months ago

Have you heard of social engineering and phishing? I consider those to be analogous to uploading new rules for ChatGPT, but since humans are still smarter, phishing and social engineering seems more advanced.

load more comments (2 replies)
load more comments (8 replies)
load more comments (8 replies)
this post was submitted on 10 Apr 2024
1242 points (99.0% liked)

Programmer Humor

19277 readers
1141 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

Rules

founded 1 year ago
MODERATORS