this post was submitted on 07 Apr 2026
310 points (98.7% liked)

Fuck AI

6692 readers
41 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 
top 41 comments
[–] renzhexiangjiao@piefed.blahaj.zone 117 points 4 days ago (2 children)
[–] Madrigal@lemmy.world 62 points 4 days ago (1 children)

I’ve literally seen someone include “Don’t hallucinate” in an agent’s instructions.

[–] rozodru@piefed.world 28 points 4 days ago (1 children)

Asking Claude not to hallucinate is like telling a person not to breathe. It's gonna happen, and happen consistently.

[–] FrederikNJS@piefed.zip 39 points 4 days ago (1 children)

I think the important bit to understand here is that LLMs are never not hallucinating. They just sometimes happen to hallucinate something correct.

[–] Kirk@startrek.website 22 points 4 days ago

This fact of how LLMs work is not at all widespread enough IMO.

[–] driving_crooner@lemmy.eco.br 20 points 4 days ago

"Include no bugs"

[–] Ibuthyr@feddit.org 83 points 4 days ago (3 children)

Writing all these prompts almost seems more time-consuming than actually programming the software yourself.

[–] sundray@lemmus.org 28 points 4 days ago (1 children)

Absolutely true, but executives kind of understand prompts whereas they don’t understand programming at all.

[–] underisk@lemmy.ml 13 points 4 days ago* (last edited 4 days ago) (1 children)

I would wager quite a lot that less than one out of every ten executives could properly explain what an SQL injection is, or even know the term at all. They would not write a prompt like this.
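For anyone in that nine-out-of-ten group, SQL injection is what happens when user input is glued into the query text itself. A minimal illustration (the table and column names here are made up for the example, not from the thread):

```java
// Hypothetical sketch of why string-built SQL is injectable, and the
// parameterized alternative. All identifiers are illustrative.
public class SqlInjectionDemo {
    // UNSAFE: attacker input becomes part of the SQL text itself.
    static String unsafeQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // A classic payload turns the WHERE clause into a tautology,
        // so the query matches every row instead of one user.
        String payload = "' OR '1'='1";
        System.out.println(unsafeQuery(payload));
        // -> SELECT * FROM users WHERE name = '' OR '1'='1'

        // SAFE: with JDBC, a PreparedStatement sends the query and the
        // value separately, so the payload stays plain data:
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, userInput);
    }
}
```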

[–] ChickenLadyLovesLife@lemmy.world 6 points 3 days ago (1 children)

I worked for one executive who read an article about APIs and came to me and told me to start using APIs. In 2010. I told him it sounded good and I would look into it.

[–] themaninblack@lemmy.world 3 points 3 days ago

My exec asked “how is the ODBC coming?” We were a Linux shop. Also we weren’t, like, doing anything near the type of work that… it still baffles me. After a beat I said “good.”

[–] jtrek@startrek.website 22 points 4 days ago (1 children)

100%

At work, this week, what should have been a 30 minute task is taking all week because of process slog. Adding AI won't make it any faster. It would make it slower, because of the time writing the prompts and checking its output.

Management isn't really interested in fixing their processes or training their workers. But they're really excited about AI.

[–] chocrates@piefed.world 21 points 4 days ago (1 children)

They are excited that they can learn a tool that uses English to write their business logic. It's not about AI making it easier for technical folks, it's about eventually getting rid of technical folks entirely. Or as much of them as they can feasibly get away with.

[–] jtrek@startrek.website 16 points 4 days ago (1 children)

Right. Ownership doesn't want to pay for labor. They want to keep all the money for themselves.

Which makes it funny (in a sad way) when all these tech folks, who are labor, are super on board with this whole thing. You're digging your own grave.

[–] chocrates@piefed.world 7 points 4 days ago (1 children)

I'm starting to learn it more deeply. I hate it. I don't have a career if programming goes away, though, so I guess I'm making a deal with the devil while I try to find an exit strategy.

[–] JcbAzPx@lemmy.world 6 points 4 days ago (1 children)

You don't have to worry long term. The only issue is how hard your boss falls for the snake oil sales pitch.

[–] chocrates@piefed.world 3 points 4 days ago (1 children)

I don't think LLMs are going away. OpenAI will die, Claude will jack up their prices to match their costs, but the technology isn't going away. At least until the next iteration shows up.

[–] JcbAzPx@lemmy.world 3 points 4 days ago

What's also not going away is the truth of their actual abilities. The only people who really have to worry are the ones in the entertainment industry.

[–] darklamer@feddit.org 5 points 3 days ago

The great prof. dr. Edsger W. Dijkstra wrote exactly that already in his 1978 essay On the foolishness of "natural language programming":

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

[–] volore@scribe.disroot.org 53 points 4 days ago* (last edited 4 days ago)
[–] puchaczyk@lemmy.world 47 points 4 days ago (1 children)

So the innovation in Claude was to write 95% of the prompt for the user and make you use like 10k tokens?

[–] floquant@lemmy.dbzer0.com 7 points 4 days ago (1 children)

The problem is that words don't have meaning in the genAI field. Everything is an agent now. So it's difficult and confusing to compare strategies and performance.

Claude Code is a pretty solid harness. And a harness is indeed just prompts and tools.
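To make the "just prompts and tools" point concrete, here's a toy harness loop. Everything in it — the tool names, the fake model — is invented for illustration; a real harness would call an LLM API and parse tool calls out of its replies:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical toy "harness": a system prompt plus a dispatch table of
// tools the model may call, driven in a loop. Not any real product's code.
public class ToyHarness {
    static final String SYSTEM_PROMPT =
        "You are a coding agent. Call tools to read files and run tests.";

    // Tools: name -> function from argument string to result string.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
        "read_file", path -> "<contents of " + path + ">",
        "run_tests", ignored -> "all tests passed"
    );

    // Stand-in for a model call; a real one would send the transcript to
    // an LLM endpoint and parse a tool call (or "done") out of the reply.
    static String fakeModel(List<String> transcript) {
        return transcript.size() < 2 ? "read_file:main.java" : "done";
    }

    public static void main(String[] args) {
        List<String> transcript = new java.util.ArrayList<>(List.of(SYSTEM_PROMPT));
        while (true) {
            String reply = fakeModel(transcript);
            if (reply.equals("done")) break;
            String[] call = reply.split(":", 2);      // "tool:args"
            String result = TOOLS.get(call[0]).apply(call[1]);
            transcript.add(result);                   // feed result back in
        }
        // Transcript now holds the system prompt plus one tool result.
        System.out.println(transcript.size());
    }
}
```

That loop really is the whole trick; the engineering effort goes into the prompts and the tool implementations.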

[–] JackbyDev@programming.dev 7 points 4 days ago

✨agent✨

Sort of like how everything is an "app" now.

[–] Hamartiogonic@sopuli.xyz 45 points 4 days ago (1 children)

Just write good code. It’s as simple as that, right?

[–] volore@scribe.disroot.org 37 points 4 days ago* (last edited 4 days ago) (1 children)

>adds "don't be evil" to system prompt

GUYS I SOLVED THE ALIGNMENT PROBLEM! We're saved from evil AI!

[–] fargeol@lemmy.world 38 points 4 days ago
[–] one_old_coder@piefed.social 29 points 4 days ago

They are spending thousands of dollars in tokens and writing the most complicated prompts in order to avoid writing good specifications.

[–] umbraroze@slrpnk.net 16 points 4 days ago (1 children)

"Don't put in any of the Top 10 vulnerabilities. But if you put any from the 11th place and down, that's okay, I don't even know what those are."

(Also, getting flashbacks from Shadiversity plugging "ugly art" and "bad anatomy" in the negative prompt as he was no doubt silently wondering why it didn't work)

[–] SlurpingPus@lemmy.world 7 points 4 days ago

“In other news, popularity of attacks against OWASP vulnerabilities #11-20 rose sharply.”

[–] arcine@jlai.lu 9 points 4 days ago

Oh boy, if there's an OWASP top 11th vulnerability, we're cooked /j

[–] JackbyDev@programming.dev 9 points 4 days ago

I sort of get the need to do this, but it's so silly to me. Reminds me of how giving Stable Diffusion negative prompts for "bad" and "low quality" would give you better results.

[–] Damage@feddit.it 10 points 4 days ago

"Claude, add to this prompt all the instructions necessary to stop you from making mistakes or writing insecure code"

[–] yetAnotherUser@discuss.tchncs.de 11 points 4 days ago (1 children)

That may actually work a little?

I mean, it scraped the entirety of Stack Overflow. If someone answered with insecure code, it's statistically likely that people mentioned it in the replies, meaning the token "This is insecure" (or similar) should sit close to (known!!) insecure code.

[–] addie@feddit.uk 14 points 4 days ago

I was part of that OWASP Application Security Verification Standard compliance effort at my work. At a high level, you choose a compliance level that's suitable for the environment you expect your app to be deployed in, and then there's a hundred pages of 'boxes to tick'. (Download here.)

Some of them are literal 'boxes to tick' - do you do logging in the prescribed way? - but a lot of it is:

  • do you follow the standard industry protocols for doing this thing?
  • can you prove that you do so, and have protocols in place to keep it that way?

Not many of them are difficult, but there's a lot of them. I'd say that's typical of security hardening; the difficulty is in the number of things to keep track of, not really any individual thing.

As regards the 'have you used this thing in the correct, secure way?' questions, I'd point my finger at something like Bouncy Castle as a troublemaker, although it's far from alone. It's the de facto standard Java crypto library, so you'd think there would be a lot of examples showing the correct way to use it and making sure you're aware of any gotchas? Hah hah, fat chance. Stack Overflow has a lot of examples; a lot of them are bad, and a lot of them might have been okay once but are now very outdated. I would prefer one absolutely correct example to a hundred examples argued over by people who don't necessarily know any better. It's easy to be 'convincing but wrong', and LLMs are really bad in exactly that way. So 'ticking the box' to say that you're using it correctly is extremely difficult.
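For what one such "absolutely correct" example might look like: here's a minimal AES-GCM round trip using the standard JCA API (`javax.crypto`, which Bouncy Castle also plugs into as a provider), with the classic IV gotcha called out. This is a sketch added for illustration, not code from the thread:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class AesGcmSketch {
    // Encrypts then decrypts msg under a throwaway key and returns the
    // recovered plaintext. Real code would load the key from a KMS or KDF.
    static byte[] roundTrip(byte[] msg) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        // Gotcha: GCM needs a unique 12-byte IV for every encryption under
        // the same key. Reusing one breaks both secrecy and authenticity.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = enc.doFinal(msg); // ciphertext plus 128-bit auth tag

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return dec.doFinal(ct); // throws AEADBadTagException if tampered with
    }

    public static void main(String[] args) throws Exception {
        byte[] pt = roundTrip("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(pt, StandardCharsets.UTF_8)); // hello
    }
}
```

Even this short block hides three decisions (key size, IV length, tag length) that the bad Stack Overflow answers routinely get wrong.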

I see the Claude prompt is 'OWASP top 10', not 'the full OWASP compliance doc', which would probably set all your tokens on fire. But it's what's needed - the most slender crack in security can be enough to render everything useless.

[–] lath@lemmy.world 9 points 4 days ago

That's a what if, just in case it gains sentience. Gotta make sure we get good code even as it enslaves or extinguishes us.

[–] melsaskca@lemmy.ca 8 points 4 days ago

Programming is the use of logic and reasoning. There will always be a use for that. Even without tech.

[–] 8oow3291d@feddit.dk 4 points 4 days ago (1 children)

So I don't know if all the other replies are pretending to be stupid, but the shown prompt is not stupid.

If you include stuff like that section in your prompt, then it has been shown that the AI will be more likely to output secure code. Hence of course the section should be included in the prompt.

If it looks stupid but it works, then it is not stupid.

[–] Chais@sh.itjust.works 13 points 4 days ago (1 children)

Firstly, it can work and still be stupid.
Secondly, since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work and is therefore, by your own reasoning, stupid.
But so is asking a chat bot for code to begin with, so there wasn't ever really a way around that.

[–] 8oow3291d@feddit.dk -3 points 4 days ago (1 children)

since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work

Humans are not certain to write secure, bug-free code. So human code is useless, by the very same metric?

What kind of "logic" is that?

[–] JcbAzPx@lemmy.world 10 points 4 days ago

Humans understand the concepts of "writing code" and "bug fixing". Chat bots do not understand, period.