If you know enough to verify a translation as accurate, or you have the tools to figure out an accurate translation through dictionaries or some such, then you know enough to do the translation yourself. If you don't, then I cannot trust your translation.
Quick reminder: rhyming dictionaries exist. LLMs solved a solved problem, but worse.
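(For the curious, a minimal sketch of what that solved problem looks like, assuming the Python `pronouncing` package, which wraps the CMU Pronouncing Dictionary; the lookup word is just an example.)

```python
# Rhyme lookup from an actual rhyming dictionary, no LLM involved.
# Assumes: pip install pronouncing (a thin wrapper around the
# CMU Pronouncing Dictionary).
import pronouncing

def rhymes_for(word: str, limit: int = 10) -> list[str]:
    """Return up to `limit` dictionary words that rhyme with `word`."""
    return pronouncing.rhymes(word)[:limit]

print(rhymes_for("code"))  # e.g. ['bode', 'bowed', 'commode', ...]
```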
Once again, even if the billionaire's toxic Nazi plagiarism machine were useful, it is so morally repugnant that it should never be used, which makes it functionally useless. This is an absolute statement, but trying to "um actually" it makes you look like a boot-licker, a polluter, a Nazi, a plagiarist, an idiot, or some combination of those.
I would rather look like an absolutist. How about you?
Correct. But it's going to take me a lot more work and time, possibly to the point of not being feasible, and probably to the point of matching the energy cost of running the LLM over the entire task.
I wouldn't. I don't know where you got that. Adding LLM-based analysis to your toolkit to spot important issues that might otherwise have gone unnoticed is just that: an addition. Not a replacement for anything. And at this point it is demonstrably useful for that; there's just no denying it.
My point is that if you are this confidently wrong about the capabilities of LLM-based tools, then why should I believe you to be any less wrong about the moral and ethical issues you're raising? It looks like you're either completely misinformed or deliberately fighting a strawman for part of your argument, which gives anyone on the other side an easy excuse to dismiss your entire argument out of hand. That's what I'm trying to get across here.
Are you saying that you need perfect technical knowledge of AI to know whether a person who promotes it is immoral? That looks like a non sequitur to me.
No, that's not what I'm saying. I'm saying that if someone wants their argument to be taken seriously, they should be willing to reevaluate parts of it that they're very obviously wrong about, especially if, by their own admission, those parts don't even matter in the face of the rest of the argument.
I'm just fed up with people feeling the need to have strong opinions on everything, even if they don't actually know much about it. It's fine if you don't know anything about how capable current LLMs actually are. Especially as an opponent of LLMs for moral reasons, it makes total sense that you'd just be avoiding them and thus not really be that informed. It does not in any way weaken your argument.

As long as you seem to have a good grip on what you know and what you don't know, it's all good. But being confidently wrong about things and refusing to reevaluate when getting pushback on that just signals that you neither know nor care about the limits of your own knowledge, and makes the entirety of your argument untrustworthy.
Surely the energy cost of verifying the translation would be the same as the cost of producing it? If you're struggling that much, why are you translating it at all? I cannot trust your translation.
If you tell an LLM to generate reports, it will, regardless of the actual quality of the environment. It doesn't know what's secure and what isn't. All you've shown is that it can take a security analyst whose system is so insecure that it yields a LOT of genuine findings, and convince them that their system is more secure than it actually is. Which is useless at best, detrimental at worst.
It's useless for translation. It's useless for security analysis. It's useless for rhyming (I notice you didn't mention that one). You're trying so hard to prove how useful it is, and your failure demonstrates how useless it is.
You can't condemn confident wrongness and defend LLMs. And you can't defend the billionaire's toxic Nazi plagiarism machine while questioning someone else's morals. You can't cherry-pick my argument and claim I'm the one fighting a strawman. ...Well, not if you're arguing in good faith.