this post was submitted on 29 Apr 2026
753 points (99.6% liked)
linuxmemes
Sadly, it seems like they're going to be pro-AI internally: https://discuss.kde.org/t/sorry-to-bring-up-a-contentious-topic-kde-ai-llm-policy/46333 (If you jump in to comment, please try to be constructive rather than full of rage.)
There are appropriate and legitimate use cases for AI, especially when locally hosted, and tech/programming is one of the few. The problem is when it's shoved in everyone's face for everything and all the data goes to tech conglomerates.
Some of us respectfully disagree with LLMs for programming being "appropriate and legitimate", at least if that involves generating code and not just locating bugs.
Local LLMs retain significant issues like the one shown in this clip: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 That is, unless your model uses 100% properly licensed training data, which no code LLM I have found appears to do.
Locating bugs is one of the most important tasks in programming, and if devs can't do that, nor are willing to learn to, they are fucked.
There's no other way of saying it. Can't wait for the AI bubble to pop.
LLMs can sometimes point out potential trouble spots, which is also one of the uses that doesn't necessarily inject problematic code (if the LLM is prevented from suggesting a fix). But sadly, that doesn't seem to be the type of use KDE is currently limiting itself to.
You are using current AI as your baseline. There will come a point where writing code will mean zero bugs or vulnerabilities. Humans cannot do that. AI will, whether we want it or not, one day be able to. I don't know if we're talking 10 years or 40, but it will happen.
LOL at that.
LLMs need to disappear before that happens.
In order to have no bugs at all, for anything to produce perfect software, you need perfectly defined business rules, and if managers could write those, they wouldn't have needed developers for decades.
If we get AI that can produce perfect code, you won't have access to it. Why give everyone something so powerful when you can run circles around everyone by keeping it to yourself?
If one company can make it, then others will make it too. Someone will be the first, but others will follow behind. It is too critical to each country's national security not to research it themselves, let alone the profit the companies can make. It will definitely be a while before someone like me gets access, and even longer before it is cost effective, but it will eventually happen.
I should have been clearer. I meant exploitable vulnerabilities in the software. "Bugs" and "features" can have an overlap, but that's not what I meant. The only attack surface left would be the human one, which would still be a massive vulnerability like it currently is.
That's not how anything works.
You are assuming a god-like coder entity which can consider everything, and that's a whole new problem which we can't solve right now.
And if it's a matter of national security, it won't be shared with others, so if one country stumbles upon it, the others won't know how.
Agreed, or even using something like Adobe Firefly (it only trains on public domain images).
O rly? Hm
At least it claims it's "ethical" by only training on public domain images.
Damn, that is a long thread. I spent like 25 minutes reading it and only got halfway through.
Given the nature of this controversial subject are you honestly surprised?
Yeah, not really.
Especially because this is FOSS. We love our insanely long discussions about formalities.
Yeah, I've seen much longer threads about things much less consequential. People like to argue online. More news at 11.