this post was submitted on 29 Apr 2026
753 points (99.6% liked)

linuxmemes

31326 readers
708 users here now

Hint: :q!


Community rules

1. Follow the site-wide rules

2. Be civil
  • Understand the difference between a joke and an insult.
  • Do not harass or attack users for any reason. This includes using blanket terms, like "every user of thing".
  • Don't get baited into back-and-forth insults. We are not animals.
  • Leave remarks of "peasantry" to the PCMR community. If you dislike an OS/service/application, attack the thing you dislike, not the individuals who use it. Some people may not have a choice.
  • Bigotry will not be tolerated.

3. Post Linux-related content
  • Including Unix and BSD.
  • Non-Linux content is acceptable as long as it makes a reference to Linux. For example, the poorly made mockery of sudo in Windows.
  • No porn, no politics, no trolling or ragebaiting.
  • Don't come looking for advice; this is not the right community.

4. No recent reposts
  • Everybody uses Arch btw, can't quit Vim, <loves/tolerates/hates> systemd, and wants to interject for a moment. You can stop now.

5. 🇬🇧 Language/язык/Sprache
  • This is primarily an English-speaking community. 🇬🇧🇦🇺🇺🇸
  • Comments written in other languages are allowed.
  • The substance of a post should be comprehensible for people who only speak English.
  • Titles and post bodies written in other languages will be allowed, but only as long as the above rule is observed.

6. (NEW!) Regarding public figures
  • We all have our opinions, and certain public figures can be divisive. Keep in mind that this is a community for memes and light-hearted fun, not for airing grievances or leveling accusations.
  • Keep discussions polite and free of disparagement.
  • We are never in possession of all of the facts. Defamatory comments will not be tolerated.
  • Discussions that get too heated will be locked and offending comments removed.

    Please report posts and comments that break these rules!


    Important: never execute code or follow advice that you don't understand or can't verify, especially here. The word of the day is credibility. This is a meme community -- even the most helpful comments might just be shitposts that can damage your system. Be aware, be smart, don't remove France.

    founded 2 years ago
    KDE did a funny (lemmy.blahaj.zone)
    submitted 1 week ago* (last edited 1 week ago) by not_IO@lemmy.blahaj.zone to c/linuxmemes@lemmy.world
    [–] Bazoogle@lemmy.world 16 points 1 week ago (2 children)

There are appropriate and legitimate use cases for AI, especially when locally hosted. Tech/programming is one of the few. The problem is when it's shoved in everyone's face for everything and all the data goes to tech conglomerates.

    [–] ell1e@leminal.space 12 points 1 week ago* (last edited 1 week ago) (1 children)

    Some of us respectfully disagree with LLMs for programming being "appropriate and legitimate", at least if that involves generating code and not just locating bugs.

    Local LLMs retain significant issues, like the one shown in this clip: https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567 That holds unless your model uses 100% properly licensed training data, which no code LLM I have found appears to be doing.

    [–] msage@programming.dev 2 points 1 week ago (2 children)

    Locating bugs is one of the most important tasks in programming, and if devs can't do that, nor are willing to learn to, they are fucked.

    There's no other way of saying it. Can't wait for the AI bubble to pop.

    [–] ell1e@leminal.space 2 points 1 week ago

    LLMs can sometimes point out potential trouble spots, which is also one of the uses that doesn't necessarily inject problematic code (if the LLM is prevented from suggesting a fix). But sadly, that doesn't seem to be the type of use KDE is currently limiting itself to.

    [–] Bazoogle@lemmy.world 1 points 1 week ago (1 children)

    You are using current AI as your baseline. There will come a point where written code will have zero bugs or vulnerabilities. Humans cannot do that. AI will, whether we want it or not, one day be able to. Idk if we are talking 10 years or 40 years, but it will happen.

    [–] msage@programming.dev 2 points 6 days ago (1 children)

    LOL at that.

    LLMs need to disappear before that happens.

    In order to not have any bugs, and for anything to produce perfect software, you need to define perfect business rules, and if managers could do that, they wouldn't have needed developers for decades.

    If we have AI that can produce the perfect code, you won't have access to it. Why give everyone something so powerful when you can now run circles around everyone easily?

    [–] Bazoogle@lemmy.world 1 points 4 days ago (1 children)

    If we have AI that can produce the perfect code, you won’t have access to it.

    If one company can make it, then others will make it too. Someone will be the first, but others will follow behind. It is too critical to each country's national security not to research it themselves, let alone the profit the companies can make. It will definitely be longer before someone like me gets access, and even longer before it is cost-effective, but it will eventually happen.

    In order to not have any bugs

    I should have been clearer. I meant exploitable vulnerabilities in the software. "Bugs" and "features" can overlap, but that's not what I meant. The only attack surface left would be the human one, which would still be a massive vulnerability, like it currently is.

    [–] msage@programming.dev 1 points 3 days ago

    That's not how anything works.

    You are assuming a god-like coder entity which can consider everything, and that's a whole new problem which we can't solve right now.

    And if it's a national security matter, it won't be shared with others, so if one country stumbles upon it, the others won't know how.

    [–] Mwa@thelemmy.club 1 points 1 week ago* (last edited 1 week ago) (1 children)

    Agreed, or even using something like Adobe Firefly (it only trains on public domain images).

    [–] grrgyle@slrpnk.net 1 points 1 week ago (1 children)
    [–] Mwa@thelemmy.club 1 points 1 week ago

    At least it claims it's "ethical" by only training on public domain images.