this post was submitted on 12 Mar 2026
270 points (95.3% liked)

PC Gaming

[–] TheObviousSolution@lemmy.ca 8 points 1 day ago (2 children)

No reason to hide the Claude contributions if his reasoning isn't flawed. Honestly, my biggest beef is using AI as a subscription service. There are plenty of local LLM alternatives, and subscriptions just feed the incentive for the web crawlers currently assaulting the Internet, and for the companies that also tuck surveillance into their products and train on your use of their AI.

Honestly, it can save you from having to search through wordy API documentation, as long as you bother to make sure you end up knowing the hows and whys of what it is presenting you, and whether it is good programming methodology. In a lot of ways it is no different from, and even faster than, having to search for answers through support forums and Stack Overflow. It might be built upon IP theft, but unfortunately, in practical terms, you will be at a disadvantage against people who use it, so you might as well use it in a way that does not give these companies any way to profit off of it (local LLMs). I'd argue that the case against these applications of generative AI is very different from the case against image and video generation.

The biggest problem is when developers begin to depend on it too much without learning the nuance, but it would be a lie to say there aren't already a lot of developers who contribute to open source without really bothering to familiarize themselves with the code, and who are more interested in the end goal than in best practices. I'm not sure whether this will make the problem better or worse, but devs who use AI without bothering to learn will have a hell of a time providing proper maintenance for their code.

[–] BillyTheKid2@lemmy.ca 3 points 1 day ago (1 children)

Not disagreeing with you, but Anthropic believes code is the path to AGI.

I want to be clear so somebody doesn't have a fit - I do not personally believe LLMs are capable of AGI. But this isn't about what I believe.

They believe that coding is the path because it's verifiable and generatable. Frontier AI companies aren't training on the global internet anymore; it's poisoned with AI slop. Non-frontier AI companies do, we've all seen it. But it's my opinion that non-frontier AI companies are basically all but irrelevant (I'm not talking about open source/Hugging Face). Anthropic knows this, and their idea (again, not mine, don't get mad at me please!) is that by training on code their AI will get better at non-coding activities as well, and that if they make it good enough at coding it'll become truly intelligent in all ways.
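The "verifiable" part can be sketched in a few lines: a model-generated function can be graded mechanically against test cases, which free-form prose can't be. The function and tests below are purely hypothetical illustrations, not anything from Anthropic's actual pipeline:

```python
# Why code is a "verifiable" training signal: candidate solutions can be
# checked automatically against test cases, unlike free-form prose.
def verify(candidate_fn, test_cases):
    """Return True if the candidate passes every (input, expected) pair."""
    return all(candidate_fn(x) == expected for x, expected in test_cases)

def candidate(n):
    # Stand-in for a model-generated solution to "square a number".
    return n * n

print(verify(candidate, [(2, 4), (3, 9), (-1, 1)]))  # True
```

A pass/fail result like this can be fed straight back as a reward signal, which is much harder to do for, say, an essay.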

What I'm getting at is, there are lots of good reasons to avoid using LLMs/AIs/companies that shove AI down my throat (looking at you, Microsoft - I don't fucking want Copilot in my fucking Notepad - if anybody from MS is reading this, fuck your AI in everything and fuck your AI-ridden operating system), but local LLMs are not a replacement for Opus, and Anthropic isn't scraping the open internet anymore. I'm sure they did at first, though.

The biggest problem is when developers begin to depend on it too much without learning the nuance

I couldn't agree more. The brain is like a muscle: if you use it, it gets stronger; if you don't, it gets weaker. "Vibe" coding uses your brain at a bare minimum, and if all you do is vibe out slop, you're not really learning much.

[–] TheObviousSolution@lemmy.ca 0 points 19 hours ago (2 children)

local LLMs are not a replacement for Opus

https://www.bitdoze.com/best-open-source-llms-claude-alternative/

Something tells me you haven't even made the effort. They are not as good, in the same way that LibreOffice is not as good as Excel. But if you are going to make the argument you quote, then you can work that brain muscle and adapt.

And they aren't training off of the Internet because they are training on your input. It's mind-boggling to me how some people are so willing to train their replacements while also paying for the privilege of doing so, all for a very temporary advantage in the future we are heading toward. A lot of your criticism doesn't even apply to local LLMs: either they are trained by model distillation from more advanced models, or they are snapshots frozen in time.

It's also telling how implicitly willing you seem to be to let the Internet burn, because the inevitability is becoming a corporate slave and accepting ever-increasing subscription fees which you can't ignore because "hey, they've got the most users, the Internet is too dead, your open alternatives are no replacement for us". You say you are not, but you are saying everything an AGI astroturfer would be saying, and the irony of hearing this on an open source "federated" platform rather than something like Reddit is palpable.
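For anyone unfamiliar with the term, model distillation roughly means training a small "student" model to imitate the softened output distribution of a larger "teacher". A framework-free sketch of the core loss (illustrative only; real pipelines typically use KL divergence over full logit tensors, and none of these numbers come from any actual model):

```python
# Minimal sketch of a distillation loss: the student is penalized for
# diverging from the teacher's temperature-softened output distribution.
import math

def softmax(logits, temperature=2.0):
    # Higher temperature flattens the distribution, exposing the teacher's
    # "dark knowledge" about near-miss classes.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy of the student against the teacher's soft targets.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))
```

The loss is minimized when the student's distribution matches the teacher's, so a student that agrees with the teacher scores lower than one that disagrees.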

[–] Evotech@lemmy.world 1 points 7 hours ago (1 children)

Sorry but it’s not even slightly comparable.

Frontier models vs whatever you can realistically host on your own, that is.

[–] TheObviousSolution@lemmy.ca 1 points 6 hours ago* (last edited 6 hours ago) (1 children)

That you don't want to or aren't able to compare them doesn't mean they can't be compared. You do you, or more aptly, have an AI do you since you can't bother.

[–] Evotech@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago)

Oh I’ve tried. Don’t assume I haven’t.

In terms of functionality on paper it’s similar. In terms of what they can realistically do it’s not.

[–] BillyTheKid2@lemmy.ca 1 points 12 hours ago (1 children)

I could have worded that differently, I apologize.

They aren't a replacement for somebody like me who doesn't have a screaming GPU.

Yes, they train on input. I don't like it either. It's not just creepy; I'm sure it breaks privacy laws everywhere.

Regardless, you've already decided who I am so I don't see this conversation being productive.

I again apologize for not making my previous comment more straightforward.

[–] TheObviousSolution@lemmy.ca 1 points 11 hours ago* (last edited 11 hours ago)

Oh, I don't think I know who you are, I just think it's indiscernible.

They aren’t a replacement for somebody like me who doesn’t have a screaming GPU.

You can run small LLMs that are still surprisingly good purely on modern CPUs, although I'm sure that's part of the intent behind trying to lock down hardware supplies behind the bubble.
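The back-of-the-envelope arithmetic backs this up: a quantized small model fits comfortably in ordinary laptop RAM. A rough estimate (illustrative numbers only; it ignores KV cache and runtime overhead, so real usage runs somewhat higher):

```python
# Rough memory footprint of a quantized model: parameters * bits / 8.
def quantized_model_bytes(n_params: float, bits_per_weight: int) -> float:
    return n_params * bits_per_weight / 8

# A 3-billion-parameter model at 4-bit quantization:
print(quantized_model_bytes(3e9, 4) / 1e9)  # 1.5 (GB)
```

At ~1.5 GB of weights, no GPU is required; CPU inference on such a model is slow compared to a datacenter, but entirely usable.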

[–] chicken@lemmy.dbzer0.com 2 points 1 day ago (1 children)

It's only slop if you don't know what you're doing and/or are using low-quality tools. I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in catching me up on everything I wasn't able to do last year because of health issues / depression.

It sounds like they thought it through and decided it's the best way to do the work. Removing the attributions seems like a bit of a petty "fuck you", but so is opening a GitHub issue just to whine about AI. Someone who is volunteering their time to make free software shouldn't have to put up with people with an ideological bone to pick who feel entitled to tell them how to do it.

[–] TheObviousSolution@lemmy.ca 1 points 8 hours ago

Then they can ignore the issue, just like plenty of other troll issues that devs ignore. He actually had to invest more effort into doing what he did than into just ignoring it, never mind how that Barbra-Streisanded the whole thing. Honestly, it would have been better for the guy if he weren't doing advertising for Claude by letting it mark the parts of the code it had coauthored, but once the cat is out of the bag you can't just take something like that back. I agree that if people really have a problem with it, then they should just fork and do it on their own time, but you have to expect public criticism if you do things in public; if you can't accept that, don't.