this post was submitted on 12 Mar 2026

PC Gaming

[–] BillyTheKid2@lemmy.ca 3 points 1 day ago (1 children)

Not disagreeing with you, but Anthropic believes code is the path to AGI.

I want to be clear so somebody doesn't have a fit - I do not personally believe LLMs are capable of AGI. But this isn't about what I believe.

They believe that coding is the path because it's verifiable and generatable. Frontier AI companies aren't training on the global internet anymore; it's poisoned with AI slop. Non-frontier AI companies still do, and we've all seen the results. But in my opinion, non-frontier AI companies are all but irrelevant (I'm not talking about open source/Hugging Face). Anthropic knows this, and their idea (again, not mine, don't get mad at me please!) is that by training on code their AI will get better at non-coding activities as well, and that if they make it good enough at coding, it'll become truly intelligent in all ways.
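The "verifiable" part is the key mechanical point: unlike prose, generated code can be checked automatically against tests, which gives a clean training signal without human labelers. A toy sketch of that idea (all names here are hypothetical, not anyone's actual pipeline):

```python
# Toy sketch: why code is "verifiable" training data.
# A candidate solution can be checked mechanically against test cases;
# the checker doesn't care whether a human or a model wrote it.

def verify(candidate, test_cases):
    """Return True if the candidate function passes every test case."""
    return all(candidate(inp) == expected for inp, expected in test_cases)

# Imagine these two candidates came from a model.
good = lambda x: x * 2
bad = lambda x: x + 2

tests = [(1, 2), (5, 10), (0, 0)]

print(verify(good, tests))  # True  -> keep as a positive training signal
print(verify(bad, tests))   # False -> rejected automatically
```

Scale that checker up to millions of generated programs and you have an endless supply of self-grading training data, which is roughly the bet being described.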

What I'm getting at is, there's lots of good reasons to avoid using LLMs/AIs/Companies that shove ai down my throat (looking at you Microsoft- I don't fucking want copilot in my fucking notepad - if anybody from MS is reading this fuck your AI in everything and fuck your AI ridden operating system), but local LLMs are not a replacement for Opus and Anthropic isn't scraping the open internet anymore. I'm sure they did at first though.

The biggest problem is when developers begin to depend on it too much without learning the nuance

I couldn't agree more. The brain is like a muscle: if you use it, it gets stronger; if you don't, it gets weaker. "Vibe" coding uses your brain at the bare minimum, and if all you do is vibe out slop, you're not really learning much.

[–] TheObviousSolution@lemmy.ca 0 points 20 hours ago (2 children)

local LLMs are not a replacement for Opus

https://www.bitdoze.com/best-open-source-llms-claude-alternative/

Something tells me you haven't even made the effort. They are not that good, in the same way that LibreOffice is not as good as Excel. But if you are going to make the argument you quote, then you can work that brain muscle and adapt.

And they aren't training off of the internet because they are training on your input. It's mind-boggling to me how some people are so willing to train their replacements, while also paying for the privilege, all for an advantage that will be very temporary in the future we're heading toward.

A lot of your criticism doesn't even apply to local LLMs: either they are trained by model distillation from more advanced models, or they are snapshots frozen in time. It's also telling how implicitly willing you seem to be to let the internet burn, because the inevitability is becoming a corporate slave and accepting ever-increasing subscription fees you can't ignore because "hey, we've got the most users, the internet is too dead, your open alternatives are no replacements for us". You say you are not, but you are saying everything an AGI astroturfer would say, and the irony of hearing this on an open-source, federated platform rather than something like Reddit is striking.

[–] Evotech@lemmy.world 1 points 8 hours ago (1 children)

Sorry but it’s not even slightly comparable.

Frontier models vs whatever you can realistically host on your own that is.

[–] TheObviousSolution@lemmy.ca 1 points 8 hours ago* (last edited 8 hours ago) (1 children)

That you don't want to or aren't able to compare them doesn't mean they can't be compared. You do you, or more aptly, have an AI do you since you can't bother.

[–] Evotech@lemmy.world 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

Oh I’ve tried. Don’t assume I haven’t

In terms of functionality on paper it’s similar. In terms of what they can realistically do it’s not.

[–] TheObviousSolution@lemmy.ca 1 points 56 minutes ago

In other words, it is a task an AI is better at than you.

[–] BillyTheKid2@lemmy.ca 1 points 14 hours ago (1 children)

I could have worded that differently, I apologize.

They aren't a replacement for somebody like me who doesn't have a screaming GPU.

Yes, they train on input. I don't like it either. It's not just creepy; I'm sure it breaks privacy laws everywhere.

Regardless, you've already decided who I am so I don't see this conversation being productive.

I again apologize for not making my previous comment more straightforward.

[–] TheObviousSolution@lemmy.ca 1 points 13 hours ago* (last edited 13 hours ago)

Oh, I don't think I know who you are, I just think it's indiscernible.

They aren’t a replacement for somebody like me who doesn’t have a screaming GPU.

You can run small LLMs that are still surprisingly good purely on modern CPUs, although I'm sure that's part of the intent of trying to lock down supplies behind the bubble.
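For anyone curious what CPU-only inference looks like in practice, here's a minimal sketch using llama.cpp (assuming you've built or installed it and downloaded a quantized GGUF model; the model filename below is just an example, substitute whatever you actually have):

```shell
# Run a small quantized model entirely on CPU with llama.cpp.
# -m points at a quantized GGUF model file (example name, not a real download).
# -t sets the number of CPU threads; match it to your physical core count.
llama-cli \
  -m qwen2.5-3b-instruct-q4_k_m.gguf \
  -t 8 \
  -p "Explain what a GGUF file is in one sentence."
```

Quantized 3B-7B models in 4-bit formats typically fit in well under 8 GB of RAM and run at usable speeds on recent desktop CPUs, no GPU required.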