this post was submitted on 12 Mar 2026
270 points (95.3% liked)
PC Gaming
Not disagreeing with you, but Anthropic believes code is the path to AGI.
I want to be clear so somebody doesn't have a fit - I do not personally believe LLMs are capable of AGI. But this isn't about what I believe.
They believe coding is the path because it's verifiable and generatable. Frontier AI companies aren't training on the global internet anymore; it's poisoned with AI slop. Non-frontier AI companies still do, and we've all seen it. But it's my opinion that non-frontier AI companies are basically all but irrelevant (I'm not talking about open source/Hugging Face). Anthropic knows this, and their idea (again, not mine, don't get mad at me please!) is that by training on code their AI will get better at non-coding activities as well, and that if they make it good enough at coding it'll become truly intelligent in all ways.
What I'm getting at is: there are lots of good reasons to avoid LLMs/AIs/companies that shove AI down my throat (looking at you, Microsoft - I don't fucking want Copilot in my fucking Notepad - if anybody from MS is reading this, fuck your AI-in-everything and fuck your AI-ridden operating system), but local LLMs are not a replacement for Opus, and Anthropic isn't scraping the open internet anymore. I'm sure they did at first, though.
https://www.bitdoze.com/best-open-source-llms-claude-alternative/
Something tells me you haven't even made the effort. They are not that good, in the same way that LibreOffice is not as good as Excel. But if you are going to make the argument you quote, then you can work that brain muscle and adapt.
And they aren't training off of the internet because they're training on your input. It's mind-boggling to me how some people are so willing to train their replacements, while also paying for the privilege, in exchange for a very temporary advantage in the future we're heading toward. A lot of your criticism doesn't even apply to local LLMs: either they're trained by distillation from more advanced models, or they're snapshots frozen in time. It's also telling how willing you seem to be to let the internet burn, because the inevitability there is becoming a corporate slave and accepting ever-increasing subscription fees you can't ignore, because "hey, they've got the most users, the internet is too dead, your open alternatives are no replacement for us." You say you're not, but you're saying everything an AGI astroturfer would say, and the irony of hearing this on an open source, "federated" platform rather than somewhere like Reddit is palpable.
Sorry but it’s not even slightly comparable.
Frontier models versus whatever you can realistically host on your own, that is.
That you don't want to or aren't able to compare them doesn't mean they can't be compared. You do you, or more aptly, have an AI do you, since you can't be bothered.
Oh, I’ve tried. Don’t assume I haven’t.
In terms of functionality on paper it’s similar. In terms of what they can realistically do, it’s not.
In other words, it is a task an AI is better at than you.
I could have worded that differently, I apologize.
They aren't a replacement for somebody like me who doesn't have a screaming GPU.
Yes, they train on input. I don't like it either. It's not just creepy; I'm sure it breaks privacy laws everywhere.
Regardless, you've already decided who I am so I don't see this conversation being productive.
I again apologize for not making my previous comment more straightforward.
Oh, I don't think I know who you are, I just think it's indiscernible.
You can run small LLMs that are still surprisingly good purely on modern CPUs, although I'm sure that's part of the intent behind trying to lock hardware supplies down behind the bubble.
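To make the CPU-only point concrete, here's a minimal sketch of talking to such a local model. It assumes you already have a local runner (e.g. Ollama or a llama.cpp server) listening on localhost and exposing the OpenAI-compatible `/v1/chat/completions` endpoint; the model name `qwen2.5:3b` and the port are just example values, not specific recommendations:

```python
import json
import urllib.request

def build_chat_request(prompt, model="qwen2.5:3b",
                       base_url="http://localhost:11434"):
    """Build an OpenAI-style chat completion request for a local server.

    The endpoint path and payload shape follow the widely used
    OpenAI-compatible API that local runners tend to expose.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def ask_local_llm(prompt, **kwargs):
    """Send the prompt to the local server and return the reply text."""
    req = build_chat_request(prompt, **kwargs)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Nothing here leaves your machine: the "API key" dance disappears entirely, and inference speed is just a function of how many CPU threads and how much RAM you can throw at a small quantized model.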