Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [general]- and [lemmy]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of others, with no real value, it will be removed. Do this too often and you will get a vacation to touch grass, away from this community, for one or more days. Repeat offenses will result in a perma-ban.
6. Defend your opinion
This is a bit of a mix of rules 4 and 5 to help foster higher quality posts. You are expected to defend your unpopular opinion in the post body. We don't expect a whole manifesto (please, no manifestos), but you should at least provide some details as to why you hold the position you do.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
Disagree.
In fact, there are signs that extensive "user preference" training is deep-frying models, so they score better in settings like LM Arena but get worse at actual work. See: ChatGPT 5.2, Gemini 2.5 Experimental before it regressed at release, Mistral's latest deepseek-arch release, Qwen3's reduction in world knowledge vs 2.5, and so on.
Also, they don't train themselves or learn on the go; that's all done manually.
No, but... The execs are drinking a lot of Kool-Aid, at least going by their public statements and behavior. Zuckerberg, for example, has completely gutted his best AI team, the literal inventors of modern open LLM infrastructure, for a bunch of tech bros with egos bigger than their contributions. OpenAI keeps making desperate short-term decisions instead of (seemingly) investing in architectural experimentation, giving everyone an easy chance to catch them. Google and Meta are poisoning their absolute best data sources, and it's already starting to bite Meta.
Honestly, I don't know what they're thinking.
...I think the bubble will drag on for some time.
But I'm a massive local LLM advocate, and I'm telling you: it's a bubble. These transformers(ish) LLMs are useful tools, not human replacements that will scale infinitely. That last bit is a scam.
They are thinking that if you get the hype train going fast enough, nothing will slow it down, and by the time everyone realizes it's all bullshit, you already have your third yacht and have cashed out.
US tech is driven by Boomer investors with too much money, too much greed, and too little education.