I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
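To make the "no more taxing than a video game" point concrete, here is a rough back-of-envelope sketch in Python. The wattage, reply length, and generation speed are illustrative assumptions about a consumer GPU running a local model, not measured figures.

```python
# Back-of-envelope comparison: energy for one local LLM reply vs. an hour of gaming.
# All numbers below are illustrative assumptions, not measurements.

GPU_POWER_W = 300           # assumed full-load draw of a consumer gaming GPU, in watts
TOKENS_PER_REPLY = 500      # assumed length of one chatbot-style reply
TOKENS_PER_SECOND = 30      # assumed local generation speed for a mid-sized model
GAMING_SESSION_HOURS = 1.0  # one hour of gaming at the same GPU draw

# Energy for one reply: power (W) * time (h) = watt-hours
reply_seconds = TOKENS_PER_REPLY / TOKENS_PER_SECOND
reply_wh = GPU_POWER_W * (reply_seconds / 3600)

# Energy for the gaming session
gaming_wh = GPU_POWER_W * GAMING_SESSION_HOURS

print(f"One local LLM reply : ~{reply_wh:.2f} Wh ({reply_seconds:.0f} s at {GPU_POWER_W} W)")
print(f"One hour of gaming  : ~{gaming_wh:.0f} Wh")
print(f"Replies per gaming hour (energy-equivalent): ~{gaming_wh / reply_wh:.0f}")
```

Under these assumptions a single local reply costs on the order of a watt-hour, while an hour of gaming costs a few hundred; the exact ratio obviously shifts with model size, hardware, and usage patterns.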
Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.
It is not consistently usable for coding. If you are hoping this slop-producing machine will be reliably useful for anything, you are sorely mistaken. These things are best suited to applications where unreliability is acceptable.
Do you not see the obvious contradiction here? If you are sure that this is not going to get better and that it's not profitable, then you have nothing to worry about long-term when it comes to careers being replaced by AIs.
Google did this intentionally as part of enshittification.
So read and learn.
Fair enough. It's not going to get better because the fundamental problem is that AI, as represented by, say, ChatGPT, doesn't know anything. It has no understanding of anything it's "saying". Therefore, any results derived from ChatGPT or its equivalents will need to be double-checked in any serious endeavor. So, yes, it can poop out a legal brief in two seconds, but it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else. That, the core of it, will never get better. It might get faster. It might "sound" "more human". But it won't get better.
Well, tell that to the half a million people laid off in the last couple of years. The damage is done. Also, the bubble is still growing, and if you haven't noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.
Well, yes. Every company that has chosen to promote and focus on AI has done so intentionally. That doesn't mean it's good. If AI weren't the all-hype vaporware it is, this wouldn't have been an option. If OpenAI had been honest about it and said "it's very interesting and we're still working on it" instead of "it's absolutely going to change the world in six months", this wouldn't be the unusable shitpile it is.
I don't think we disagree that much.
But still, I cringe when someone implies open-model locally-hosted AIs are environmentally problematic. They have no sense of scale whatsoever.
Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop, but so did the dotcom bubble, and we still have the internet.
No, I think enshittification started well before 2022 (ChatGPT). Sure, even before that, LLMs were churning out SEO garbage webpages that Google was returning in search results, so you can blame AI in that regard -- but I don't believe for a second that Google couldn't have found a way to filter those kinds of results out. The user-negative feature was profitable for them, so they didn't fix it. If LLMs hadn't been around, they would have found other ways to make search more user-negative (and they probably did indeed employ such techniques).