this post was submitted on 12 May 2026
592 points (98.4% liked)

Facepalm

[–] glimse@lemmy.world 1 points 1 day ago (1 children)

I thought my first comment made it clear that I'm very much not pro-AI, so I'm not sure why I'm getting a lecture on anthropomorphizing it.

But there's still a lot wrong with your comment. You're assuming that the data set never changes and the parameters are never tweaked, which is wildly untrue. Answers like this are not a common occurrence anymore, because when a new one pops up, companies have a vested interest in updating the system instructions.

I'm not saying the summaries are good now, just that most of the outrageous answers have long since been fixed.

[–] Tippy@sh.itjust.works 3 points 1 day ago (1 children)

It wasn't a lecture, nor was it a personal attack on you. Your comment didn't anthropomorphize LLMs, so I'm not sure how you interpreted it as me coming at you. Based on what we've each commented so far, the only place we disagree on this topic is over whether misinformation is inherently a byproduct of how LLMs currently function.

You may not be as neutral on this topic as you claim if a response like mine felt offensive. It was a fairly predictable counterargument, and I'm not even the only one who made it in the replies.

[–] glimse@lemmy.world 3 points 23 hours ago

Well, you did reply to me. I didn't realize you had turned to the audience for the second paragraph.