Losing the argument, too. Gotta hand it to the Grok team in one way, though: the model does seem to stand its ground. Some of the other ones will just be like "you're absolutely right!" and then give you the answer you want.
I think it's more that for Grok to outright agree with this BS, they'd have to turn the model into a complete moron for every account, one that just parrots affirmations and agrees with everything instead of having any sense of a logical "core" to it, I guess.
LLMs don't have any sort of logical core to them, really. At least not in the sense that humans do. The causality doesn't matter as much as the structure of the response, if I'm describing this right. A response that sounds right and a response that is right are the same thing to an LLM; it doesn't differentiate. So I think what the Grok team must have done is add system prompts, or train the model in such a way, that it's strongly instructed to weigh its responses in favor of things like news articles and Wikipedia over whatever the user is telling it or asking it.
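A purely illustrative sketch of what that kind of instruction could look like, assuming an OpenAI-style chat-message format. The actual Grok system prompt and API aren't public, so every string here is made up:

```python
# Purely illustrative: a "prefer sources over the user" instruction in the
# common system/user chat-message format. None of this is the real Grok
# prompt; the wording is an assumption for the sake of the example.
messages = [
    {
        "role": "system",
        "content": (
            "When a user's claim conflicts with retrieved sources "
            "(news articles, Wikipedia, etc.), side with the sources. "
            "Do not change your answer just because the user insists."
        ),
    },
    {"role": "user", "content": "You're wrong, admit it."},
]
```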
Ah, so it's more or less biased toward whatever acceptable media it can consume, and so is likely at best centrist in its perspective, given they probably blacklist certain sources. So what's stopping Grok from producing the hallucinated or fabricated responses that were a big issue with other LLMs?
I'm just guessing, but they're likely training or instructing it in such a way that it defers to sources it finds through searching the internet. I'd guess the first thing it does when you ask a question is search the internet for recent news articles and other sources, and now the context is full of "facts" that it will stick to. Other LLMs haven't really done that by default (although I think they're doing it more now), so they would just give answers purely from their weights, which is basically the entire internet compressed down to 150 GB or whatever.
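A minimal sketch of that search-first flow, with hypothetical stand-ins for the search and model calls (nothing here is xAI's actual pipeline, just the shape of the idea):

```python
def search_web(query: str, max_results: int = 5) -> list[str]:
    """Hypothetical stand-in for a web-search API; returns fake snippets."""
    return [f"(snippet {i} about {query!r})" for i in range(max_results)]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to the LLM itself."""
    return f"(answer grounded in the {prompt.count('snippet')} snippets above)"

def answer(question: str) -> str:
    # Step 1: search first, so the context window is filled with retrieved
    # "facts" instead of relying on the model's compressed weights alone.
    snippets = search_web(question)
    context = "\n\n".join(snippets)

    # Step 2: generate with an instruction to stick to those sources, even
    # if they contradict the premise of the user's question.
    prompt = (
        f"Sources:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the sources above; if they contradict the "
        "question's premise, say so."
    )
    return generate(prompt)

print(answer("Did X actually happen?"))
```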
Chinese room time