you are viewing a single comment's thread
view the rest of the comments
I recently tried using an LLM to find out whether a niche issue in my thesis had already been discussed in the literature. I fed it extremely specific prompts, specific enough that it coughed up results that looked so similar to my problem that I initially thought it had actually found literature on my question. The problem: the literature either did not exist at all, even though the authors it was attributed to are real contributors to my field, or it did exist but did not contain the answer the LLM gave. I know because I had read literally every paper the LLM spat out that actually exists. These machines are okay at simple tasks like giving a general overview of the current literature in a field, but they fail miserably at anything more specific than that.
The way I think about it is: the more frequently the correct answer to a question has been given on the internet, the more likely an LLM is to give that correct answer. So it's pretty reliable on surface-level questions in a vast array of fields. But the more specific and niche you get, the less explored the topic you're asking about, and the more likely it is to just make stuff up.
Trust me, it's like this for every field: geology, programming, history, story writing, philosophy.
I have made use of it, and I do use it regularly, but not acknowledging that it's fucking shit and should not be put near any serious work without the utmost scrutiny is a joke.
And I believe the propagators of AI either lack the skills needed to tell how bad it actually is, or want to believe otherwise because it makes things so much easier for them.
I ask it questions about the Godot game engine now and then, and 100% of the time it will make something up that requires me to untangle its response.
LLMs are a remarkable improvement on Google's “I'm feeling lucky” button.