Okay, this is looking promising, in that the most important qualifications are plainly stated in the opening line.
The rate of hallucinations/inaccuracies “in the wild” runs about 60-80%, depending on the model being tested. Then again, that reflects average use on generalized data sets, not questions about specific documentation, so it's no surprise the “in the wild” questions show a higher rate.
This is also useful for users, as it shows that hallucinations/inaccuracies can be cut by as much as ⅔ simply by limiting an LLM to specific documentation that the user is certain contains the desired information.
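To make "limiting an LLM to specific documentation" concrete, here's a minimal sketch of grounded vs. ungrounded prompting. The documentation snippet and the `generate` stub are purely illustrative placeholders for whatever model or API you actually use:

```python
# Minimal sketch of "grounding": the model only sees documentation the user
# trusts and is told to refuse when the answer is not in that text.
# DOCS and generate() are illustrative placeholders, not anyone's real setup.

DOCS = """\
To rotate an API key, open Settings -> Security -> API Keys and click Rotate.
The old key remains valid for 24 hours after rotation.
"""

def build_grounded_prompt(question: str, docs: str) -> str:
    return (
        "Answer using ONLY the documentation below. If the answer is not "
        "in the documentation, reply \"not in the docs\".\n\n"
        f"Documentation:\n{docs}\n"
        f"Question: {question}"
    )

def build_ungrounded_prompt(question: str) -> str:
    # No context: the model falls back on whatever it "remembers".
    return f"Question: {question}"

def generate(prompt: str) -> str:
    # Placeholder: swap in your local model or API client here.
    raise NotImplementedError

if __name__ == "__main__":
    q = "How long does the old API key stay valid after rotation?"
    print(build_grounded_prompt(q, DOCS))
```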
Very interesting!
As I mentioned elsewhere (below), I am currently conducting similar testing across four different 4B models (Qwen3-4B Hivemind, Qwen3-4B-2507-Instruct, Phi-4-mini, Granite-4-3B-micro), using both grounded and ungrounded conditions. I'm aiming for 10,000 runs and am currently at 3,500.
Not to count chickens before they hatch, but at ctx 8192, hallucination flags in the grounded condition are trending toward near-zero across the models tested so far. If that holds across the full campaign, that's useful to know. If it doesn't hold, that's also useful to know.
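For anyone curious, here's a heavily stripped-down sketch of what one grounded-vs-ungrounded trial looks like conceptually. This is not the actual harness; the prompt templates, the model call, and the hallucination-flag heuristic are all toy placeholders:

```python
# Heavily simplified sketch of one grounded-vs-ungrounded trial. This is NOT
# the real harness: the prompt templates, the model call, and the
# hallucination-flag heuristic (a toy substring check) are all placeholders.
from dataclasses import dataclass

@dataclass
class Trial:
    question: str
    docs: str          # documentation known to contain the answer
    gold_answer: str   # reference answer used only for flagging

def run_trial(trial: Trial, grounded: bool, generate) -> dict:
    if grounded:
        prompt = (f"Use only this documentation:\n{trial.docs}\n\n"
                  f"Question: {trial.question}")
    else:
        prompt = f"Question: {trial.question}"
    answer = generate(prompt)  # generate() is whatever model/API is under test
    # Toy flag: count it as a hallucination if the reference never appears.
    flagged = trial.gold_answer.lower() not in answer.lower()
    return {"grounded": grounded, "answer": answer, "hallucination": flagged}
```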
I have an idea for how to make the grounded setup even more useful. Again, chickens not hatched, blah blah. I'll share what I find here if there's interest. I intend to submit the whole shooting match for peer review (TMLR or JMLR) and put it on arXiv for others to poke at.
I realize this is peak "fine, I'll do it myself" energy, but I got sick of ChatGPT's bullshit and wanted to try something to ameliorate it.
I have been saying this for a while. I'm sort of hoping we see open-source LLMs trained on a curated list of literature. It's funny that these models came out as if the makers didn't take the long-known garbage-in, garbage-out principle into account.