this post was submitted on 11 Mar 2026
55 points (98.2% liked)
Technology
GLM 4.5 is from August. Isn't the real tl;dr that a seven-month-old open model, which was behind proprietary models at the time, did better than most humans would?
The task described in this article is asking questions about a document that was provided to the LLM in its context.
I would hope that a human given a text and asked to cite facts from it would do better than 99% correct.
Also, once the context exceeded 200k tokens, the LLM's error rate rose above 10%.
That’s literally what school exams are about, isn’t it?
The context window is a problem for all LLMs, though. It's not easily solved, but it can be worked around to a certain extent.
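For what it's worth, a minimal sketch of one common workaround: instead of stuffing the whole document into the context, split it into chunks and pass along only the chunks most relevant to the question. The function names, chunk size, and word-overlap scoring here are illustrative assumptions, not anything from the article; real setups usually score chunks with embeddings instead.

```python
def chunk_text(text, chunk_size=200):
    """Split text into chunks of roughly `chunk_size` words each."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score_chunk(chunk, question):
    """Naive relevance score: how many question words appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def select_context(text, question, max_chunks=3, chunk_size=200):
    """Return only the most relevant chunks, so the prompt stays
    well under the model's context window regardless of document size."""
    chunks = chunk_text(text, chunk_size)
    ranked = sorted(chunks, key=lambda c: score_chunk(c, question),
                    reverse=True)
    return "\n---\n".join(ranked[:max_chunks])
```

The trade-off is exactly the "to a certain extent" part: if the relevant fact is spread across chunks the scorer misses, the model never sees it, so this trims the context at the cost of possible recall errors.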