this post was submitted on 23 Feb 2026
57 points (83.5% liked)
you are viewing a single comment's thread
To belabor the chess analogy: I would say a chessbot didn't work if it randomly caused pieces to appear. Or if it made exceedingly lousy moves. You'd apparently say it was working because it technically changed the board.
Literally nobody is saying the token predictor isn't predicting tokens. It's just predicting the wrong tokens, which normal people call "not working," while tech evangelists prefer to call it "hallucination" or "misalignment," depending on the narrative they're aiming for.
The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it's saying, it's working - even if the content of what it says is factually inaccurate.
Accuracy is the only thing people want, and the only thing AI companies talk about. The text has been legible for years. I think you're alone in your quest to lower the bar for the word "works."