this post was submitted on 13 Mar 2026
122 points (94.9% liked)
Technology
What exactly is this for? I understand LLMs have their limits with understanding physical reality, but at least they have a use case: theoretically automating the "symbolic work" (i.e., moving symbols around on a screen or a piece of paper) that white-collar workers do.
Yes, an LLM will never be able to cook a meal or change a lightbulb, but neither will this without a significant advance in robotics to embody the AI. So what's the use case? Being able to tell you how to throw a ball better than a person can?
World models aren't just for robotics (though they definitely WILL be used for that). They're for reasoning under uncertainty in domains where you can't see the outcome in advance. For example:
Medical diagnosis: you can't physically "embody" whether a treatment will work. But a system that understands disease progression, drug interactions, and physiological constraints (not by pattern-matching text, but by learning causal structure) - well, that's fundamentally different from an LLM hallucinating plausible-sounding symptoms.
Financial modeling, engineering simulations, climate prediction...all domains where the "embodied experience" is simulation, not physical interaction. You learn how the world actually works by understanding constraints and causality, not by predicting the next token in a Bloomberg article.
The point isn't "robots will finally work." The point is: understanding causality is cheaper in the long run and more reliable than memorizing correlations. Embodiment is just the training signal that forces you to learn causality instead of surface patterns.
My read is that LeCun is betting that a system trained to predict abstract state transitions in any domain (be it medical, financial, or physical) will generalize better and hallucinate less than one trained to predict text.
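To make that bet concrete, here's a toy sketch (my own illustration, nothing to do with LeCun's actual architecture): the "world model" is just a learned transition function f(state, action) -> next_state. We fit it on a handful of transitions from hidden linear dynamics, then check that it predicts a transition far outside anything it saw during training. The point is that learning the transition *rule* generalizes, where memorizing (state, next_state) pairs would not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" dynamics the learner never sees directly: s' = A s + B a.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.5],
                   [1.0]])

def step(s, a):
    """One step of the true environment."""
    return A_true @ s + B_true @ a

# Collect training transitions (s, a, s') from random states and actions.
S, A_act, S_next = [], [], []
for _ in range(200):
    s = rng.normal(size=2)
    a = rng.normal(size=1)
    S.append(s)
    A_act.append(a)
    S_next.append(step(s, a))
S, A_act, S_next = map(np.array, (S, A_act, S_next))

# Fit the transition model by least squares: s' ~ [A_hat B_hat] @ [s; a].
X = np.hstack([S, A_act])                      # shape (200, 3)
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

# Evaluate on a state/action pair far outside the training distribution.
s_test = np.array([10.0, -10.0])
a_test = np.array([3.0])
pred = A_hat @ s_test + B_hat @ a_test
err = np.linalg.norm(pred - step(s_test, a_test))
print(f"prediction error on unseen transition: {err:.2e}")
```

Obviously real world models deal with nonlinear, partially observed, high-dimensional dynamics and learn the state representation itself; this only shows why predicting transitions (rather than memorizing outcomes) is the thing that buys generalization.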
Whether that's true? Fucked if I know - that's why it's (literally) the billion-dollar question. If he cracks it....it's big.
But "it won't cook dinner" misses the point (and besides which, it might actually cook dinner and change lightbulbs, so....)