this post was submitted on 08 Jun 2025
825 points (95.4% liked)

Technology


LOOK MAA I AM ON FRONT PAGE

top 50 comments
[–] FourWaveforms@lemm.ee 2 points 1 day ago* (last edited 1 day ago)

WTF does the author think reasoning is

[–] SoftestSapphic@lemmy.world 97 points 2 days ago (5 children)

Wow, it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

[–] zbk@lemmy.ca 22 points 2 days ago

This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation regarding AI, and maybe even the murder of their whistleblower.

[–] aidan@lemmy.world 2 points 1 day ago

And engineers who stood to make a lot of money

[–] billwashere@lemmy.world 49 points 2 days ago (12 children)

When are people going to realize that, in its current state, an LLM is not intelligent? It doesn’t reason. It does not have intuition. It’s a word predictor.

[–] x0x7@lemmy.world 9 points 2 days ago* (last edited 2 days ago) (1 children)

Intuition is about the only thing it has. It's a statistical system. The problem is it doesn't have logic. We assume that because it's computer-based it must be more logic-oriented, but it's the opposite. That's the problem. We can't get it to do logic very well because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well when two segments are near each other in embedding space, which doesn't guarantee relevance. So the model ends up weighing all of that info, relevant or irrelevant, into a weighted feeling for the next token.

This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.
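To illustrate that "weighted feeling", here is a toy sketch of similarity-based next-token scoring. The vocabulary and embeddings below are invented for illustration; no real model works at this scale or this simply.

```python
import numpy as np

# Toy vocabulary with hand-made 2-D "embeddings" (made up for this example).
embeddings = {
    "bank":  np.array([0.9, 0.1]),
    "river": np.array([0.8, 0.2]),   # close to "bank" in this toy space
    "loan":  np.array([0.7, 0.3]),   # also close, relevant or not
    "tuba":  np.array([0.0, 1.0]),   # far away
}

def next_token_weights(context_vec):
    """Score every candidate purely by cosine similarity to the context,
    then softmax: everything nearby gets probability mass, relevant or not."""
    sims = {tok: float(context_vec @ vec /
                       (np.linalg.norm(context_vec) * np.linalg.norm(vec)))
            for tok, vec in embeddings.items()}
    exps = {tok: np.exp(s) for tok, s in sims.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

print(next_token_weights(np.array([0.85, 0.15])))
# "river" and "loan" both score high here, whether or not the context is about money.
```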

Of course, this issue of masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don't, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, not only treat association as always relevant but sometimes as equivalence. E.g.: racism is associated with Nazism, which is associated with patriarchy, which is historically related to the origins of capitalism, ∴ Nazism ≡ capitalism. Meanwhile, national socialism was anti-capitalist. Associative thinking removes nuance, and sadly some people think this way. And they 100% can be replaced by LLMs today, because the LLM at least mimics what logic looks like better, even though it is still built on blind association. It just has more blind associations, and fine-tuned weighting for summing them, than a human does. So it can carry that masquerade of logic further than a human riding the associative thought train can.

[–] Slaxis@discuss.tchncs.de 4 points 1 day ago

You had a compelling description of how ML models work and just had to swerve into politics, huh?

[–] technocrit@lemmy.dbzer0.com 29 points 2 days ago* (last edited 2 days ago) (1 children)

Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

[–] x0x7@lemmy.world 8 points 2 days ago* (last edited 2 days ago) (1 children)

Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I've given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

[–] I_Has_A_Hat@lemmy.world 5 points 2 days ago* (last edited 2 days ago) (1 children)

By many definitions, reasoning IS just a form of pattern recognition, so the lines are definitely blurred.

[–] Mniot@programming.dev 42 points 2 days ago

I don't think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called "complex") puzzles, like Towers of Hanoi but with 25 discs.

The solution to these puzzles is nothing but patterns. You can write code that will solve the Tower puzzle for any size n and the whole program is less than a screen.
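For instance, a complete recursive solver is only a few lines (a sketch for illustration, not the paper's own test harness):

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Yield every move for an n-disc Tower of Hanoi puzzle."""
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)   # clear the top n-1 discs out of the way
    yield (source, target)                           # move the largest disc
    yield from hanoi(n - 1, spare, target, source)   # re-stack the n-1 discs on top

print(sum(1 for _ in hanoi(10)))  # 1023 moves; n discs always take 2**n - 1
```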

The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don't have an answer for why this is, but they suspect that the reasoning doesn't scale.

[–] minoscopede@lemmy.world 66 points 2 days ago* (last edited 2 days ago) (13 children)

I see a lot of misunderstandings in the comments 🫤

This is a pretty important finding for researchers, and it's not obvious by any means. This finding is not showing a problem with LLMs' abilities in general. The issue they discovered is specifically for so-called "reasoning models" that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that's a flaw that needs to be corrected before models can actually reason.
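A toy sketch of what outcome-only reward looks like (hypothetical function and names; real RL fine-tuning pipelines are far more involved than this):

```python
def outcome_only_reward(chain_of_thought: str, final_answer: str, gold_answer: str) -> float:
    """Reward depends solely on the final answer; the reasoning trace is never inspected,
    so a lucky guess and a sound derivation earn exactly the same reward."""
    return 1.0 if final_answer.strip() == gold_answer.strip() else 0.0

# The flawed intermediate steps below never affect the score:
print(outcome_only_reward("2 + 2 = 5, so the total must be 6", "6", "6"))  # 1.0
```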

[–] Knock_Knock_Lemmy_In@lemmy.world 16 points 2 days ago (5 children)

When given explicit instructions to follow, models failed because they had not seen similar instructions before.

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

[–] REDACTED@infosec.pub 12 points 2 days ago* (last edited 2 days ago) (4 children)

What confuses me is that we seemingly keep moving the goalposts on what counts as reasoning. Not too long ago, some smart algorithms or a bunch of if/then instructions in software officially counted, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? I feel like at this point the more relevant question is "What exactly is reasoning?". Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

https://en.wikipedia.org/wiki/Reasoning_system
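For reference, the kind of rule-based system that article describes can be sketched in a few lines of forward chaining (the rules here are invented examples):

```python
# Minimal forward-chaining inference: keep applying if/then rules until no new facts appear.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "has_admirers"}, "statue_gets_built"),
]
facts = {"socrates_is_human", "has_admirers"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # derive a new fact from satisfied premises
            changed = True

print(facts)  # now includes "socrates_is_mortal" and "statue_gets_built"
```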

[–] theherk@lemmy.world 15 points 2 days ago

Yeah, these comments have the three hallmarks of Lemmy:

  • "AI is just autocomplete" mantras.
  • Apple is always synonymous with bad and dumb.
  • Rare pockets of really thoughtful comments.

Thanks for at least being the last of those.

[–] technocrit@lemmy.dbzer0.com 6 points 2 days ago* (last edited 2 days ago)

There's probably a lot of misunderstanding because these grifters intentionally use misleading language: AI, reasoning, etc.

If they stuck to scientifically descriptive terms, it would be much more clear and much less sensational.

[–] burgerpocalyse@lemmy.world 3 points 1 day ago

hey, I can't recognize patterns, so they're smarter than me at least

[–] Nanook@lemm.ee 229 points 3 days ago (54 children)

lol, is this news? I mean, we call it AI, but it’s just an LLM and variants; it doesn’t think.

[–] MNByChoice@midwest.social 77 points 3 days ago (1 children)

The "Apple" part. CEOs only care what companies say.

[–] kadup@lemmy.world 51 points 3 days ago (5 children)

Apple is significantly behind and arrived late to the whole AI hype, so of course it's in their absolute best interest to keep showing how LLMs aren't special or amazingly revolutionary.

They're not wrong, but the motivation is also pretty clear.

[–] mavu@discuss.tchncs.de 58 points 3 days ago

No way!

Statistical language models don't reason?

But OpenAI, robots taking over!

[–] skisnow@lemmy.ca 26 points 2 days ago (1 children)

What's hilarious/sad is the response to this article over on Reddit's "singularity" sub, where all the top comments are from people who've obviously never made it all the way through a research paper in their lives, trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.

[–] FreakinSteve@lemmy.world 20 points 2 days ago (4 children)

NOOOOOOOOO

SHIIIIIIIIIITT

SHEEERRRLOOOOOOCK

[–] melsaskca@lemmy.ca 9 points 2 days ago (1 children)

It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".

[–] RampantParanoia2365@lemmy.world 18 points 2 days ago* (last edited 2 days ago) (2 children)

Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

AI is not A I. I should make that a t-shirt.

[–] Jhex@lemmy.world 49 points 3 days ago (1 children)

This is so Apple, claiming to invent or discover something "first" three years later than the rest of the market.

[–] Harbinger01173430@lemmy.world 8 points 2 days ago

XD So, like a regular school/university student who just wants to get passing grades?

[–] bjoern_tantau@swg-empire.de 36 points 3 days ago* (last edited 1 day ago)
[–] brsrklf@jlai.lu 47 points 3 days ago (2 children)

You know, despite not really believing LLM "intelligence" works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...

But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (on which they were trained) and failing 4-move river crossings, even though, logically, those problems are very similar... They also failed to apply a step-by-step solution they were given.
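For comparison, even the classic wolf-goat-cabbage river crossing (one common variant of these puzzles; not necessarily the study's exact setup) falls to a tiny brute-force search:

```python
from collections import deque

ITEMS = frozenset({"wolf", "goat", "cabbage"})
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that can't be left unattended

def is_safe(bank):
    """A bank without the farmer is safe iff it holds no unsafe pair."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    start = (ITEMS, True)           # (items on the left bank, farmer on the left?)
    goal = (frozenset(), False)     # everything across, farmer on the right
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        here = left if farmer_left else ITEMS - left
        for cargo in (None, *here):               # cross alone or carry one item
            new_left = set(left)
            if cargo is not None:
                (new_left.remove if farmer_left else new_left.add)(cargo)
            new_left = frozenset(new_left)
            unattended = new_left if farmer_left else ITEMS - new_left
            state = (new_left, not farmer_left)
            if is_safe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))
    return None

print(solve())  # one 7-crossing solution, e.g. goat, nothing, wolf, goat, cabbage, nothing, goat
```

Each state is just which items sit on which bank plus where the farmer is, so the whole search space has at most 16 states.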
