this post was submitted on 07 Apr 2026

Fuck AI

[–] BluesF@lemmy.world 1 points 1 day ago (1 children)

I'm not talking about numerical data. The way LLMs work is to find a "most likely response" based on the input text. There is absolutely maths happening inside the model; how else do you think they work? I'm not saying they take numbers and find an average.
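The "most likely response" idea can be sketched as a softmax over next-token scores. This is a toy illustration, not a real model: the vocabulary and logit values below are invented for the example.

```python
import math

def softmax(scores):
    # Convert raw scores (logits) into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might assign to candidate next tokens
# after the prompt "2 + 2 =".
vocab = ["4", "9", "fish"]
logits = [5.0, 1.0, -2.0]

probs = softmax(logits)
prediction = vocab[probs.index(max(probs))]
print(prediction)  # "4" is emitted because it scored highest, not because anything was added
```

The point of the sketch: the output is whichever token the learned scores favour, which is where all the "maths happening inside the model" lives.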

[–] GreenKnight23@lemmy.world 1 points 19 hours ago (1 children)

LLMs are trained on language-based content. An LLM doesn't know how to extract answers from mathematical problems; it only gives approximations based on its input, and it can also be trained wrong by user-supplied data.

To a purely mathematical logical operator, 2+2=4.

An LLM told that 2+2=9 will then always respond with 2+2=9.


LLMs don't count because they can't count. Without the ability to count, they can never understand the proof behind mathematical formulas.
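The "trained wrong" claim above can be made concrete with a toy frequency model: it just echoes whichever completion dominated its training text, with no arithmetic anywhere. The corpus below is hypothetical and much simpler than real LLM training, but the failure mode is the same in spirit.

```python
from collections import Counter

def train(corpus):
    # Count which completion follows each prompt in the training text.
    table = {}
    for prompt, completion in corpus:
        table.setdefault(prompt, Counter())[completion] += 1
    return table

def predict(table, prompt):
    # Return the completion seen most often in training; nothing is computed.
    return table[prompt].most_common(1)[0][0]

# If the (hypothetical) training data mostly says 2+2=9, so does the model.
corpus = [("2+2=", "9"), ("2+2=", "9"), ("2+2=", "4")]
model = train(corpus)
print(predict(model, "2+2="))  # "9"
```

A frequency table is far cruder than a neural network, but it shows why garbage in the training text can become garbage in the output.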

[–] BluesF@lemmy.world 1 points 34 minutes ago

Yes, I understand that; you are not understanding what I'm describing. I am not talking about taking an average of numerical data. LLMs take something that can be thought of as an "average" of text: given all the text the model has seen, and this new input, what is the most likely output? In some numerical contexts the expected value is also an average, and LLMs find a similar kind of result; that is the parallel I am drawing here.
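The parallel between a numeric expected value and a most-likely text output can be shown side by side. The distribution below is invented for illustration; real models have vocabularies of tens of thousands of tokens.

```python
# A toy next-token distribution over numeric tokens.
dist = {"3": 0.1, "4": 0.7, "5": 0.2}

# Expected value: the probability-weighted average of the numeric values.
expected = sum(float(tok) * p for tok, p in dist.items())

# What a greedy decoder actually emits: the single most likely token.
most_likely = max(dist, key=dist.get)

print(round(expected, 2))  # 4.1
print(most_likely)         # "4"
```

Both quantities come from the same distribution; one summarises it as a weighted average, the other picks its mode, which is the sense in which the two ideas run in parallel.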