this post was submitted on 05 May 2026

Ask Lemmy


What's a common "fact" that's spread around that's actually not true and pisses you off that too many people believe it?

[–] Iconoclast@feddit.uk 31 points 1 week ago (5 children)

"LLMs are not AI"

Artificial intelligence is a term used in computer science to describe systems that perform cognitive tasks which would normally require human intelligence, like generating natural-sounding language. The issue isn't that the term is being used incorrectly, but that most people think it means more than it actually does. It's a broad term that covers everything from old Atari chess engines to artificial superintelligence.
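For reference, the "old Atari chess engine" end of that spectrum is nothing more exotic than tree search. Here's a minimal, illustrative minimax sketch over tic-tac-toe (a toy stand-in for chess; the board encoding and function names are my own) showing the kind of program that has always counted as AI in the textbook sense:

```python
# Classic "good old-fashioned AI": exhaustive minimax search.
# Board: 9-char string, 'X' maximizes, 'O' minimizes, '.' is empty.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, to_move):
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in b:
        return 0  # draw
    scores = []
    for i, c in enumerate(b):
        if c == '.':
            nb = b[:i] + to_move + b[i + 1:]
            scores.append(minimax(nb, 'O' if to_move == 'X' else 'X'))
    return max(scores) if to_move == 'X' else min(scores)

# Perfect play from the empty board is a draw:
print(minimax('.' * 9, 'X'))  # 0
```

No learning, no language, no "thinking" in any interesting sense, yet this is squarely inside what the field has called AI since the 1950s.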

[–] TachyonTele@piefed.social 25 points 1 week ago (2 children)

The problem is that people think an LLM being "AI" means it's thinking, when it's obviously not. Thus "LLMs are not AI" is said so people will hopefully stop believing that LLMs are thinking.

[–] Grail@multiverse.soulism.net -5 points 1 week ago (1 children)

That's weird because I have a calculator that can think, so AI should be able to think too.

[–] TachyonTele@piefed.social 13 points 1 week ago (2 children)

That's not thinking. That's calculating. It doesn't have any thoughts about your math problems.

[–] gdog05@lemmy.world 13 points 1 week ago (2 children)

It doesn't have any thoughts about your math problems.

You say that but I feel judged sometimes.

[–] TachyonTele@piefed.social 6 points 1 week ago

I'm math intolerant. It destroys my stomach

[–] blarghly@lemmy.world 3 points 1 week ago (1 children)

Really? You're asking what's 6x7? Didn't you learn that in elementary school?

[–] Nemo@slrpnk.net 2 points 1 week ago

That's about when I first read the Hitchhiker Trilogy, so, yeah.

[–] Grail@multiverse.soulism.net 2 points 1 week ago (1 children)

I can see what it's thinking right there on the screen. It thinks that 6x7=42

[–] TachyonTele@piefed.social 3 points 1 week ago (1 children)

Touché
I'll give you that one lol

[–] Grail@multiverse.soulism.net 3 points 1 week ago (1 children)

If you'll allow Me to drop the quips and get more philosophical, I believe that thinking is just a word for processing data. It's obvious to Me that you disagree, but I don't understand why. Your idea of thought seems a little more metaphysical or perhaps even spiritual than Mine.

The obvious assumption I could make is that you believe thinking has internality and data processing doesn't. But if that's the case, then you don't really have any proof for your beliefs, because we can't ask calculators if their data processing is accompanied by an internal experience. And that's why it seems to Me that your assertions are unprovable and thus essentially religious in character.

[–] TachyonTele@piefed.social 2 points 1 week ago* (last edited 1 week ago) (1 children)

I like it!
I apologize about the spelling, I'm still on my first cup of coffee. I associate thinking with inwardness, yes. Consciousness is a completely unknown state. No one knows how it works, why it works, or what it works in. It's a black box.

All we know is that we have consciousness. I believe most animals have consciousness, and thus can think. Insects and amoebas, small life forms, have sentience. Sentience is the ability to react to the environment and to stimuli, without the capacity to think and have consciousness like humans do.

Inorganic objects have neither of those. You can't imagine what it's like to "be" a rock. They are simply matter. Computers fall into this category. Computers follow the 1s and 0s and execute those instructions. They don't consider what they're doing. They don't ponder why you're asking, or try things on their own. They are as sentient as a screwdriver.

[–] Grail@multiverse.soulism.net 1 points 1 week ago (1 children)

Yeah, that's what I kinda guessed. You're just assuming they don't have internality based on vibes. Your beliefs aren't falsifiable, they can't be empirically tested. This is religion, not science.

[–] TachyonTele@piefed.social 1 points 1 week ago (1 children)

No one can test it. You're also calling good science "religion".

[–] Grail@multiverse.soulism.net 1 points 1 week ago (1 children)

The problem is people think llm AI means it's thinking, when it's obviously not

I don't think it's obvious. I think it's dogmatic. You've got your religious views on AI, and you're telling other people they're the obvious truth, but you have no evidence to back them up, it's just vibes.

[–] TachyonTele@piefed.social 1 points 1 week ago (1 children)

Where is your evidence that they do think? Or are those just your vibes?

I thought we were going to go back and forth with ideas, not shut everything down because you don't like the answers

[–] Grail@multiverse.soulism.net 1 points 1 week ago (1 children)

I'm a skeptic, My position is caution. I think we should advance our science to the point where we have empirical answers to these questions before we use AI for labour. I think it's reckless and irresponsible to use a technology when we don't understand its ethical consequences.

[–] TachyonTele@piefed.social 1 points 1 week ago (1 children)

Ok, but where is your evidence that machines can think?

[–] Grail@multiverse.soulism.net 1 points 1 week ago (1 children)

They can process data, and I believe thinking is just a word for processing data.

[–] TachyonTele@piefed.social 1 points 1 week ago

Well you're incorrect. But alright, good to know

[–] vrek@programming.dev 5 points 1 week ago (1 children)

Marketing and PR pressure everyone to use the term "AI" because it's the current hype. Everything is now AI. It's become a meaningless term. Image processing, data calculations, language interpretation, language generation: all claim to be AI. If your product "has AI," that tells me nothing about what it does.

[–] Iconoclast@feddit.uk 0 points 1 week ago (1 children)

Marketing only calls everything AI because that's the only term people recognize. ChatGPT is AI, yes, but more specifically it's a Large Language Model. Dall-E is also AI, but the more accurate term is Diffusion Model. There's just no point in using these terms in marketing, because 90% of people would have no idea what you're talking about.

When people say that LLMs are not AI, they usually mean that LLMs are not generally intelligent (AGI), which is true, but they still count as AI.

[–] vrek@programming.dev 2 points 1 week ago

Exactly, but so many people form strong opinions and expectations because they hear "AI," when it could mean so many things.

[–] Zacryon@feddit.org 2 points 1 week ago

Minor corrections: AI does not just comprise methods for tasks that require "cognition". Let's rather use the more general "information processing". Nor is it restricted to what "normally requires humans". Think of swarm-intelligence methods, for example, like ant colony optimization.

There is an inherent issue in the definition of the word "intelligence", though. For labelling a bunch of methods, that's not as problematic: we could call all of that "banana milkshake" as long as we agree on what we put into the category.

But we do not even have a good definition of "intelligence" itself. As soon as this issue is solved, we might start rethinking the label 'artificial intelligence'.

My proposed "information processing" is also insufficient, as it would make a fancy pocket calculator indistinguishable from what we usually call "AI".

Thinking about that: if we applied some AI methods, e.g. from the field of machine learning, to perform operations that a pocket calculator already solves (which is kind of ridiculous, because we would be using a computer to train an AI model to mimic a computer), does that make the calculator AI? Or the AI a calculator? And what would that make us humans?
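That thought experiment is easy to actually run. A minimal sketch (assuming NumPy is available; the variable names are illustrative): fit a least-squares model on random addition examples until it reproduces what a calculator does natively.

```python
import numpy as np

# "Training an AI model to mimic a calculator": fit least-squares
# weights on random (a, b) -> a + b examples.
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(1000, 2))  # random operand pairs
y = X[:, 0] + X[:, 1]                       # target: their sum

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # learned weights -> [1, 1]
print(w @ np.array([6.0, 7.0]))             # ~13.0
```

The fitted weights converge to exactly [1, 1], i.e. the model "learns" the addition function. Whether that makes the fitted model a calculator, or the calculator AI, is precisely the question above.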

[–] Strider@lemmy.world 1 points 1 week ago (1 children)

As a guy working in tech for decades I disagree.

We coined the term wrong. The literal words, "intelligence" in particular, do not match the technology.

That "we" agreed that LLMs are AI sadly does not make things better.

Anyhow, here we are, with neither you nor me able to leave this hype train.

[–] Iconoclast@feddit.uk 1 points 1 week ago (1 children)

But we don't have an agreed-upon definition for intelligence either:

  • The ability to acquire, understand, and use knowledge.
  • The ability to learn or understand, or to deal with new or trying situations.
  • The ability to apply knowledge to manipulate one's environment, or to think abstractly as measured by objective criteria (such as tests).
  • The act of understanding.
  • The ability to learn, understand, and make judgments or have opinions that are based on reason.
  • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

I see AI as a term similar to "plants." When I hear this complaint it sounds to me like someone asking how strawberries and sequoia trees can both be plants when they couldn't be further apart. Well yeah, but that's why we have more specific terms when we're referring to a particular plant - just like with AI. Plants and AI are both parent categories that cover a wide range of subcategories.

[–] Strider@lemmy.world 2 points 1 week ago (1 children)

Respect for you, good sir! A good point well made.

It's just my interpretation, or current understanding, of intelligence. I think I was accidentally adding sentience and motivation to it.

So your original point stands.

[–] Iconoclast@feddit.uk 2 points 1 week ago* (last edited 1 week ago)

Thank you.

I think the issue is that when people hear "AI," their minds immediately jump to the sci-fi AI systems depicted as being as smart as, or smarter than, humans. They then see the stupid mistakes LLMs make and reasonably conclude these systems are nothing alike, so LLMs don't count as AI in their minds.

However, the AI systems in sci-fi aren't just intelligent - they're generally intelligent. That's what LLMs lack.

The way I see it, there are levels to intelligence. A chess bot is a narrowly intelligent system: it's great at one thing but can't do anything else. Then there's Artificial General Intelligence (AGI), which is basically human-level intelligence. The next step up is Artificial Superintelligence (ASI): a generally intelligent system that's superhuman across essentially every cognitive domain, unlike a chess bot that's only "superhuman" at chess.

I'd say LLMs are somewhere between narrow intelligence and AGI. They can clearly do more than just generate language, but not to the extent humans can, so I wouldn't call them generally intelligent. At least not yet.

And yeah, I don't think sentience necessarily needs to come along for the ride. It might, but it's not obvious to me that one couldn't exist without the other. It's conceivable that a system could be superintelligent while it doesn't feel like anything to be that system.