In defense of people who say LLMs are not intelligent: they probably mean that LLMs are not sapient, and I think they're loosely correct if you take the everyday word "intelligent" to mean something different from the "Intelligence" in "Artificial Intelligence".
'Intelligence' requires understanding, and understanding requires awareness. Nothing currently called "AI" shows either, and maybe nothing ever will. Again, why not use a different word, one that actually applies to these advanced calculators? Giving humanity the benefit of the doubt, it may be the appeal of the added pizzazz and the excitement that comes with it, or simple semantic confusion... but looking at the people behind it all, it's probably so the dummies get overly excited and buy stuff/make these bots integral parts of their lives. 🤷
The term "Artificial Intelligence" has been around for a long time, 25 years ago AI was an acceptable name for NPC logic in videogames. Arguably that's still the case, and personally I vastly prefer "Artificial Intelligence" to "Broad Simulation Of Common Sense Powered By Von Neumann Machines".
The overuse (and overtrust) of LLMs has made me feel ashamed to refer to video game NPCs as AI, and I hate it. There was nothing wrong with it. We all understood the AI's abilities to be limited to specific functions. I loved when Forza Horizon introduced "Drivatar" AI personalities of actual players, modeled on how those players actually drive. Now it's a vomit term for shady search engines and confused visualizers.
I don't share the feeling. I'll gladly tie an M$ shareholder to a chair, force them to watch me play Perfect Dark, and say "man, I love these AI settings, I wish they made AI like they used to".
I remember when “heuristics” were all the rage. Frankly, that’s what LLMs are: advanced heuristics. “Intelligence” is nothing more than marketing bingo.
Usually the reason we want people to stop calling LLMs AI is that a giant marketing machine has been constructed, designed to trick laymen (and successfully so) into believing that LLMs are adjacent to AGI and just one tiny breakthrough away from becoming it.
From another angle, your statement that AI is not a specific term is correct. Why, then, should we keep using it in common parlance when it just serves to confuse laymen? Let's just use the more specific terms.
What they’re not designed to do is give factual answers
or mental health therapy
So... not intelligent. In the sense that when someone without enough knowledge of computers and/or LLMs hears "LLMs are intelligent" and then sees "an LLM tells me X", they will likely believe that X is true, and not without reason. That is exactly my main objection to all the intelligence-related terms. When they're used by knowledgeable people who do know the difference, yeah, I'm all for that. But first we need to cut the crap of advertisement and hype.
"Intelligent" is itself a highly unspecific term which covers quite a lot of different things.
What you're thinking of is "reasoning" or "rationalizing", and LLMs can't do that at all.
However, what LLMs (and most machine learning implementations) can do is "pattern matching", which is also an element of intelligence: it's what gives us and most animals the ability to recognize things such as food or predators without actually thinking about it (you just see, say, a cat, and you know without thinking that it's a cat, even though cats don't all look the same). In humans, it's also what's behind intuition.
PS: Ever since they were invented over three decades ago, neural networks and other machine learning technologies have been very good at finding patterns in their training data - often better than humans.
The evolution of the technology has added the capability of generating content that follows those patterns, giving us things like LLMs and image generation.
What LLMs have made clear, however, is that patterns alone (plus a little randomness to vary the results) are not enough to generate textual content that is useful beyond entertainment, and that's exactly because LLMs can't rationalize. The original pattern matching without the content generation is still widely and very successfully used, though, in everything from OCR to image recognition.
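To make the "pattern matching without reasoning" point concrete, here's a minimal sketch (my own toy example, assuming scikit-learn and its bundled 8x8 digits dataset; nothing in it comes from this thread): a small classifier learns to recognize handwritten digits purely from statistical regularities in the pixels, with no reasoning step anywhere.

```python
# Toy illustration of pattern matching without reasoning:
# a linear classifier "recognizes" handwritten digits from pixel patterns alone.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1797 small 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=2000)  # no reasoning, just fitted weights
clf.fit(X_train, y_train)                # learn which pixel patterns go with which digit

print("test accuracy:", clf.score(X_test, y_test))  # typically above 0.9
```

Swap the digits for cat photos and the linear model for a bigger network, and you get the "you just see a cat and know it's a cat" example, scaled up.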
And "intelligence" itself isn't very well defined either. So the only word that remains is "artificial", and we can agree on that.
I usually try to avoid the word "AI". I'll say "LLM" if I talk about chatbots, ChatGPT etc. Or I use the term "machine learning" when broadly speaking about the concept of computers learning and doing such things. It's not exactly the same thing, though. But when reading other people's texts I always think of LLMs when they say AI, because that's currently what they mean almost every time. And AGI is more sci-fi as of now, so it needs some disclaimers and context anyway.
In computer science, the term AI at its simplest just refers to a system capable of performing any cognitive task typically done by humans.
That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.
To add to the confusion, you also have people out there thinking it's "Al" or "A1". It's a real mess.
I can't wait to see what A2 can do!
We've been waiting for that since 1824!
Really? Like the steak sauce? I guess I should have seen that coming, since the '00s motorcycle communities have kept asking about their F1 light. Fuel 1njection
I still think intelligence is a marketing term or simply a misnomer here. An LLM is basically an advanced calculator. Intelligence questions, creates rules from nothing, transforms raw data from reality into ideas, has its own volition... The same goes for a chess engine, of course; it's just more visible there because it spits out chess moves instead of text. Intelligence and consciousness don't seem to be computational processes.
I could follow everything you said up until the conclusion. If consciousness is not computational, then what is going on in our brains instead? I know, of course, that even neuroscientists don't know exactly, but just in broad principle: I always thought our brains are still doing computation, just by a different method than computers. I don't mean to be contrarian, I'm just genuinely curious what other kind of process could support consciousness?
I'm not gonna claim to 'know' things here, and I'm too groggy to even attempt a satisfying answer, but: applied formal logic, as seen in any machine built on logic gates, is just an expression/replication of simplified thought, not a copy of our base mental processes. The mind understands truths that cannot even be formalized or derived, such as axiomatic truths. Even if something can be understood and predicted, it doesn't mean the process could be written down in code. It certainly isn't today...
My understanding of the topic is closer to Roger Penrose's postulates so please check this wiki page and maybe watch a couple of vids on the topic, I'm just a peasant with a hunch when it comes to "AI". 🤷
You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.
A calculator doesn’t qualify because it runs "fixed code" with no learning or generalization. There's no flexibility to it. It can't adapt.
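A toy contrast, if it helps (invented purely for illustration; the function name and data are mine, and it assumes numpy and scikit-learn): the "calculator" below is fixed code that never changes, while the learned model infers the rule from a handful of noisy examples and then generalizes to an input it has never seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def calculator_double(x):
    return 2 * x  # fixed rule: never learns, never adapts

# Learned alternative: infer "roughly y = 2x" from noisy examples.
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([2.1, 3.9, 6.2, 7.8])
model = LinearRegression().fit(X_train, y_train)

print(calculator_double(10))    # exactly 20, by construction
print(model.predict([[10.0]]))  # roughly 20, generalized from the examples
```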
Very good explanation. And important distinctions.
There's also a philosophical definition, which I think is hotly contested, so depending on your school of thought, your view of whether an LLM counts as AI can vary. Many people take issue over questions like whether it has a mind, thinks, or has consciousness.
What would you call systems that are used for discovery of new drugs or treatments? For example, companies using "AI" for Parkinson's research.
Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
Is there a specific name? Or just "non-LLM ML systems"?
They're generally just referred to as "deep learning" or "machine learning". The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
Does that include systems used for "correlation science"? Things like "people that are left-handed and eat sardines are more likely to develop eyebrow cancer". Also genetic correlations for odd things like musical talent?
Edit: in other words, searches that look for correlations in hundreds of thousands of parameters.
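For what it's worth, that kind of correlation screen can be sketched in a few lines (toy data and invented column names; assumes pandas and scipy): test every candidate variable against one outcome, then apply a multiple-testing correction so that sardines-and-eyebrow-cancer style flukes get filtered out.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_features = 500, 1000  # stand-in for "hundreds of thousands" of parameters

# Purely synthetic traits and outcome, so any "hits" here are noise by construction.
features = pd.DataFrame(
    rng.normal(size=(n_people, n_features)),
    columns=[f"trait_{i}" for i in range(n_features)],
)
outcome = rng.normal(size=n_people)

results = []
for col in features.columns:
    r, p = stats.pearsonr(features[col], outcome)
    results.append((col, r, p))

# Bonferroni correction: with this many tests, raw p < 0.05 hits are mostly flukes.
threshold = 0.05 / n_features
hits = [(col, r) for col, r, p in results if p < threshold]
print(f"{len(hits)} correlations survive correction (expect ~0 for pure noise)")
```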
The way you describe LLMs sounds exactly like a large portion of the humans I see.