this post was submitted on 29 Jan 2024
437 points (84.8% liked)

Ask Lemmy


Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet. At the moment we only have LLMs (Large Language Models), which do not think on their own but can pass Turing tests (fool humans into thinking that they can think).

Imo, AI is just a marketing buzzword created by rich capitalist a-holes who already invested in LLM stocks and are now looking for a profit.

[–] PrinceWith999Enemies@lemmy.world 51 points 10 months ago (4 children)

I’d like to offer a different perspective. I’m a greybeard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

[–] NABDad@lemmy.world 16 points 10 months ago (1 children)

My AI professor back in the early '90s made the point that what we now think of as fairly routine was considered the realm of AI just a few years earlier.

I think that's always the way. The things that seem impossible to do with computers get labeled as AI; then, when the problems are solved, we don't figure we've created AI, just that we've solved that particular problem, so it doesn't seem like as big a deal anymore.

LLMs got hyped up, but I still think there's a good chance they will just be a thing we use, and the AI goal posts will move again.

[–] Nemo@midwest.social 8 points 10 months ago

I remember when I was in college, and the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

[–] Rikj000@discuss.tchncs.de 3 points 10 months ago (3 children)

But what do you call a robot that teaches itself how to walk

In its current state, I'd call it ML (Machine Learning).

A human defines the desired outcome, and the technology "learns itself" toward that outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself with each epoch/iteration) until the human-defined goal has been met.
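
As a rough sketch of the loop I mean (all names hypothetical, with a toy scoring function standing in for a real robot simulator):

```python
import random

def walking_distance(gait):
    # Hypothetical stand-in for "run the robot in simulation and
    # measure how far it walked" - the human-defined desired outcome.
    ideal = [0.2, -0.5, 0.9]
    return -sum((g - i) ** 2 for g, i in zip(gait, ideal))

best = [random.uniform(-1, 1) for _ in range(3)]  # random initial gait
best_score = walking_distance(best)

for _ in range(1_000_000):  # millions of attempts, one tiny tweak per "epoch"
    candidate = [g + random.gauss(0, 0.05) for g in best]
    score = walking_distance(candidate)
    if score > best_score:  # keep only the slight improvements
        best, best_score = candidate, score
```

Every iteration is a "failed attempt" unless it scores better than the best so far. Nothing in the loop decides *what* outcome to pursue - that part came from the human.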

[–] Blueberrydreamer@lemmynsfw.com 4 points 10 months ago (1 children)

That definition would also apply to teaching a baby to walk.

[–] rambaroo@lemmy.world 4 points 10 months ago (1 children)

A baby isn't just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence, no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn't mean it's ever been used correctly.

It's actually stunning to me that people are so hyped on LLM bullshit that they're trying to argue it comes anywhere close to a sentient being.

[–] Blueberrydreamer@lemmynsfw.com -1 points 10 months ago

You completely missed my point obviously. I'm trying to get you to consider what "intelligence" actually means. Is intelligence the ability to learn? Make decisions? Have feelings? Outside of humans, what else possesses your definition of intelligence? Parrots? Mice? Spiders?

I'm not comparing LLMs to human complexity, nor do I particularly give a shit about them in my daily life. I'm just trying to get you to actually examine your definition of intelligence, as you seem to use something specific that most of our society doesn't.

[–] 0ops@lemm.ee 1 points 10 months ago

To be fair, I think we underestimate just how brute-force our intelligence developed. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since embryo. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square 1. It didn't get to practice kicking Mom in the liver for 9 months - we take it for granted, but that's a transferable skill.

Granted, this is not exactly analogous to how a neural network is trained, but I don't think it's wise to assume that there's something "magic" in us like a "soul," when the difference between biological and digital neural networks could be explained by our "richer" ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and by the need for a few more years/decades of incremental improvements to the models and hardware.

[–] PrinceWith999Enemies@lemmy.world -1 points 10 months ago

So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

[–] Pipoca@lemmy.world 2 points 10 months ago

Exactly.

AI, as a term, was coined in the mid-1950s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop ALGOL 60.

It's been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

[–] Fedizen@lemmy.world 1 points 10 months ago* (last edited 10 months ago) (2 children)

On the other hand, calculators can do things more quickly than humans; this doesn't mean they're intelligent or even on the intelligence spectrum. They take an input and provide an output.

The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like "algorithms" to "AI": it's not making a "decision," it's making a calculation - just making it very fast, based on a model, and driven by a prompt.

Actual intelligence doesn't just shut off the moment its prompted response ends - it keeps going.

[–] PrinceWith999Enemies@lemmy.world 1 points 10 months ago (1 children)

I think we’re misaligned on two things. First, I’m not saying that doing something quicker than a human can is what constitutes “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

[–] Fedizen@lemmy.world 0 points 10 months ago (1 children)

What I'm saying is current computer "AI" isn't on the spectrum of intelligence while a dog or grasshopper is.

[–] PrinceWith999Enemies@lemmy.world 1 points 10 months ago (1 children)

Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

[–] Fedizen@lemmy.world 1 points 10 months ago (1 children)

It's the 'why'. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions even operate in a separate area of the brain from executive function - and I'd argue that defining the tasks to accomplish and weighing the risks is the intelligent part. Humans do all of that for the robot.

Everything we call "AI" now should be called "EI," or "extended intelligence," because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

[–] PrinceWith999Enemies@lemmy.world 0 points 10 months ago

Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

But going further - do you think a gazelle isn't weighing risks while grazing? Do you think the complex behaviors of an ant colony aren't weighing risks when deciding to migrate or to send off additional colonies? They're indistinguishable mathematically - it's just that one is learning evolutionarily and the other, at least in theory, is able to learn within its own lifetime.

Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

Things like risk assessment are already well mathematically characterized. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.
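
For a sense of how standard that math is, here's a toy expected-utility version of the gazelle's grazing decision (all numbers invented purely for illustration):

```python
# Pick the action maximizing expected utility:
# the sum over outcomes of probability * payoff.
actions = {
    "keep grazing": [(0.95, +1.0), (0.05, -100.0)],  # (probability, payoff)
    "flee":         [(1.00, -0.2)],
}
expected = {name: sum(p * v for p, v in outcomes)
            for name, outcomes in actions.items()}
print(max(expected, key=expected.get))  # "flee": -0.2 beats -4.05
```

Whether those probabilities get tuned by evolution, by a lifetime of experience, or by gradient descent is exactly the distinction we're arguing about - the decision rule itself looks the same.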

[–] 0ops@lemm.ee 1 points 10 months ago

I personally wouldn't consider a neural network an algorithm, as chance is a huge factor: whether you're training or evaluating, you'll never get quite the same results.
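
To make that concrete, here's a minimal sketch: a one-weight "network" fit to y = 2x twice, identical except for the random seed (the setup is invented for illustration):

```python
import random

def train(seed):
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)                   # random initial weight
    for _ in range(100):
        x = rng.uniform(0, 1)                # samples arrive in random order
        w -= 0.1 * 2 * (w * x - 2 * x) * x   # gradient step on (wx - 2x)^2
    return w

print(train(seed=1), train(seed=2))  # both land near 2.0, never identical
```

With real models the sources of randomness only multiply - initialization, data shuffling, dropout, sampling at inference time - so two runs of the "same" network rarely agree exactly.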