We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word, or fragment of a word, will come next in a sequence – based on the data it’s been trained on.
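To make that concrete, here is a minimal toy sketch of that guess-the-next-word loop. Everything in it is invented purely for illustration (the tiny vocabulary and the probabilities come from no real model); a real LLM computes its probabilities with billions of learned parameters, but the surrounding loop is essentially the same.

```python
import random

# Toy "language model": for each two-word context, a hand-made
# probability distribution over possible next words. The numbers
# below are made up for illustration only.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(context, max_new_words=4):
    words = list(context)
    for _ in range(max_new_words):
        dist = toy_model.get(tuple(words[-2:]))
        if dist is None:
            break
        # Pick the next word by weighted guessing - no understanding involved.
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```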
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
Exactly. People see “AI” and think LLMs and diffusion models. Those are both probabilistic translation engines. They’re no more intelligent than an AC/DC converter, just a lot more complex.
However, there are neural networks and sense arrays in the field of AI, and those are designed to replicate the process of thought.
The real route to a thinking AI is likely a combination of the two, where a neural network can call on expert systems including translation engines to do the heavy lifting and then run a more nuanced decision tree over the results.
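As a loose sketch of that idea, the shape of such an orchestrator might look something like the following. Everything here is invented for illustration: the "expert" functions are stubs standing in for heavy statistical or symbolic systems, not any real API.

```python
# Rough sketch of the hybrid setup described above: a small controller
# routes sub-tasks to specialised "expert" functions (stubs invented
# here for illustration) and then applies an explicit decision rule
# over their raw outputs.

def translation_expert(text: str) -> str:
    """Stand-in for a heavy statistical translation engine."""
    return text[::-1]  # placeholder transformation

def arithmetic_expert(a: float, b: float) -> float:
    """Stand-in for an exact symbolic/numeric solver."""
    return a + b

def controller(task: dict) -> dict:
    """Route the task to an expert, then decide what to do with the answer."""
    if task["kind"] == "translate":
        result = translation_expert(task["text"])
    elif task["kind"] == "add":
        result = arithmetic_expert(task["a"], task["b"])
    else:
        return {"status": "rejected", "reason": "no expert for this task"}

    # The "more nuanced decision tree" step: an explicit rule layer
    # sitting on top of the experts' outputs instead of trusting them blindly.
    if task["kind"] == "translate" and result == "":
        return {"status": "retry", "reason": "empty translation"}
    return {"status": "ok", "result": result}

print(controller({"kind": "add", "a": 2, "b": 3}))         # {'status': 'ok', 'result': 5}
print(controller({"kind": "translate", "text": "hello"}))  # {'status': 'ok', 'result': 'olleh'}
```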
Thing is, modern LLMs and diffusion models are already more complex than a single human mind can fully comprehend, so we default to internally labelling them as either “like us” or “magic”, even when we theoretically know them to be nothing but really deep predictive models.
The problem is in the definition of intelligence.
To me, intelligence is simply problem-solving ability. It does not necessarily imply consciousness, having self-awareness or anything like that. A simple calculator is already displaying intelligence, even if limited to a very narrow situational set of problems, in the sense that it can resolve mathematical questions.
That doesn't mean the calculator is self-aware, it just means it can resolve problems. Biological systems can also resolve problems without necessarily being aware of what they are doing. Does the fungus actually know it's solving the maze the scientists prepared for it, when it just expands following what is preprogrammed by biological instincts shaped by natural selection? Do the ants really know what they are doing when they find the shortest path just by instinctively following a scent of pheromones left by other ants?
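The ant case is a nice example of problem solving falling out of blind local rules. Here's a toy simulation, with numbers and update rules I made up loosely in the spirit of ant colony optimisation, where the shorter of two paths ends up dominating simply because pheromone accumulates on it faster:

```python
import random

# Toy ant-colony sketch: two paths to the food, one short and one long.
# Ants pick a path in proportion to its pheromone level; shorter trips
# deposit pheromone more often, so the short path gets reinforced and
# ends up carrying most of the traffic - no ant "understands" why.
paths = {"short": 1.0, "long": 2.0}      # path lengths (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}  # start with no preference
EVAPORATION = 0.95

for _ in range(200):
    total = sum(pheromone.values())
    choice = random.choices(list(paths), weights=[pheromone[p] / total for p in paths])[0]
    # Shorter paths are completed faster, so they receive more pheromone
    # per unit time (deposit inversely proportional to length).
    pheromone[choice] += 1.0 / paths[choice]
    for p in pheromone:
        pheromone[p] *= EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on the short path: {share:.0%}")  # typically well above 50%
```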
Knowing exactly what causes consciousness is an entirely different problem, and it's one that has not been resolved by any scientist or philosopher in a satisfactory manner. So we simply do not know that.
Seems to me your definition of intelligence ignores whole aspects of true intelligence, at least of the human kind, such as emotional intelligence and social intelligence and artistic intelligence and moral intelligence...
"Problem solving" is the name for what you described and it doesn't necessarily require intelligence. In fact most intelligent people have encountered situations where it made solving a problem more difficult.
Yes, there are as many types of intelligence as there are types of problems. Emotional intelligence deals with emotional problems, social intelligence deals with social problems. This doesn't conflict with my definition; it's still problem solving.
Just because a being is intelligent does not mean it can solve problems of all kinds; that would require general intelligence, and even a generally intelligent being needs the right training. If you are trained wrong, or trained for a different kind of problem that does not fit the current one, then your prior experience might actually get in the way, as you point out.
Slime mold can solve mazes.
Yes, that's what I meant two comments above by "fungus" (though to be fair, whether slime molds are fungi depends on your definition: they used to be classified as fungi before the "protist" kingdom was made up to lump together protozoa, algae and molds, but I still prefer the traditional autotroph / absorptive heterotroph / digestive heterotroph division).
I also mentioned ants who can find the optimal path by simply following scents left by other ants without understanding how this helps with that.
You can be intelligent without being aware of your intelligence, or you can be stupid without being aware of your stupidity... like how humans are actually creating problems for themselves in many cases.
Intelligence != awareness
If your definition of intelligence doesn't include awareness it's not very useful.
I don't know, I feel it's actually the opposite. Awareness is something you can only experience subjectively, it's "qualia", a quality that you cannot measure outside of yourself or detect externally. There's a reason IQ ("intelligence" quotient) tests use puzzles/problems and don't test conscious awareness. Most of the time in science intelligence is defined as problem solving and capacity to adapt/extrapolate because that definition makes it observable and more scientifically useful.
If it were to include awareness, then we couldn't in good faith answer the question "is it intelligent?"; we could only say we don't know. This is the main struggle of philosophy of the mind, what is often called "the hard problem of consciousness". Empirical analysis cannot show whether or not something is having the conscious experience of being aware.
Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.
There's a reason philosophy of the mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things which clearly don't have minds and aren't aware and can't think. That's a problem.
Why is it a problem?
Generally, I'd say having clear, specific and useful definitions is a good thing to help communicate and understand what we are talking about and avoid misinterpretations.
What is the reason you think philosophy of the mind exists as a field of study?
In part, so we don't assign intelligence to mindless, unaware, unthinking things like slime mold - it's so we keep our definitions clear and useful, so we can communicate about and understand what intelligence even is.
What you're doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is the one the AI companies use, has made people more confused than ever about "intelligence" and only serves the companies' interest in generating hype and attracting investor cash.
There are many philosophers of the mind that agree that intelligence and consciousness are separate things.
Some examples are Daniel Dennett and John Searle.
There are also currents of thought in philosophy of the mind that disagree that even things like "slime mold" are mindless, both from the materialist direction (like panpsychism) and from the idealist direction (Bernardo Kastrup's idealism).
Most philosophers of the mind would disagree that the reason their field exists has anything to do with defending any specific terminology or position. I'd say it has more to do with curiosity and an interest in seeking truth, like most fields of philosophy.
I'd argue it's your definition, which includes consciousness, that makes AI an attractive term for investors. Precisely because you say intelligence includes awareness, it can lead people to misinterpret AI as self-aware.
Promoting your definition helps the interests of the companies who want to generate hype, and causes just as much confusion as you attribute to mine in that regard.
At least mine is simpler and makes it easier to invalidate the hype, since if intelligence doesn't imply awareness, then calling AI intelligent doesn't imply it's aware. Many philosophers agreed with that for years before LLMs were a thing; John Searle, for example, is famous for the Chinese room thought experiment.