[–] Awoo@hexbear.net 33 points 1 month ago (41 children)

LLMs have reached their limits. No matter what you do with one, it's always going to be a glorified search engine.

AI has to be conceived from the ground up as something that learns and reproduces actual thinking based on needs/wants. A system that evolves ways of walking that reduce a bot's energy use while also seeking out energy sources might only be reproducing the cognitive behaviour of a bacterium, but it's closer to life than these LLMs, and it has more potential to iteratively evolve into something more complex as you give it more wants/needs for its program to evolve on.
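To make that concrete, here's a toy sketch of what "a program that evolves on its needs" could mean at the most basic level. Everything in it is invented for illustration (the energy budget, the cost model, the single "stride" parameter): the agent's only needs are to reach food and to waste as little energy doing it as possible, and its "behaviour" is just whichever gait best satisfies those needs.

```python
import random

# Toy setup (all numbers invented): an agent has a fixed energy budget and must
# reach a food cell some distance away. Its one behavioural parameter is stride
# length; longer strides cover ground faster but cost more energy per step.
DISTANCE_TO_FOOD = 100
START_ENERGY = 500.0

def energy_left_after_walk(stride):
    """Simulate one walk; the 'need' is to arrive with energy to spare."""
    position, energy = 0.0, START_ENERGY
    while position < DISTANCE_TO_FOOD and energy > 0:
        position += stride
        energy -= 1.0 + 0.05 * stride ** 2  # per-step cost grows with stride
    return energy if position >= DISTANCE_TO_FOOD else float("-inf")

# Hill-climb the gait: keep any mutation that better satisfies the energy need.
stride = 1.0
for _ in range(2000):
    candidate = max(0.1, stride + random.gauss(0, 0.5))
    if energy_left_after_walk(candidate) > energy_left_after_walk(stride):
        stride = candidate

print(f"evolved stride: {stride:.2f}, energy left: {energy_left_after_walk(stride):.1f}")
```

It's bacterium-level behaviour at best, but that's the point: the behaviour exists because of the needs, and giving it more needs gives it more to evolve on.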

Machine learning has more potential than this shit.

[–] yogthos@lemmygrad.ml 27 points 1 month ago (37 children)

I don't think an AI necessarily has to have needs or wants, but it does need to have a world model. That's the shared context we all have and what informs our use of language. We don't just string tokens together when we think. We have a model of the world around us in our heads, and we reason about the world by simulating actions and outcomes within our internal world model. I suspect that the path to actual thinking machines will be through embodiment. Robots that interact with the world and learn to model it will be able to reason about it in a meaningful sense.
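Here's a crude sketch of the difference, with everything in it (the grid, the obstacle, the scoring) made up for illustration. The point is only that the agent holds an explicit model of its surroundings and plays each candidate action out inside that model before choosing, rather than emitting whatever continuation looks most likely.

```python
# A toy internal world model: the agent's beliefs about where it is,
# where an obstacle is, and where it wants to end up.
world_model = {"position": (0, 0), "obstacles": {(1, 0)}, "goal": (2, 0)}

ACTIONS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def simulate(model, action):
    """Predict the next position inside the model, without acting in the real world."""
    dx, dy = ACTIONS[action]
    x, y = model["position"]
    nxt = (x + dx, y + dy)
    return None if nxt in model["obstacles"] else nxt

def score(model, position):
    """Judge a predicted outcome: closer to the goal is better, collisions are worst."""
    if position is None:
        return float("-inf")
    gx, gy = model["goal"]
    return -(abs(gx - position[0]) + abs(gy - position[1]))

best = max(ACTIONS, key=lambda a: score(world_model, simulate(world_model, a)))
print("chosen action:", best)  # sidesteps the obstacle instead of walking into it
```

Embodiment, in this framing, is what keeps the model honest: a robot that simulates badly bumps into things.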

[–] InappropriateEmote@hexbear.net 10 points 1 month ago (2 children)

This is one of those things that starts getting into the fuzzy area around the unanswered questions regarding what exactly qualifies as qualia and where it first appears. But having needs/wants probably is a necessary condition for actual AI, if we're defining actual (general) AI as having self-awareness. In addition to what @Awoo@hexbear.net said, here's another thing.

You mention that an AI probably has to have a world model as a prerequisite for genuine self-aware intelligence, and this is true. But part of that is that the world model has to be accurate, at least in so far as it allows the AI to function. Maybe it can even have an inaccurate fantasy-world model, but it still has to model a world close enough to reality that it's a world the AI can exist in; in other words, the world model can't be random gibberish, because intelligence would be meaningless in such a world and it wouldn't even be a "world model." All of that is mostly beside the point, except to point out that an AI has to have a world model that approaches accuracy with the real world.

So in that sense it already "wants" to have an accurate world model. But it's a bit of a chicken-and-egg problem: does the AI only "want" an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built into it by its creators? That directionality towards accuracy in its world model is built in. It has to be, in order to get it to work at all. The accuracy-approaching world model would have to be part of the programming put into it long before it ever gains sentience (aka the ability to experience, self-awareness), and that directionality won't just disappear when the AI does gain sentience. That pre-awareness directionality, which by necessity still exists, can then be said to be a "want" in the post-awareness general AI.

Here's an analogy for the same sort of thing, but as it plays out in us biological intelligences. We "want" to avoid death, to survive (setting aside edge cases that prove the rule, like how extreme an emotional state a person has to be in to be suicidal). That "want" is a result of evolution, which has ingrained in us a desire to survive. But evolution itself doesn't "want" anything; it just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code, in other words to replicate, is just an emergent property of the fact that things better able to replicate in a given environment will replicate more than things less able to.

When did that simple mathematical fact about replication efficiency get turned into a genuine desire to survive? Somewhere along the ladder of evolutionary complexity, once brains had evolved to the point that self-awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make them up. This is just one example, but a pretty good one imo, because it shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience, and how that experience wouldn't have come about without that initial directionality.
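That "directionality without wanting" is easy to see in a toy simulation. The numbers below are arbitrary and nothing in the code desires anything; the faster copier ends up dominating purely because of how replication rates compound.

```python
# Toy replicator dynamics (all numbers arbitrary): two replicator types that
# differ only in how fast they copy themselves. Neither one "wants" anything.
population = {"slow": 1000.0, "fast": 1000.0}
RATES = {"slow": 1.01, "fast": 1.03}   # expected copies per individual per generation
CAPACITY = 10_000                      # the environment only supports so many

for generation in range(200):
    # each type replicates at its own rate...
    population = {k: n * RATES[k] for k, n in population.items()}
    # ...and the environment culls back to carrying capacity, indiscriminately
    total = sum(population.values())
    if total > CAPACITY:
        population = {k: n * CAPACITY / total for k, n in population.items()}

share = population["fast"] / sum(population.values())
print(f"after 200 generations the faster replicator is {share:.1%} of the population")
```

Nothing in there survives because it wants to; the wanting, per the argument above, only shows up much later, in brains that this blind arithmetic eventually built.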

Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons is that those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.


It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make it so that what I was attempting to convey with these weird sentences could be parsed better, but maybe I just made it worse.

[–] yogthos@lemmygrad.ml 3 points 1 month ago (1 children)

I'd argue that the idea of self awareness or needs/wants is tangential to the notion of qualia. A system can be self-aware and develop needs without necessarily being conscious and having an internal experience. What needs and wants really boil down to is that the system is trying to maintain a particular state. To maintain homeostasis, the system needs to react to external inputs and take actions that keep it in the desired state. For example, a thermostat could be said to have a "need" to maintain a particular temperature, but it could hardly be argued that it has some sort of qualia.
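To make the thermostat point concrete, here's a minimal homeostat; the setpoint and the crude room dynamics are arbitrary. It maintains a state against disturbances, which is all the "need" amounts to, and there's nothing here one could plausibly call an internal experience.

```python
import random

SETPOINT = 21.0      # the state the system "needs" to maintain
temperature = 18.0
heater_on = False

for minute in range(60):
    temperature += 0.3 if heater_on else -0.2   # crude room dynamics
    temperature += random.uniform(-0.1, 0.1)    # outside disturbances
    heater_on = temperature < SETPOINT          # the entire "want"

print(f"temperature after an hour: {temperature:.1f} °C")
```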

Why sentience exists is a really interesting question in and of itself, in my opinion, as it's not an obviously necessary quality of a self-aware system. I suspect it may be related to having a theory of mind. When a system starts to model itself, then perhaps you end up with some sort of resonance where it thinks about its own thoughts, and that's what creates internal experience.

We also have to define what we mean by intelligence here. My definition would be a system that has a model of a particular domain and is able to make judgments regarding the outcomes of different actions. I don't think mere intelligence requires self-awareness or consciousness.

[–] Philosoraptor@hexbear.net 3 points 1 month ago

I'd argue that the idea of self awareness or needs/wants is tangential to the notion of qualia.

This is right. Having things like beliefs and desires is called "intentionality," and is orthogonal to both sentience/sapience and first-person subjectivity (qualia). You can have beliefs and desires without any accompanying qualitative experience and vice versa.
