this post was submitted on 10 Aug 2025
99 points (99.0% liked)

technology

[–] yogthos@lemmygrad.ml 27 points 1 month ago (3 children)

I don't think an AI necessarily has to have needs or wants, but it does need to have a world model. That's the shared context we all have and what informs our use of language. We don't just string tokens together when we think. We have a model of the world around us in our heads, and we reason about the world by simulating actions and outcomes within our internal world model. I suspect that the path to actual thinking machines will be through embodiment. Robots that interact with the world and learn to model it will be able to reason about it in a meaningful sense.
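
Roughly the kind of loop I have in mind, as a toy sketch (everything here is made up, it's just to illustrate picking actions by simulating their outcomes against an internal model first):

```python
# Toy agent that "thinks" by rolling candidate actions through its own
# internal model of the world and acting on the best simulated outcome.

def world_model(state, action):
    """The agent's internal model: predict the next state for an action.
    Here the 'world' is just a position on a number line."""
    return state + action

def score(state, goal):
    """How good a predicted state looks: closer to the goal is better."""
    return -abs(goal - state)

def plan(state, goal, actions=(-1, 0, 1), depth=3):
    """Simulate action sequences up to `depth` steps ahead in the internal
    model and return (best_total_score, first_action_of_best_sequence)."""
    best_value, best_action = float("-inf"), 0
    for action in actions:
        predicted = world_model(state, action)   # imagine, don't act yet
        value = score(predicted, goal)
        if depth > 1:
            future_value, _ = plan(predicted, goal, actions, depth - 1)
            value += future_value
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action

state, goal = 0, 5
for _ in range(6):
    _, action = plan(state, goal)       # reason by simulating outcomes...
    state = world_model(state, action)  # ...then act (here the real world matches the model)
print(state)  # reaches 5 and stays there
```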

[–] InappropriateEmote@hexbear.net 10 points 1 month ago (2 children)

This is one of those things that starts getting into the fuzzy area around the unanswered questions of what exactly qualifies as qualia and where it first appears. But having needs/wants probably is a necessary condition for actual AI if we're defining actual (general) AI as having self-awareness. In addition to what @Awoo@hexbear.net said, here's another thing.

You mention that AI probably has to have a world model as a prerequisite for genuine self-aware intelligence, and this is true. But part of that is that the world model has to be accurate at least insofar as it allows the AI to function. Maybe it can even have an inaccurate fantasy-world model, but it still has to model a world close enough to reality that it's a world the AI can exist in; in other words the world model can't be random gibberish, because intelligence would be meaningless in such a world and it wouldn't even be a "world model." All of that is mostly beside the point except to establish that AI has to have a world model that approaches accuracy with the real world. So in that sense it already "wants" an accurate world model. But it's a bit of a chicken-and-egg problem: does the AI only "want" an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built into it by its creators? That directionality towards an accurate world model has to be built in just to get the thing to work; it would be part of the programming long before the AI ever gains sentience (aka the ability to experience, self-awareness), and it won't just disappear when the AI does gain sentience. That pre-awareness directionality, which by necessity still exists, can then be said to be a "want" in the post-awareness general AI.

An analogy of this same sort of thing as it plays out in us bio-intelligence beings: we "want" to avoid death, to survive (setting aside edge cases that actually prove the rule, like how extreme an emotional state a person has to be in to be suicidal). That "want" is a result of evolution having ingrained into us a desire (a "want") to survive. But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words, to replicate) is just an emergent property of the fact that things better able to replicate in a given environment will replicate more than things less able to. When did that simple mathematical fact about replication efficiency turn into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity, where brains had evolved to the extent that self-awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo, because it shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience, and how that experience wouldn't have come about without that initial directionality.

Wants/needs almost certainly do have to be part of any actual intelligence. One reason is that those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.


It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make these weird sentences easier to parse, but maybe I just made it worse.

[–] yogthos@lemmygrad.ml 3 points 1 month ago (1 children)

I'd argue that the idea of self-awareness or needs/wants is tangential to the notion of qualia. A system can be self-aware and develop needs without necessarily being conscious or having an internal experience. What needs and wants really boil down to is that the system is trying to maintain a particular state. To maintain homeostasis, the system needs to react to external inputs and take actions that keep it in the desired state. For example, a thermostat could be said to have a "need" to maintain a particular temperature, but it could hardly be argued that it has some sort of qualia.
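
The thermostat's whole "need" is just a loop like this (a toy sketch, numbers made up):

```python
# A thermostat's "need" is just this: keep a variable near a setpoint.
# It reacts to input and acts to restore the desired state, with no
# inner experience involved anywhere.

SETPOINT = 21.0  # desired temperature in degrees C

def thermostat_step(temperature, heater_on):
    """Decide whether the heater should be on, given the current reading."""
    if temperature < SETPOINT - 0.5:
        return True      # too cold: the system "wants" heat
    if temperature > SETPOINT + 0.5:
        return False     # too warm: the system "wants" to cool down
    return heater_on     # inside the comfort band: keep doing what it was doing

# crude simulation: the room loses heat, the heater adds it back
temperature, heater_on = 17.0, False
for _ in range(20):
    heater_on = thermostat_step(temperature, heater_on)
    temperature += 0.5 if heater_on else -0.5
print(round(temperature, 1))  # oscillates within about a degree of the setpoint
```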

Why sentience exists is a really interesting question in itself, in my opinion, as it's not an obviously necessary quality of a self-aware system. I suspect it may be related to having a theory of mind. When a system starts to model itself, perhaps you end up with some sort of resonance where it thinks about its own thoughts, and that's what creates internal experience.

We also have to define what we mean by intelligence here. My definition would be a system that has a model of a particular domain and is able to make judgments about the outcomes of different actions. I don't think mere intelligence requires self-awareness or consciousness.

[–] Philosoraptor@hexbear.net 3 points 1 month ago

I'd argue that the idea of self-awareness or needs/wants is tangential to the notion of qualia.

This is right. Having things like beliefs and desires is called "intentionality," and is orthogonal to both sentience/sapience and first-person subjectivity (qualia). You can have beliefs and desires without any accompanying qualitative experience and vice versa.

[–] Awoo@hexbear.net 4 points 1 month ago (2 children)

Machine learning requires needs or wants in order to evolve. If your model is going to learn how to utilise energy efficiently between recharges, then it needs to desire energy (a need/want). That's just the "eat" and "collect water" part of learning. Then you give it predators so it learns how to avoid being killed while doing all of that, i.e. survival methods. Add complexity to the environment over time and it'll learn more and more.

Reproduction probably needs some sort of social cues to learn, the ability to communicate with other models that they wish to reproduce, or the ability to start working in teams.

It all requires needs/wants. Having needs is the basis of all animal intelligence evolving more efficient ways of doing things.
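
To be concrete, in the kind of setup I'm describing the "need" is nothing more than what goes into the reward signal (a toy sketch, every number made up):

```python
# Sketch: the "needs" of a learning agent are just terms in its reward.
# Whatever learns against this signal ends up behaving as if it "wants"
# energy and "wants" to avoid predators -- because we built the drive in.

def reward(energy, ate_food, drank_water, caught_by_predator):
    r = 0.0
    r += 5.0 if ate_food else 0.0                 # "eat"
    r += 3.0 if drank_water else 0.0              # "collect water"
    r -= 0.1 * max(0.0, 50.0 - energy)            # hunger pressure as energy runs low
    r -= 100.0 if caught_by_predator else 0.0     # survival: getting caught is very bad
    return r

# the learning algorithm on top (Q-learning, neuroevolution, whatever)
# only ever sees this number; the "want" lives in how we shaped it
print(reward(energy=20.0, ate_food=True, drank_water=False,
             caught_by_predator=False))           # 5.0 - 3.0 = 2.0
```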

[–] purpleworm@hexbear.net 3 points 1 month ago (1 children)

I think "adding predators" is kind of cargo cult, and you can just give it obstacles that are actually relevant to problems it could conceivably need to solve, because we aren't making AI with the main hope that they can successfully survive a wolf attack, and it doesn't need to just completely recapitulate what humans did to evolve when it is fundamentally not an early human and we don't want to use it for those conditions.

[–] Awoo@hexbear.net 1 points 1 month ago (1 children)

It's going to have a completely different development path without any kind of predators. Almost everything in our world is either prey, predator, or both.

I struggle to believe that it will be well adapted to a human reality where predation exists (where humans are sometimes predators to other humans too, socially) without a development path that prepares it for that reality.

it doesn't need to just completely recapitulate what humans did to evolve when it is fundamentally not an early human and we don't want to use it for those conditions.

We don't know how to skip straight to human-level intelligence. We need to create wild-animal-level artificial intelligence before we can understand how to improve it into human-level intelligence. If we can't make artificial monkeys or macaws in terms of emotional/social intelligence and problem-solving, then we certainly can't make anything akin to humans, or whatever next-level intelligence we're hoping to get when an AI can make better versions of itself, iteratively progressing technology to levels we haven't imagined yet.

[–] purpleworm@hexbear.net 2 points 1 month ago (1 children)

My point is that it's a different kind of thing. Role-playing that it's an animal in the wild instead of a cogitating tool is counterproductive. We aren't going to send it out into the jungle to survive there; we want it to deal with human situations and help people, with its own "survival" being instrumental to that end. Even if it encounters a hostile human, it probably won't be able to do anything about it, because we aren't immediately talking about building androids; this AI will effectively be a big bundle of computers in a warehouse. If you want to give an experimental AI control over the security systems of the building housing it, go off I guess, but containing an intruder by . . . locking the doors (there being nothing else it can really do) while calling the cops and the owner of the facility isn't "avoiding predators" except as a strained metaphor, and requires no engagement in this "surviving in the wild" role-play. It's just identifying a threat and then doing something that, the threat being identified, any modern computer can do. If you want to say it needs to learn to recognize "threats" (be they some abstraction in a game simulation, a fire, a robber, or a plane falling out of the sky), sure, that's fair; that falls within obstacles it might actually encounter.

Nothing I'm saying bears on the level of intelligence it exhibits or that it is capable of. I'm not saying it needs to handle these things as well as a human would, just that it needs to be given applicable situations.

[–] Awoo@hexbear.net 1 points 1 month ago (1 children)

I feel like you've misunderstood me. You're talking consistently about one single AI.

Machine learning is not one AI. It is thousands of generations of AI that have iteratively improved over time through their successes and failures. The best-performing of each generation go on to form the basis of the next, or you have survival mechanics that automatically form new generations.

This isn't training a model by giving something input data. Entire neural networks we do not understand are formed through an attempt at creating an artificial natural selection.
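
A bare-bones sketch of what I mean by generations (the fitness function and numbers here are stand-ins, not any real training setup):

```python
import random

# Bare-bones artificial selection: score a population of "genomes", let the
# best of each generation seed the next with small mutations, and repeat.

def fitness(genome):
    """Stand-in for 'how well this individual ate, drank and avoided predators'.
    Here it's just how close the genome's numbers land to a hidden target."""
    target = [0.3, -0.8, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, generations=200, mutation=0.1):
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 5]
        population = [
            [g + random.gauss(0, mutation) for g in random.choice(survivors)]
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

print([round(g, 2) for g in evolve()])  # drifts towards the hidden target
```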

If your process isn't going to be similar to humans, you aren't going to produce something similar to humans. I honestly think that's dangerous in and of itself; you're creating something that might have a brain network that is fundamentally at odds with coexistence with humanity.

[–] purpleworm@hexbear.net 1 points 1 month ago

This isn't training a model by giving something input data. Entire neural networks we do not understand are formed through an attempt at creating an artificial natural selection.

If your process isn't going to be similar to humans, you aren't going to produce something similar to humans. I honestly think that's dangerous in and of itself; you're creating something that might have a brain network that is fundamentally at odds with coexistence with humanity.

But you're able to designate its goals completely arbitrarily. It doesn't need to think like a human -- there are humans who have been at odds with coexistence with humanity -- it needs to be constructed around the value of human benefit, and you can seriously just tell it that. That isn't changed by it cogitating in a structurally different way, which it almost certainly would be doing anyway, because the way we think is highly adapted to early humanity but is structurally based on random incidents of mutation before then. Something could think very differently and nonetheless be just as capable of flourishing in those circumstances. This difference is compounded by the fact that you probably aren't going to produce an accurate simulation of an early human environment, because you can't just make a functional simulation of macroscopic reality like that. Even granting that your method made sense, it would still ultimately fall back on the kind of arbitrary stipulation I'm describing, because the model environment would be based on human heuristics.

But way more important than that is the part where you, again, can just tell it that human benefit based on human instructions is the primary goal and it will pursue that, handling things like energy acquisition and efficiency secondarily.
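
And "primary goal with energy handled secondarily" can literally just be how the objective is written down (completely made-up weights, purely illustrative):

```python
# Sketch: "human benefit is primary, energy/efficiency is secondary" is just
# a statement about how the objective is weighted (or lexically ordered).

def objective(human_benefit, energy_saved):
    # the huge weight on the primary term means no amount of saved energy
    # can outweigh even a small loss of human benefit
    return 1000.0 * human_benefit + 1.0 * energy_saved

plans = {
    "help the user, burn some power": objective(human_benefit=1.0, energy_saved=-5.0),
    "save power, ignore the user":    objective(human_benefit=0.0, energy_saved=50.0),
}
print(max(plans, key=plans.get))  # "help the user, burn some power"
```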

[–] yogthos@lemmygrad.ml 1 points 1 month ago (1 children)

It requires needs or wants to be self-directed. The needs can also be externalized, the way we do with LLMs today. The user prompt can generate a goal for the system, and then it will work to accomplish it. That said, I entirely agree that self-directed systems are more interesting. If a system has needs that make it want to maintain homeostasis, such as keeping an optimal energy level, then it can act and learn autonomously.
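
As a sketch of the two goal sources feeding the same loop (all the names here are invented, just to illustrate the distinction):

```python
# Sketch: the same act-on-a-goal loop, with the goal either handed in from
# outside (a user prompt, as with LLMs today) or produced internally by a
# homeostatic drive (maintain an optimal energy level).

ENERGY_SETPOINT = 100.0

def external_goal(user_prompt):
    """Externalized need: the goal is whatever the user asked for."""
    return user_prompt

def internal_goal(energy):
    """Self-directed need: a goal appears whenever energy drifts too low."""
    return "recharge" if energy < ENERGY_SETPOINT * 0.5 else "idle"

def act(goal):
    return f"working towards: {goal}"

print(act(external_goal("summarise this paper")))  # goal supplied from outside
print(act(internal_goal(energy=30.0)))             # goal generated by the system's own need
```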

[–] Awoo@hexbear.net 2 points 1 month ago* (last edited 1 month ago) (1 children)

The user prompt can generate a goal for the system, and then it will work to accomplish it.

Ok but how is it getting intelligent before the user prompt?

The AI isn't useful until it is grown and evolved. I'm talking about the earlier stages.

[–] yogthos@lemmygrad.ml 1 points 1 month ago

We can look at examples of video-generating models. I'd argue they have to have a meaningful and persistent internal representation of the world. Consider something like Genie as an example: https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/

It doesn't have volition, but it does have intelligence in the domain of creating consistent simulations. So it does seem like you can get domain-specific intelligence through reinforcement training.