[–] InappropriateEmote@hexbear.net 10 points 4 weeks ago (2 children)

This is one of those things that starts getting into the fuzzy area around the unanswered questions of what exactly qualifies as qualia and where it first appears. But having needs/wants probably is a necessary condition for actual AI, if we're defining actual (general) AI as having self-awareness. In addition to what @Awoo@hexbear.net said, here's another thing.

You mention that AI probably has to have a world model as a prerequisite for genuine self-aware intelligence, and this is true. But part of that is that the world model has to be accurate, at least insofar as it allows the AI to function. Maybe it could even have an inaccurate, fantasy-world world model, but it still has to model a world close enough to reality that it's a world the AI can exist in; in other words, the world model can't be random gibberish, because intelligence would be meaningless in such a world and it wouldn't even be a "world model." All of that is mostly beside the point, except to establish that an AI has to have a world model that approaches accuracy with the real world. So in that sense it already "wants" an accurate world model. But it's a bit of a chicken-and-egg problem: does the AI only "want" an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built into it by its creators? That directionality towards an accurate world model is built in; it has to be in order to get the thing to work at all. It would have to be part of the programming long before the AI ever gains sentience (i.e. the ability to experience, self-awareness), and that directionality won't just disappear when the AI does gain sentience. That pre-awareness directionality, which by necessity still exists, can then be said to be a "want" in the post-awareness general AI.

An analogy of this same sort of thing but as it is with us bio-intelligence beings: We "want" to avoid death, to survive (setting aside edge cases that actually prove the rule like how extreme of an emotional state a person has to be in to be suicidal). That "want" is a result of evolution that has ingrained into us a desire (a "want") to survive. But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo that shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience. And also how that experience wouldn't have come about if it weren't for that initial directionality.
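
To make that "simple mathematical fact" concrete, here's a minimal sketch of differential replication (my own toy illustration; every number is invented). Notice that nothing in the rules mentions wanting; the faster type comes to dominate purely as a consequence of compounding arithmetic:

```python
# Two replicator types sharing one environment. Nothing here "wants"
# anything; type A ends up dominating only because it copies itself
# slightly faster under a shared resource ceiling.
pop = {"A": 100.0, "B": 100.0}   # starting populations (invented)
rate = {"A": 1.10, "B": 1.05}    # per-generation replication factors (invented)
CAPACITY = 1_000_000.0           # shared carrying capacity (invented)

for generation in range(200):
    for t in pop:                # each type replicates at its own rate
        pop[t] *= rate[t]
    total = sum(pop.values())
    if total > CAPACITY:         # the environment caps the total population
        for t in pop:
            pop[t] *= CAPACITY / total

total = sum(pop.values())
for t, n in pop.items():
    print(f"type {t}: {n / total:.4%} of the population")
```

The "directionality" lives entirely in that compounding arithmetic, not in any goal held by the replicators; describing type A as "wanting" to spread is a gloss we apply to the outcome.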

Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons for that is that those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.


It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make it so that what I was attempting to convey with these weird sentences could be parsed better, but maybe I just made it worse.

[–] semioticbreakdown@hexbear.net 3 points 4 weeks ago (2 children)

But evolution itself doesn't "want" anything. It just has directionality towards making better replicators

But evolution is actively participated in and directed by the unbroken process of life. The need to avoid death is prior to the existence of evolution. It can't be just the result of an imposition on sentient life, because it's a necessary condition of the autopoietic processes that define life itself, of which evolution is an extension. Evolution isn't even defined by making better replicators really. A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible. Population steady-state flows happen in nature quite often. When the dissipative structures that formed proto-life cordoned off from the world through cell boundaries, avoiding death really did become a need in order to continue. It really is a kind of want, not just the appearance of one (though not a mental want, since there is no mind yet): to maintain tension between the world and itself and to propagate itself.

It happened somewhere along the ladder of evolutionary complexity where brains had evolved to the extent that self awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains

I don't think it's as much from the neurons themselves as it is the whole inference/action dialectic and the world/organism dialectic. All mental phenomena are secondary to, and originally in service of, acting on the world to maintain the boundary between the organism and the world, and are necessarily indistinguishable from making judgements of, and causal/predictive inference on, the world itself. Self-awareness resulted from real material pressures, actually existing relations between organisms, and the need to distinguish the self and the other for appropriate action. I'd also argue that the genuine desire to survive as a psychic phenomenon has always existed at least from the first time a neural organism perceived the world, identical to qualia. It's not necessary to have self-awareness for that. Want as a mental phenomenon exists prior to self-awareness; the latter results from the attribution of causes to the body. The model of the world experienced and embodied by an artificial sentience doesn't need to distinguish itself in the immediate term, until doing so becomes necessary for its continued existence and for further certainty in the attribution of causes.

[–] purpleworm@hexbear.net 4 points 4 weeks ago (1 children)

Evolution isn't even defined by making better replicators really. A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible.

That's called a bad replicator for the purposes of this discussion: destroying the conditions required for its own replication to continue is not conducive to replication, and therefore a replicator that does that is bad.

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago (1 children)

Yes, but I'm saying that that is a fundamental contradiction which means they're not just replicators, because their fitness as a replicator

... that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment

may be at odds with the continuation of the species as a process because of the interaction between replication and the environment.

While you can then make the argument that, because this isn't conducive to replication in the long term, they're less fit replicators, I think that for a given environment, if the one less effective at replication is the one that continues, then categorically speaking they're really something else, and focusing only on replication misses a large portion of the picture. I'm also saying that replication isn't essential to the self-maintaining process on the individual level; it's just the means of the continuation of that process beyond an individual instance. I understand what you mean, but I don't think they should be treated as replicators, because I don't think replication is the fundamental driving force of these processes.

[–] purpleworm@hexbear.net 3 points 4 weeks ago (1 children)

If "good replicator" is just being defined as personally producing a whole bunch of offspring, then I think it's just not a helpful term. A good replicator should be something that replicates effectively, not just a lot, and what you are describing as "less effective at replication" is clearly more effective at replication, relatively speaking, if its offspring are still around and its competitors' are not. You would hardly say something is a good replicator if it produced an unfathomable number of offspring and then just ate them all, right?

I'm also saying that replication isn't essential to the self-maintaining process on the individual level

How is this relevant? No one was contradicting this idea, even implicitly; it's just not a meaningful factor in the discussion, for the reason you go on to note.

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago (1 children)


If "good replicator" is just being defined as personally producing a whole bunch of offspring, then I think it's just not a helpful term.

They phrased it in terms of less and more offspring, and I quoted it as such. If you want to argue that's not what they said and take beef with my post as a result, that's fine. If you want to define a good replicator as one that can continue replicating over time, that's OK too (and I would agree with that), but again, I think it's attributional and not essential.

How is this relevant? No one was contradicting this idea, even implicitly; it's just not a meaningful factor in the discussion, for the reason you go on to note.

But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive?

This does seem to imply replication as the fundamental function of an autopoietic process, at least to me, and that's what I was referencing. All I was trying to get at is that the appearance of "wanting" to survive, as the original poster put it, isn't related to replication, and attributing the desire to live to an imposition by evolution is inaccurate, because that desire is a direct extension of autopoiesis, which is essential to the organism and exists prior to evolutionary (and replicative) processes. I think this has direct implications for the development of real intelligence in an AI system. I'm not going to reply after this, because I don't think I'm explaining my perspective well and I don't want to argue anymore. It's just a quibble on ontology anyway, because I mostly agree with their post, and I thought it was well written and thought out.

[–] purpleworm@hexbear.net 2 points 4 weeks ago (1 children)

This does seem to imply replication as the fundamental function of an autopoietic process, at least to me, and that's what I was referencing

Maybe I'm just reading it wrong, but it looks to me like it's all about how selection pressures produce traits seen in individuals because having those traits is better for the survival of the species.

All I was trying to get at is that the appearance of "wanting" to survive, as the original poster put it, isn't related to replication, and attributing the desire to live to an imposition by evolution is inaccurate, because that desire is a direct extension of autopoiesis, which is essential to the organism and exists prior to evolutionary (and replicative) processes.

I don't think amoebas "want" to live; they just do things toward the end of surviving to replicate, with no awareness of anything. It's like machine learning: just a system of reactions that ended up being self-perpetuating via survival and reproduction. That's the essential element, and having any sort of "will" is far, far downstream of that.

Wanting to live is caused by replication because it was developed out of these systems in response to selection pressures.

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago (1 children)


I don't think amoebas "want" to live; they just do things toward the end of surviving to replicate

Yeah, that distinction is probably where we're differing, I think. To me, the end is surviving, and replication is a means to that. But no, that "want" isn't any sort of "will."

Although you could point to the sensory and navigational abilities of predatory amoebas as a kind of awareness, and therefore a "consciousness" in some way, or go down the panpsychist rabbit hole, I don't really believe in either of those things. It's just protosemiosis versus semiosis, or model-free versus model-based, to me. Conscious experience as Bayesian belief dynamics and predictive inference for perception and action, or whatever.
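
For what the model-free versus model-based distinction cashes out to, here's a toy sketch (my own illustration with invented numbers, not Friston's or anyone else's formalism). A hidden binary state emits noisy observations; the model-free agent reacts to the raw stimulus, while the model-based one maintains a Bayesian belief and acts on the posterior:

```python
import random

P_CORRECT = 0.8  # chance an observation matches the true hidden state (invented)

def observe(true_state: int) -> int:
    """Noisy sensor: reports the true state only P_CORRECT of the time."""
    return true_state if random.random() < P_CORRECT else 1 - true_state

def model_free_action(obs: int) -> int:
    """Protosemiosis, roughly: a fixed stimulus -> response mapping, no memory."""
    return obs  # always chase the latest raw stimulus

def bayes_update(belief: float, obs: int) -> float:
    """Semiosis / model-based: update P(state == 1) using the noise model."""
    like1 = P_CORRECT if obs == 1 else 1 - P_CORRECT
    like0 = P_CORRECT if obs == 0 else 1 - P_CORRECT
    return like1 * belief / (like1 * belief + like0 * (1 - belief))

belief, true_state = 0.5, 1
for _ in range(10):
    obs = observe(true_state)
    belief = bayes_update(belief, obs)

# The model-free agent flips whenever the noise flips; the model-based
# agent's belief converges on the true state despite the noise.
print(f"model-free acts on the last raw stimulus: {model_free_action(obs)}")
print(f"model-based belief that state == 1: {belief:.3f}")
```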

[–] purpleworm@hexbear.net 2 points 4 weeks ago (1 children)

I'm a total philistine; half of the words you said just passed over my head. I just don't see anything fundamentally different between amoebas and an electronic light sensor or a roomba or whatever. Certain inputs produce certain outputs, and whether it's chemical or mechanical or anything else is immaterial. You may as well tell me that every massive object "wants" to move toward other massive objects in proportion to the product of their masses and inversely proportional to the square of the distance between them. The fact that one perpetuates an organism's existence and the other doesn't is purely incidental and I think you're effectively projecting a teleology onto it by saying that these reactions by means of which an organism maintains itself are, by that very fact, evidence of a "want."

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago (1 children)


I'm a total philistine; half of the words you said just passed over my head

Dork gibberish, not important. You should look it up if you're ever bored, though; it's interesting stuff.

I just don't see anything fundamentally different between amoebas and an electronic light sensor or a roomba or whatever

The fact that one perpetuates an organism's existence and the other doesn't is purely incidental

I think that's kind of the central difference, to me at least. It's the self-maintenance of the tension between the internal and the external, by reproduction of its own component parts, that is kinda what distinguishes life and organisms from mechanistic objects. As you say, whether it's chemical or mechanical or electrical doesn't really matter.

I think you're effectively projecting a teleology onto it by saying that these reactions by means of which an organism maintains itself are, by that very fact, evidence of a "want."

Uhhhh, well, kinda, yeah. (Not a cognitive "want," btw, but it is sublated into the cognitive want. As the OP puts it, the directionality forms the pre-awareness "want" that is later identified with the experiential want post-awareness. But this goes even further back, prior even to the evolutionary process.) It's the circle that presupposes its own end, like Hegel talks about. It's self-referential. Some authors say that's not teleological and some say it is; I'd say it's teleological, but that's just me. On some level you can say that it's just reactive processes, but at the level of operation of the whole system it gains real significance beyond its appearance as such, due to its autonomy and closure from the surrounding world.

There's a direct throughline between this "want" on the level of a single cell, as in the constant bringing forth of what it lacks for its own continuation, and things like the special process of reproduction, cognition, experience, self-awareness, and social processes. And even lower, below the organism: dissipative structures. Systems constantly implementing themselves as their ends at every level of self-organization. It's just... dialectics. I'm probably not giving a good account of this perspective because I'm not much of a writer or communicator, and I've probably misrepresented some things. You could say I'm just splitting hairs about semantics. I guess that's true too.

[–] purpleworm@hexbear.net 1 points 4 weeks ago (1 children)

I think that's kind of the central difference, to me at least. It's the self-maintenance of the tension between the internal and the external, by reproduction of its own component parts, that is kinda what distinguishes life and organisms from mechanistic objects. As you say, whether it's chemical or mechanical or electrical doesn't really matter.

But that's a statement about the ultimate consequences of what happens, which tells us almost nothing about the proximate nature of the actions. That's what I mean by "projecting a teleology." Consider a mutant amoeba that does something not conducive to self-maintenance: there is nothing inherently different about the nature of those actions as biological processes. It requires a zoomed-out view to explain normal amoebas as conforming to selection pressures and this mutated behavior as deviating from them. "But the amoeba dies!" Yes, but the event of the death later on is not useful for explaining the fundamental nature of the action itself; the death is a distant, emergent consequence of the action, an event distinct from the action (as is successful replication, and even more so the "event" of surviving past that point in the future). You're using teleological reasoning to make some sort of metaphysical claim about events and organisms that fundamentally doesn't make sense from a materialist perspective.

There's a direct throughline between . . . the constant bringing forth of what it lacks for its own continuation, and things like the special process of reproduction, cognition, experience, self-awareness, and social processes.

This, as I have abridged it, is completely true. The issue is that the "want" is just a metaphysical, teleological complication that doesn't help us understand anything and just serves to mystify a mechanical/chemical/electrical process that is already entirely understandable.

It's just... dialectics

Right, it is dialectics. The problem is that it's Hegelian dialectics, which is highly teleological and idealist, and not material dialectics.

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago (1 children)


The problem is that it's Hegelian dialectics, which is highly teleological and idealist, and not material dialectics.

It's not Hegelian dialectics just because I quoted Hegel, but you keep misrepresenting what I have to say, so I'm not going to keep effortposting on this; I should have stopped when I said I would earlier. You can read authors on autopoietic theory and systems thinking, like Varela or Friston, if you want. It's a completely material dialectical analysis and not "teleological" in the way you're referring to. It's just self-referential.

[–] purpleworm@hexbear.net 1 points 4 weeks ago

You're conflating ultimate causes and proximate causes; it's an extremely elementary mistake in understanding behavior. I didn't call it Hegelian merely because you mentioned Hegel (though that did help me make the connection); I called it Hegelian because it has the same ethos of the end existing inside each step of the process, drawing the process along intrinsically, which you cannot claim this theory of "want" lacks.

It's no different than saying massive bodies want to be near each other (prioritized in terms of mass1 × mass2 / distance^2) because they keep exerting force that trends toward that outcome. It's no different than saying that liquids "want" to hold together, they just don't want it very strongly, or that rivers "want" to erode shorelines. With base organisms, it's just input -> output, and the function processing them was created by selection pressures; but there is nothing distinguishing the actual actions, on a proximate level, from ones that are ultimately self-destructive, because the self-destruction only happens later, and a later event on its own cannot be used to establish what was going on inside the base organism at a proximate level to cause the output.

[–] InappropriateEmote@hexbear.net 3 points 4 weeks ago (1 children)

But evolution is actively participated in and directed by the unbroken process of life.

Yes. And?

The need to avoid death is prior to the existence of evolution. It can't be just the result of an imposition on sentient life, because it's a necessary condition of the autopoietic processes that define life itself, of which evolution is an extension.

I'm not seeing how this contradicts anything I said. In fact it supports what I said by recognizing the necessity for a directionality that precedes (and is a prerequisite for) any kind of sentient desire or "wants."

A replicator that is too effective at replicating can dissolve its environment and destroy the conditions that made its existence possible.

@purpleworm@hexbear.net addressed this really well and gave a thoughtful, completely correct response. Not much more for me to say on it.

When the dissipative structures that formed proto-life cordoned off from the world through cell boundaries, avoiding death really did become a need in order to continue. It really is a kind of want, not just the appearance of one (though not a mental want, since there is no mind yet): to maintain tension between the world and itself and to propagate itself.

I think you're splitting hairs here between ever so slightly different aspects of what I have been calling directionality. Desires or "wants" by definition require a mind capable of having a want or desire. Where you say it "really is a kind of want" though "not a mental want, since there is no mind yet," that's simply not the kind of "want" we are talking about here: the thing a genuinely self-aware, mind-possessing AI would have. Everything else really is just an appearance of want, and is a result of what I've been calling directionality. What you're talking about as the mindless "need to avoid death" is still just the mindless, non-intelligent, non-sentient directionality of evolution. And to specifically address this piece:

to maintain tension between the world and itself and to propagate itself.

But it is part of the world (dialectics ftw!). There is a tension between inside and outside the individual cell (and also a tension between the "self" and "outside the self" of a sentient mind, which is addressed further down, but this is not the same thing as the tension between the cell and the world, as proven by the fact that we aren't aware of all our cells and frequently kill them by doing such things as scratching). But the cell still isn't the most basic unit of replication in evolution; that would be the gene: strands of RNA or DNA. Genes (often but not always) use cells as part of the vehicle for their replication, and either way they are still just chemicals reacting with the environment they exist within. There's no more intentionality behind what they do than there is behind, say, a magnet clinging to a fridge. That magnet does not "want" to cling to your fridge; like genes, it is reacting to its environment, and this will be true regardless of where you draw the boundary between the "self" of the magnet and "the outside world." To actually desire something, in the way we are talking about here, requires the complexity of a brain capable of producing a mind.

I don't think it's as much from the neurons themselves as it is the whole inference/action dialectic and the world/organism dialectic. [...] Self-awareness resulted from real material pressures, actually existing relations between organisms, and the need to distinguish the self and the other for appropriate action

Agreed. The emergent property of the mind and sentience comes out of the complexity of the interaction between the firing of neurons in a brain and the world they exist within, at least in all likelihood. We still don't know exactly what produces our ability to experience, or where exactly qualia originate (i.e. why we aren't just philosophical zombies), but I think most neuroscientists (and philosophers who work on this stuff) would agree, as I do, that without an outside, non-self world for those neurons to interact with, there would be no actual mind; perhaps even that the mind just is the drawing of the distinction between self and non-self. But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase "and with the outside world" when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.

I'd also argue that the genuine desire to survive as a psychic phenomenon has always existed at least from the first time a neural organism perceived the world, identical to qualia.

Um, yeah... that's pretty much my argument for why any genuine AI would necessarily have wants and desires: those "wants" would have had to be built in for it to even become AI in the first place.

It's not necessary to have self-awareness for that. Want as a mental phenomenon exists prior to self-awareness

Disagree. Again, if you want to split hairs on exactly where in that ladder of complexity self-awareness arises, or where in the fuzzy chain we can draw a line between organisms capable of self-awareness and those that aren't, or even exactly what constitutes self-awareness, then feel free. But for a thing to have an actual desire, as something genuinely experienced, there has to be some sense of selfhood for that experience to happen to.

[–] semioticbreakdown@hexbear.net 1 points 4 weeks ago

But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase "and with the outside world" when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.

You'd think, but damn, the techbros really have forgotten about this one tbh. I think it's still very relevant to the topic of AI honestly, cause the people who make them keep ignoring that fact. I've seen emergent complexity get thrown around as justification for LLM sentience in some circles, and I don't understand why, when nearly everything in neuroscience and philosophy, as you said, contradicts that. Very frustrating, frankly. Even the term "world model" gets thrown around with LLMs too, and it's soooo aggravating.

I agree with your thoughts on directionality; I was just quibbling on evolution and, yeah, splitting hairs really. I have other thoughts on world models and sentience and selfhood, but they're probably pretty fringe, so I'm not going to share them here.

[–] yogthos@lemmygrad.ml 3 points 4 weeks ago (1 children)

I'd argue that the idea of self-awareness or needs/wants is tangential to the notion of qualia. A system can be self-aware and develop needs without necessarily being conscious and having an internal experience. What needs and wants really boil down to is that the system is trying to maintain a particular state. To maintain homeostasis, the system needs to react to external inputs and take actions that keep it in the desired state. For example, a thermostat could be said to have a "need" to maintain a particular temperature, but it could hardly be argued that it has some sort of qualia.
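
In control terms, that thermostat "need" is nothing more than a set point plus negative feedback. A minimal sketch (all numbers invented):

```python
SET_POINT = 21.0   # desired temperature in degrees C (invented)
TOLERANCE = 0.5    # hysteresis band around the set point (invented)

def thermostat_step(current_temp: float, heater_on: bool) -> bool:
    """Decide whether the heater runs next tick: pure deviation correction."""
    if current_temp < SET_POINT - TOLERANCE:
        return True          # too cold: act to restore the preferred state
    if current_temp > SET_POINT + TOLERANCE:
        return False         # too warm: act the other way
    return heater_on         # within tolerance: keep doing what we were doing

temp, heater = 18.0, False
for tick in range(30):
    heater = thermostat_step(temp, heater)
    temp += 0.4 if heater else -0.2  # toy environment dynamics (invented)

print(f"after 30 ticks: temp={temp:.1f}C, heater={'on' if heater else 'off'}")
```

Everything "want"-like here is in the deviation-correcting loop; there's no obvious place in it for experience.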

Why sentience exists is a really interesting question in itself, in my opinion, as it's not an obviously necessary quality of a self-aware system. I suspect it may be related to having a theory of mind. When a system starts to model itself, then perhaps you end up with some sort of resonance where it thinks about its own thoughts, and that's what creates internal experience.

We also have to define what we mean by intelligence here. My definition would be a system that has a model of a particular domain and is able to make judgments regarding the outcomes of different actions. I don't think mere intelligence requires self-awareness or consciousness.
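
As a sketch of that definition (again my own toy example, not a claim about any particular system): "intelligence" here is just a domain model that predicts the outcome of each action, plus a judgment that ranks those outcomes.

```python
# A domain model predicting the temperature change each action causes
# (invented numbers), plus a judgment over the predicted outcomes.
# No self-awareness or consciousness is involved anywhere.
domain_model = {"heat": +0.4, "cool": -0.3, "idle": -0.1}

def judge(current: float, target: float) -> str:
    """Pick the action whose predicted outcome lands closest to the target."""
    return min(domain_model, key=lambda a: abs(current + domain_model[a] - target))

print(judge(current=19.0, target=21.0))  # -> "heat"
```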

[–] Philosoraptor@hexbear.net 3 points 4 weeks ago

I'd argue that the idea of self-awareness or needs/wants is tangential to the notion of qualia.

This is right. Having things like beliefs and desires is called "intentionality," and is orthogonal to both sentience/sapience and first-person subjectivity (qualia). You can have beliefs and desires without any accompanying qualitative experience and vice versa.