this post was submitted on 10 Aug 2025
technology
For the purposes of this discussion, that's called a bad replicator: destroying the conditions required for its own replication to continue is not conducive to replication, and a replicator that does that is therefore bad.
If "good replicator" is just defined as personally producing a whole bunch of offspring, then I think it's simply not a helpful term. A good replicator should be something that replicates effectively, not just a lot, and what you are describing as "less effective at replication" is clearly more effective at replication, relatively speaking, if its offspring are still around and its competitors' are not. You would hardly call something a good replicator if it produced an unfathomable number of offspring and then just ate them all, right?
How is this relevant? No one was contradicting this idea, even implicitly, it's just not a meaningful factor in the discussion for the reason you go on to note.
Maybe I'm just reading it wrong, but it looks to me like it's all about how selection pressures produce traits seen in individuals because them having those traits is better for the survival of the species.
I don't think amoebas "want" to live; they just do things toward the end of surviving to replicate, with no awareness of anything. It's like machine learning: it's just a system of reactions that ended up being self-perpetuating via survival and reproduction. That's the essential element, and having any sort of "will" is far, far downstream of that.
Wanting to live is caused by replication because it was developed out of these systems in response to selection pressures.
I'm a total philistine, half the words you said just went over my head, but I just don't see anything fundamentally different between amoebas and an electronic light sensor or a roomba or whatever. Certain inputs produce certain outputs, and whether the mechanism is chemical or mechanical or anything else is immaterial. You may as well tell me that every massive object "wants" to move toward other massive objects in proportion to the product of their masses and inversely proportional to the square of the distance between them. The fact that one perpetuates an organism's existence and the other doesn't is purely incidental, and I think you're effectively projecting a teleology onto it by saying that these reactions, by means of which an organism maintains itself, are, by that very fact, evidence of a "want."
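The "certain inputs produce certain outputs" point can be sketched as a trivial stimulus-response function (the names and numbers here are purely illustrative, not anyone's actual model of an amoeba): the mapping from stimulus to behavior contains no representation of a goal anywhere, whether the substrate running it is chemical, mechanical, or electronic.

```python
def phototaxis_step(light_left: float, light_right: float) -> str:
    """Turn toward the brighter side.

    A pure input -> output mapping: nothing in here encodes
    a "want" to reach the light, only a reaction to stimuli.
    """
    if light_left > light_right:
        return "turn_left"
    if light_right > light_left:
        return "turn_right"
    return "go_straight"
```

An amoeba following a chemical gradient, a light sensor, and a roomba are all, on this view, instances of the same kind of thing: a function like this, differing only in implementation.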
But that's a statement about the ultimate consequences of what happens, which tells us almost nothing about the proximate nature of the actions. That's what I mean by "projecting a teleology." Consider a mutant amoeba that does something not conducive to self-maintenance: there is nothing inherently different about the nature of those actions as biological processes. It requires a zoomed-out view to explain normal amoebas as conforming to selection pressures and this mutated behavior as deviating from them. "But the amoeba dies!" Yes, but that later death is not useful for explaining the fundamental nature of the action itself; the death is a distant, emergent consequence of the action, an event distinct from the action (as is successful replication, and even more so the "event" of surviving past that point in the future). You're using teleological reasoning to make a metaphysical claim about events and organisms that fundamentally doesn't make sense from a materialist perspective.
This, as I have abridged it, is completely true. The issue is that the "want" is just a metaphysical, teleological complication that doesn't help us understand anything and just serves to mystify a mechanical/chemical/electrical process that is already entirely understandable.
Right, it is dialectics. The problem is that it's Hegelian dialectics, which is highly teleological and idealist, and not material dialectics.
You're conflating ultimate causes and proximate causes. It's just an extremely elementary mistake in understanding behavior. I didn't call it Hegelian on the basis that your mentioning Hegel was evidence (though that did help me make the connection); I called it Hegelian because it has the same ethos of the end existing inside each step of the process, drawing the process along intrinsically, and you cannot claim this theory of "want" doesn't do that.
It's no different than saying massive bodies "want" to be near each other (in proportion to mass1 x mass2 / distance^2) because they keep exerting force that trends toward that outcome. It's no different than saying that liquids "want" to hold together, they just don't want it very strongly, or that rivers "want" to erode shorelines. With basic organisms, it's just input -> output, and the function processing them was created by selection pressures; but there is nothing distinguishing the actual actions on a proximate level from ones that are ultimately self-destructive, because the self-destruction only happens later and so, with no further information, cannot be used to establish what was going on inside the organism at a proximate level to cause the output.
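For reference, the law being paraphrased here is Newton's law of universal gravitation:

```latex
F = G \frac{m_1 m_2}{r^2}
```

where $F$ is the attractive force between the bodies, $G$ the gravitational constant, $m_1$ and $m_2$ the masses, and $r$ the distance between them. Nothing in the equation encodes a "want"; it is purely a description of how force varies with mass and distance, which is exactly the point of the analogy.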
Yes. And?
I'm not seeing how this contradicts anything I said. In fact it supports what I said by recognizing the necessity for a directionality that precedes (and is a prerequisite for) any kind of sentient desire or "wants."
@purpleworm@hexbear.net addressed this really well and gave a thoughtful, completely correct response. Not much more for me to say on it.
I think you're splitting hairs here between ever-so-slightly different aspects of what I have been calling directionality. Desires or "wants" by definition require a mind capable of having a want or desire. If you say "it really is a kind of want, but not a mental one, because there is no mind yet," then that's simply not the kind of "want" we are talking about here: the thing that a self-aware (mind-possessing) AI would have if it were genuinely self-aware and possessed of a mind. Everything else really is just an appearance of want and is a result of what I've been calling directionality. What you're describing as the mindless "need to avoid death to continue" is still just the mindless, non-intelligent, non-sentient directionality of evolution. And to specifically address this piece:
But it is part of the world (dialectics ftw!). There is a tension between inside and outside the individual cell (and also a tension between the "self" and "outside the self" of a sentient mind, which is addressed further down, but this is not the same thing as the tension between the cell and the world, as proven by the fact that we aren't aware of all our cells and frequently kill them by doing such things as scratching). But the cell still isn't the most basic unit of replication in evolution; that would be the gene: strands of RNA or DNA. Genes (often, but not always) use cells as part of the vehicle for their replication, and either way they are still just chemicals reacting with the environment they exist within. There's no more intentionality behind what they do than there is behind, say, a magnet clinging to a fridge. That magnet does not "want" to cling to your fridge; like genes, it is reacting to its environment, and this is true regardless of where you draw the boundary between the "self" of the magnet and "the outside world." To actually desire something in the way we are talking about here requires the complexity of a brain capable of producing a mind.
Agreed. The emergent property of the mind and sentience comes out of the complexity of the interaction between the firing of neurons in a brain and the world they exist within, at least in all likelihood. We still don't know exactly what produces our ability to experience, or where exactly qualia originate (i.e., why we aren't just philosophical zombies), but I think most neuroscientists (and philosophers who work on this stuff) would agree, as I do, that without an outside, non-self world for those neurons to interact with, there would be no actual mind; even that the mind is a drawing of the distinction between self and non-self. But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase "and with the outside world" when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.
Um, yeah... that's pretty much what my argument was for the necessity of any genuine AI having wants and desires: those "wants" would necessarily have had to be built in for it to even become AI.
Disagree. Again, if you want to split hairs on exactly where in that ladder of complexity self-awareness arises, or where in the fuzzy chain we can draw a line between organisms capable of self-awareness and those not, or even on exactly what constitutes self-awareness, then feel free. But for a thing to have an actual desire as something genuinely experienced, it requires some sense of selfhood for that experience to happen to.