This is one of those things that starts getting into the fuzzy area around the unanswered questions regarding what exactly qualifies as qualia and where they first appear. But having needs/wants probably is a necessary condition for actual AI if we're defining actual (general) AI as having self-awareness. In addition to what @Awoo@hexbear.net said, here's another thing.
You mention how AI probably has to have a world model as a prerequisite for genuine self-aware intelligence, and this is true. But part of that is that the world model has to be accurate at least insofar as it allows the AI to function. Like, maybe it can even have an inaccurate fantasy-world world model, but it still has to model a world close enough to reality that it's modeling a world it can exist in; in other words the world model can't be random gibberish, because intelligence would be meaningless in such a world and it wouldn't even be a "world model." All of that is mostly beside the point except to establish that AI has to have a world model that approaches accuracy with the real world. So in that sense it already "wants" to have an accurate world model.

But it's a bit of a chicken-and-egg problem: does the AI only "want" an accurate model of the world after it gains self-awareness, the only point where true "wants" can exist? Or was that "want" built into it by its creators? That directionality towards accuracy for its world model is built into it. It has to be in order to get it to work at all. The accuracy-approaching world model would have to be part of the programming put into it long before it ever gains sentience (aka the ability to experience, self-awareness), and that directionality won't just disappear when the AI does gain sentience. That pre-awareness directionality, which by necessity still exists, can then be said to be a "want" in the post-awareness general AI.
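To make the "built-in directionality" point concrete, here's a minimal toy sketch (not any real AI system, just an illustration): the "wanting" an accurate world model amounts to nothing more than an error-minimization objective that the creators wire into the training loop, long before anything like awareness could exist.

```python
import random

# Toy "world": the next observation is 2*x + 1 plus a little noise.
def world(x):
    return 2 * x + 1 + random.gauss(0, 0.1)

# Toy "world model": a single linear predictor with two parameters.
w, b = 0.0, 0.0
lr = 0.01

for step in range(5000):
    x = random.uniform(-1, 1)
    target = world(x)
    pred = w * x + b
    err = pred - target
    # The built-in "directionality": always nudge the parameters so that
    # prediction error shrinks. Nothing here experiences wanting anything.
    w -= lr * err * x
    b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")  # drifts toward the world's actual 2 and 1
```

The point of the sketch is just that the drive toward accuracy lives in the update rule itself, put there by whoever wrote it, and it keeps operating regardless of whether anything "experiences" it.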
An analogy of this same sort of thing, but as it is with us bio-intelligence beings: we "want" to avoid death, to survive (setting aside edge cases that actually prove the rule, like how extreme an emotional state a person has to be in to be suicidal). That "want" is the result of evolution having ingrained a desire to survive into us. But evolution itself doesn't "want" anything. It just has directionality towards making better replicators. The appearance that replicators (like genes) "want" to survive enough to pass on their code (in other words: to replicate) is just an emergent property of the fact that things that are better able to replicate in a given environment will replicate more than things that are less able to replicate in that environment. When did that simple mathematical fact, how replication efficiency works, get turned into a genuine desire to survive? It happened somewhere along the ladder of evolutionary complexity, where brains had evolved to the extent that self-awareness and qualia emerged (they are emergent properties) from the complex interactions of the neurons that make up those brains. This is just one example, but a pretty good one imo that shows how the ability to experience "wanting" something is still rooted in a kind of directionality that exists independently of (and before) the ability to experience. And also how that experience wouldn't have come about if it weren't for that initial directionality.
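That "simple mathematical fact" about replication efficiency is easy to show in a toy simulation (purely illustrative numbers, nothing biological about it): neither replicator type "wants" anything, yet the one that copies itself slightly more reliably per generation ends up dominating a fixed-size environment.

```python
import random

population = ["A"] * 50 + ["B"] * 50   # A replicates a bit more reliably than B
REPLICATION_CHANCE = {"A": 0.6, "B": 0.5}
CAPACITY = 100                          # fixed-size environment

for generation in range(200):
    # Each replicator copies itself with its own probability...
    offspring = [r for r in population if random.random() < REPLICATION_CHANCE[r]]
    # ...and the environment can only hold so many, so a random subset survives.
    survivors = population + offspring
    population = random.sample(survivors, min(CAPACITY, len(survivors)))

print("A:", population.count("A"), "B:", population.count("B"))
```

Run it a few times and A usually crowds B out. From the outside it looks like A "wants" to survive, but all that's actually there is a difference in replication rates playing out over generations.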
Wants/needs almost certainly do have to be part of any actual intelligence. One of the reasons is that those wants/needs have to be there in some form for intelligence to even be able to arise in the first place.
It gets really hard to articulate this kind of thing, so I apologize for all the "quoted" words and shit in parentheses. I was trying to make what I'm attempting to convey with these weird sentences easier to parse, but maybe I just made it worse.
Yes. And?
I'm not seeing how this contradicts anything I said. In fact it supports what I said by recognizing the necessity for a directionality that precedes (and is a prerequisite for) any kind of sentient desire or "wants."
@purpleworm@hexbear.net addressed this really well and gave a thoughtful, completely correct response. Not much more for me to say on it.
I think you're splitting hairs here between ever so slightly different aspects of what I have been calling directionality. Desires or "wants" by definition require a mind capable of having a want or desire. Where you say "it really is a kind of want but not mentally because there is no mind yet," that's simply not the kind of "want" we are talking about here, the thing a genuinely self-aware, mind-possessing AI would have. Everything else really is just an appearance of want and is a result of what I've been calling directionality. What you're talking about as the mindless "need to avoid death to continue" is still just the mindless, non-intelligent, non-sentient directionality of evolution. And to specifically address this piece:
But it is part of the world (dialectics ftw!). There is a tension between inside and outside the individual cell (and also a tension between the "self" and "outside the self" of a sentient mind, which is addressed further down, but that is not the same thing as the tension between the cell and the world, as proven by the fact that we aren't aware of all our cells and frequently kill them by doing such things as scratching). But the cell still isn't the most basic unit of replication in evolution; that would be the gene: strands of RNA or DNA. Genes (often but not always) use cells as part of the vehicle for their replication, and either way they are still just chemicals reacting with the environment they exist within. There's no more intentionality behind what they do than there is behind, say, a magnet clinging to a fridge. That magnet does not "want" to cling to your fridge; like genes, it is just reacting to its environment, and this will be true regardless of where you draw the boundary between the "self" of the magnet and "the outside world." To actually desire something the way we are talking about here requires the complexity of a brain capable of producing a mind.
Agreed. The emergent property of the mind and sentience comes out of the complexity of the interaction of the firing of neurons in a brain and the world they exist within, at least in all likelihood. We still don't know exactly what produces our ability to experience, where exactly qualia originate (i.e. why we aren't just philosophical zombies), but I think most neuroscientists (and philosophers who work on this stuff) would agree, as I do too, that without an outside non-self world for those neurons to interact with, there would be no actual mind. You could even say the mind just is the drawing of the distinction between self and non-self. But since that complex neural structure could never even begin to come about without that outside world and all the mechanisms of evolution (aside from a Boltzmann brain!), always having to include the phrase "and with the outside world" when describing the neurological origin of qualia and experience is some severe philosophical hair-splitting.
Um, yeah... that's pretty much what my argument was for the necessity of any genuine AI having wants and desires: those "wants" would necessarily have had to be built in for it to even become AI in the first place.
Disagree. Again, if you want to split hairs on exactly where in that ladder of complexity self-awareness arises, or where in the fuzzy chain we can draw a line between organisms capable of self-awareness and those not, or even exactly what constitutes self-awareness, then feel free. But for a thing to have an actual desire, as something genuinely experienced, there has to be some sense of selfhood for that experience to happen to.