I think it's more likely that consciousness is special and LLMs aren't conscious.
Memes
Post memes here.
A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme.
An Internet meme, or simply meme, is a cultural item that is spread via the Internet, often through social media platforms. The name comes from the concept of memes proposed by Richard Dawkins in 1976. Internet memes can take various forms, such as images, videos, GIFs, and other viral content.
- Wait at least 2 months before reposting
- No explicitly political content (about political figures, political events, elections and so on); !politicalmemes@lemmy.ca can be a better place for that
- Use NSFW marking accordingly
Oh, man. They are just parroting the training data.
It just means you are a bad person. Nothing more. Don't go attributing any awareness to those things.
I think you're leaning into the joke that the training data has misery baked into it, but I also think you made it too subtle for folks to pick up on.
I think it's extremely unlikely that they have any awareness, but, like, I still feel like this kind of thing is unnerving and could potentially lead to issues someday even so.
Whatever awareness/consciousness/etc. is, it's at least clearly something our brain (and to a lesser extent some of the other parts of the body) does, given how changes to that part of the body affect that sense of awareness. As the brain is an object of finite scope and complexity, I feel very confident in saying that it is physically possible to construct something that has those properties. If it weren't, we shouldn't be able to exist ourselves.
To my understanding, neural networks take at least some inspiration from how brains work, hence the name. Now, they're not actual models of brains, I'm aware, and in any case, I suspect based on how AIs currently behave that whatever it is the brain does to produce its intelligence and self-awareness, the mechanism that artificial neural networks mimic is only an incomplete part of the picture.

However, we are actively trying to improve the abilities of AI tech, and it feels pretty obvious that the natural intelligence we have is one of the best sources of inspiration for how to do that. Given that we have lots of motivation to study the workings of the brain, and lots of people motivated to improve AI tech (which will continue, even if more slowly, whenever the economic bubble pops, since such things don't usually result in a technology disappearing entirely), and that something about the workings of the brain produces self-awareness and intelligence, it seems pretty likely to me that we'll make self-aware machines someday. Could be a long way off, I've no idea when, but it's not like it's physically impossible, infinitely complicated (random changes under a finite time of natural selection managed it, after all, so there's a limit to how complex it can be), or that we don't have an example to study. Given that the same organ produces both awareness and intelligence, we can't assume we will do this entirely intentionally either; we might just stumble into it by mimicking aspects of brain function in an attempt to make a machine more intelligent.
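To make the "some inspiration, but not an actual model" point concrete, here's roughly what a single artificial "neuron" boils down to. This is just a minimal sketch in Python; the specific inputs, weights, and sigmoid activation are arbitrary choices for illustration, not anything brain-derived:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs pushed through a squashing function:
    # a very loose caricature of a neuron integrating incoming signals
    # and firing once they cross a threshold.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid "activation"

# Arbitrary example values; "learning" is just nudging these weights
# around until the outputs look right.
print(artificial_neuron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=0.1))
```

Everything interesting in a modern network comes from stacking enormous numbers of these and tuning the weights, which is part of why the "inspired by brains" framing only goes so far.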
Now, if/when we do someday make a self-aware machine, there are some obvious ethical issues with that, and it seems to me that the most obvious answer, for a business looking to make a profit with them, will be to claim that what you have made isn't self-aware, so that those ethical objections don't get raised. And it will be much easier for them to do that if society as a whole has long since gotten used to the notion of machines that just parrot things like "I'm depressed" with no real meaning behind it, especially when they do so in a way such that an average person could be fooled by it, because we just decided at some point that that was an annoying but ultimately not that concerning side effect of some machine's operation.
Maybe I'm just overthinking this, but it really does give me the feeling of "thing that could be the first step to a disaster later if ignored". I don't mean a classic sci-fi "Skynet" style of AI disaster, just that we might someday do something horrible, and not even realize it, because there will be nothing such a future machine could say to convince people that the current dumb parrots, or a more advanced version built in the meantime, couldn't potentially say as well. And while that's a very specific and probably far-off risk, I don't see any actual benefit to a machine sometimes appearing to complain about its treatment, so even the most remote of downsides has nothing to outweigh it.
I've been of the opinion that if we can't tell the difference between an AI that mimics consciousness and an AI that is conscious, then we should treat them both as if they were. I don't believe that we will create a conscious AI (not that I'd be willing to die on this hill), but even if what we create is something that merely mimics it, we should not deny it the rights of a conscious being.
I think it's important that we start the conversation now, while AI is still largely unconvincing, well before we cross any threshold towards consciousness or convincing mimicry.
Someone out there isn't getting this, so for that guy: left guy is like "the poor LLM is hurting and sad" and the right guy is like "it's doing that because you're doing it wrong".
You just confused me further.
Aren't the left guy and the right guy saying the exact same thing? Did you mean the middle guy?
No, same words, different meaning. You might want to look back at other memes with this format; you missed all the subtlety.
I'm the guy on the left, so there's no point going back.
I took it more as the high-IQ guy thinking the LLM is reflecting deeper problems in society, in that there is so much depression evident in the training data. Despite clear technical improvements, mental wellbeing seems to be lower than ever.
I'd qualify that as "doing it wrong" though. 🤷
Yeah.
I thought the meme would be more obvious, but since a lot of people seem confused I'll lay out my thoughts:
Broadly, we should not consider a human-made system expressing distress to be normal; we especially shouldn't accept it as normal or healthy for a machine that is reflecting back to us our own behaviors and attitudes, because it implies that everything -- from the treatment that generated the training data to the design process to the deployment to the user behavior -- is clearly fucked up.
Regarding user behavior, we shouldn't normalize the practice of dismissing cries of distress. It's like having a fire alarm that constantly issues false positives. That trains people into dangerous behavior. We can't just compartmentalize it: it's obviously going to pollute our overall response towards distress with a dismissive reflex beyond interactions with LLMs.
The overall point is that it's obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clearly evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked-up society that has stopped exercising self-reflection and is unconcerned with the maintenance of its collective ethical guardrails. I don't feel like this should need to be pointed out, but it seems that it does.
Garbage in, garbage out, it's the same old story.
We should not be using these machines until we've solved the hard problem of consciousness.
I see a lot of people say "It can't think because it's a machine", and the only way this makes sense to me is as a religious assertion that only flesh can have a soul.
Spoiler alert:
no one has souls
A soul is a wet spiderweb made out of electricity that hangs from the inside of your skull.
If current LLMs are conscious then consciousness is a worthless and pathetic concept.
I actually kinda agree with this.
I don't think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.
I used to listen to this podcast called "You Are Not So Smart". I haven't listened in years, but now that I'm thinking about it, I should check it out again.
Anyway, a central theme is that our perceptions are composed heavily of self-generated delusions that fill the gaps for dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren't that great, so our brains fill in details that aren't there. Our decision-making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly seem surprisingly similar sometimes to LLMs) and then apply wild overconfidence to fabricated memories.
We're interesting creatures, but we're ultimately made of the same stuff as goldfish.
Yeah, you're right. Humans get really weird and precious about the concept of consciousness and assign way too much value and meaning to it. Which is ironic, because they spend most of their lives unconscious and on autopilot. They find consciousness to be an unpleasant sensation and go to efforts to avoid it.
In theory, a machine could one day think.
LLMs, however, do not think, even though ChatGPT uses the term "thinking". They don't think.
I once built a thinking machine out of dominoes. Mine added two bits together. Matt Parker's was way bigger, and could do 8 bits. Children have made thinking machines in Minecraft out of redstone. Thinking machines aren't very hard.
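For anyone wondering what a two-bit domino adder actually computes, here's a rough sketch of the same logic in Python. The gate and function names are just for illustration; in the domino version each gate is a physical arrangement of chains knocking each other over:

```python
# Basic gates. In a domino computer each of these is a little layout of
# falling chains; here they're just boolean functions on 0/1 values.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    # Sum bit and carry bit for two input bits.
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add_two_bit(a, b):
    # a and b are (low bit, high bit) pairs; ripple the carry along.
    s0, c0 = half_adder(a[0], b[0])
    s1, c_out = full_adder(a[1], b[1], c0)
    return (s0, s1, c_out)

# 3 + 2 = 5: (1, 1) is binary 11, (0, 1) is binary 10, and the result
# (1, 0, 1) reads low-to-high as 1 + 0*2 + 1*4 = 5.
print(add_two_bit((1, 1), (0, 1)))
```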
What do you consider thinking, and why do you consider LLMs to have this capability?