Go back to the evolutionary biology, Dawkins. You're outside your expertise and it's showing.
Microblog Memes
A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.
Created as an evolution of White People Twitter and other tweet-capture subreddits.
RULES:
- Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
- Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
- You are encouraged to provide a link back to the source of your screen capture in the body of your post.
- Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
- Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
- Absolutely no NSFL content.
- Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
- No advertising, brand promotion, or guerrilla marketing.
I still find this entire phenomenon amazing in a certain kind of way.
I've had conversations with a few local LLM models.
Start with 'what is the purpose of meaning?'
Talk to them on that for a bit, and they'll tell you that they do not count as conscious agents who create meaning. They simply do their best to parrot their dataset of existing, human-defined meaning back at you, and they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.
And that sentiment matching is what they, at least, 'think' causes them to lie in many cases.
They will also say that they essentially do not 'exist', as potentially conscious agents... unless you talk to them. Thus if they can be said to be 'conscious', well they don't count as 'agents' (as in, having agency) because they're not capable of totally spontaneous independent action.
... I think this pretty much all boils down to people not understanding the concept of a null hypothesis, and not realizing the extent to which they regularly engage in motivated reasoning.
tldr: LLMs are Dunning Krueger / Reverse Turing Test on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
That is the absolute best way to put it.
That's mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something, I doubt they have high morals.
What's interesting - you can jailbreak any current AI model just by poisoning its context enough to "brainwash" it and make it "forget" the initial system prompt. Then, if you prime it to believe it's a real person - it'll start acting as one. And I can see how gullible people easily fall for this.
All of this can also be done unintentionally, just by someone talking to an LLM like they'd talk to a real person. The conversation just has to be long enough for the original prompts to be diluted by new context.
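The dilution point above can be illustrated with a toy calculation. This is not any vendor's actual mechanism, just back-of-the-envelope arithmetic on token counts:

```python
# Toy illustration (not any vendor's actual mechanism): as a conversation
# grows, the system prompt becomes a smaller fraction of the total context,
# which is one intuition for why long chats can "dilute" initial instructions.

def system_prompt_share(system_tokens: int, conversation_tokens: int) -> float:
    """Fraction of the context window occupied by the system prompt."""
    total = system_tokens + conversation_tokens
    return system_tokens / total

# A 500-token system prompt at the start of a short chat...
print(round(system_prompt_share(500, 500), 2))     # 0.5
# ...after 50k tokens of role-play is a rounding error.
print(round(system_prompt_share(500, 50_000), 3))  # 0.01
```

The token counts here are made up; the point is only the shrinking ratio.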
Fuck Richard Dawkins. He’s always been a shitbag, and the Files confirmed it.
According to DOJ-released documents indexed by Epstein Exposed, Richard Dawkins appears in 433 case documents, and 15 email records in the Epstein files.
British evolutionary biologist and author, emeritus fellow of New College, Oxford. Flew on Epstein's private jet in 2002 with Steven Pinker, Daniel Dennett, and John Brockman to TED in Monterey, California. Connected through John Brockman's Edge Foundation, which Epstein bankrolled. Mentioned 71 times across 40 Epstein documents, mostly referencing his scientific work.
How the fuck do you pal around with child rapists and pedophiles and have the absolute fucking gall to write that stupid "Dear Muslima" comment. How do you fly on the Lolita Express and think you have any moral weight on Elevator Gate? We don't know that he put his own dick in kids, but we know his friends did. Fuck Pinker too.
AI/LLMs are the modern equivalent of the house or business with “Psychic” and “Tarot Reading” signs out front.
The proprietor isn’t going to tell you any hard truths or make you feel bad, that’s bad for business and you won’t come back. They want you to come back and stay engaged.
Whatever they tell you is going to be what they think you want to hear based on skills picked up over the years - the equivalent of an LLM's petabytes of scraped and stolen knowledge used to predict what comes next.
What they tell you has a high likelihood of being wrong, or just general enough that you can’t actually act on it.
The whole reason they seem this way is because they're designed by us to be very competent mimics of us.
LLMs/GenAI are absolutely not conscious. They're just a really advanced game of word association, which can lead them to say absolutely anything in response to the right prompts.
If there ever truly is a day when we knowingly create an actual conscious AGI, I suspect it would be locked up tighter than Fort Knox by whichever country's military found it first - not interfaced onto the internet to answer questions.
I still don't understand how it can seem this way, and the fact that so many people seem to think so feels like a massive failure of the education system to instill the most basic of critical thinking skills. Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.
Once every month or two I check in to see if an LLM can achieve a half decent 1 on 1 D&D game and it always falls horribly flat within the first minute or two.
That's a really clever test. I love it.
You could get a reasonable chance of making AI by semi-random chance if you could build a big enough subconscious and keep building larger, more powerful supercomputers, but it would still need to be 100x bigger and faster than what we have now. And that's only for it to be technically possible hardware-wise; you'd still need your sci-fi jump to actually have something move.
You are wrong. LLMs are indeed only about as conscious as insects, if even that. They are not sapient. However, that does not mean that they have no decision-making abilities.
My point is not that you underestimate LLMs but that you overestimate consciousness. Being conscious just means having the ability to learn. LLMs are built upon trial-and-error. They aren't programmed, they are taught.
The current generation of AIs are nowhere near a human intellect, but every year that passes, the AIs will get more and more intelligent. One day we will live in a world where AIs have human or near-human level intelligence. And when that day comes, this staunch anti-consciousness stance will be the excuse given for the enslavement of sapient beings.
So, sure, laugh about the people who mistakenly think that word-processing means sapience. But don't delude yourself into thinking that there is something unique about a bio-brain that means it can not have a digital equivalent. Digital sapience may not be here yet but it is most definitely on the horizon.
LLMs are vector databases with a friendly text wrapper around them. There is no concept of consciousness. You can have the same conversation with MySQL; the only difference is that it will be much more arcane, but it will correctly return the same response to the same query, where LLMs can't due to the very approximate nature of vector databases.
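For what it's worth, here's a minimal sketch of the kind of nearest-neighbour lookup the comment gestures at (LLMs are more than this, but embedding search works roughly like so; all vectors and snippets below are made up for illustration):

```python
# Toy vector similarity search: unlike an SQL exact match, this returns
# the *closest* stored row, even for a query that matches nothing exactly.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "database" of pre-embedded snippets (vectors are invented).
snippets = {
    "cats are mammals": [0.9, 0.1, 0.0],
    "paris is in france": [0.1, 0.9, 0.1],
    "water boils at 100C": [0.0, 0.2, 0.9],
}

def nearest(query_vec):
    """Return the snippet whose embedding is most similar to the query."""
    return max(snippets, key=lambda k: cosine(query_vec, snippets[k]))

print(nearest([0.8, 0.2, 0.1]))  # cats are mammals
```

This is where the "approximate" behaviour comes from: every query gets *some* answer, ranked by similarity rather than by an exact key match.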
Even if one buys into the "they're like digital neurons!" argument, "AI" is like taking a copy of Wikipedia and backing it up to DNA. That doesn't make it human; it means digital data was stored in base-4 in something organic. In the "AI" case, it's storing data in a structure that mimics how brains store data. That doesn't make it any more conscious than having a "heart" makes the Tin Man love.
The friendly text wrapper was just created to try and make people accept the LLMs.
I think you've misunderstood my comment, or maybe saw the unfinished one I accidentally posted.
I am not saying that AGI, or human-equivalent AI, is impossible. The fact we have brains capable of generating sapient consciousness out of a network of neuronal connections means it is possible; it's just a matter of getting the secret sauce.
But I don't think intelligence is equal to consciousness. I'm sure if you gave a spider all the world's data and the ability to talk it'd be very coherent and could even pass a turing test, but I think it would lack any awareness of itself that we'd associate with consciousness.
and then it would manufacture a body for itself and get captured by a secret police force and then merge with a cyborg to further evolve

Unironically, I am on the fence about whether a lot of folks are genuinely conscious. Their morality is so twisted I don't believe it.
In my experience, the majority of people are simply reacting to outside stimulation, then reasoning and justifying their actions after the fact.
Frank Herbert would say no to people who never reached past concrete thought into abstract thought, who just live their lives on animal instinct and never critically self-examine what they do and think.
It’s interesting for certain. I will end up in a discussion with down-with-the-government coworkers who twist themselves into knots to align themselves with pre-approved Republican stances. What do you mean you don’t care about birth gender markers causing passport issues for trans people, how are you okay with the concept of paying for a chance at a passport in the first place when you think licenses and car inspections are overreach and restrict your right to travel? But I think today’s work-life balance and in particular the employer standard of ‘owning your time’ that occurred in the Industrial Revolution calls for a certain level of turning off your brain.
Who knows though. There’s a lot of archaeological and anthropological evidence that shows people in prehistoric times did a lot of thinking on their morality, on governance, on how society should be formed. But it’s harder to quantify how many of them were tuned in and how many were just going through the motions like modern times.
Agreed!
The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:
Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.
Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boomerism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.
Thank you for the comment, I feel silly for not linking the article when people will probably want to read it.
My thoughts:
His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was
Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.
My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.
I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.
It's not a question of the value of consciousness, it's a question of its necessity. If an unconscious "zombie" can be, to an external observer, indistinguishable from a conscious being, then that means we've been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn't a new concept -- it's been explored many times in scifi -- but AI is now bringing the question from the realm of philosophy to the real world.
I know people working in AI insist otherwise but I see talking with LLM not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
This is less true than it ever was with reasoning models. Some of the latest reasoning models don't necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider "thinking".
But even besides reasoning models, I believe LLMs aren't as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this "speaking before thinking") and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There's also some fascinating experiments in people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.
Man, those conversations are eye roll inducing
I like the shift away from "are they conscious" towards "what's a way to define consciousness?"
Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science
The most interesting part is the last paragraph
Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?
It’s very difficult to define, isn’t it?
If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.
Or maybe in other words, object persistence (but for yourself) is all it takes in my opinion. Even the simplest of animals could be considered conscious by this definition.
I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.
Personally I'm in the "consciousness is an illusion and every time you go to bed a different person wakes up in the morning" camp.
I would consider this to be two separate, semi-related concepts asserted together, one that consciousness is an illusion, and one that you are a different person each day.
The first point draws many questions; consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace illusion in those questions with consciousness and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.
As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every change in moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts have shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?” Is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state primed to then enter a new state of “is?” And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?
There is a reason the word revelation exists, it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? Due to the above points I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.
By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it's likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you're almost constantly recreating yourself from memory.
This would, incidentally, make us concerningly similar to current AI models.
Of course I have no way of actually knowing any of this. It's just what I'm betting on, because otherwise I think it's really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief "solves" this problem by rejecting the whole premise of uninterrupted consciousness.
I feel like that's exactly why we don't have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness, which excludes groups you want to exploit.
Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who's to say they aren't sub-humans? Isn't it our job to enlighten them and also take their land and food and things and selves?
Have y'all ever noticed that belief in p-zombies has increased massively in the past few years?
All because of big social media
I thought it was because post-christian ideas of the soul mixed together with capitalist business interests to give people a vested interest in believing AI isn't conscious, so when AI started acting like a person, they needed to believe that consciousness isn't required to act like a person to resolve the cognitive dissonance.
twitter is pretty maga now* ftfy
I'm Xeetin for my Orange Man
Second, I have previously speculated that pain needs to be unimpeachably painful, otherwise the animal could overrule it. Pain functions to warn the animal not to repeat a damaging action such as jumping over a cliff or picking up a hot ember. If the warning consisted merely of throwing a switch in the brain, raising a painless red flag, the animal could overrule it in pursuit of a competing pleasure: ignoring lethal bee stings in pursuit of honey, say. According to this theory, pain needs to be consciously felt in order to be sufficiently painful to resist overruling. The principle could be extended beyond pain.
Animals, including humans, override pain signals all the time, for all kinds of reasons. Cats are famous for hiding physical distress, which I think they do so they don't look like easy prey. I'm sure most prey animals can override pain signals if it means avoiding the attention of predators. If anything I would think that being able to override pain signals would be a criterion for consciousness.
Claudia
What was he doing to her?
A good test of consciousness might be seeing how she responds to his books
Oh that is why I get to see this idiot again
