this post was submitted on 03 May 2026
220 points (98.7% liked)

Microblog Memes

11431 readers
1834 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago

I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

[–] turdas@suppo.fi 29 points 14 hours ago* (last edited 14 hours ago) (3 children)

The actual article isn't nearly as stupid as the tweet makes it seem. I recommend giving it a read. It's behind a shitty paywall, but if you use Firefox's reader mode (Ctrl-Alt-R, or the little paper icon on the right side of the address bar) as soon as the page loads, you can read it.

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren't conscious, then perhaps consciousness isn't as important as we thought it was:

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers.

Some people will surely contest his claim that LLMs are as competent as evolved organisms. There's definitely a bit of AI boosterism at play here (we have benchmarks that show just how incompetent LLMs can be), but I don't think that invalidates his point, because LLMs can be very competent in the domains they're trained to be competent in -- they just aren't AGI.

[–] SkaveRat@discuss.tchncs.de 31 points 14 hours ago* (last edited 14 hours ago) (3 children)

Man, those conversations are eye roll inducing

I like the shift away from "are they conscious" towards "what's a way to define consciousness?"

Because that's the actual important question. And literally nobody can answer it. Any discussion is more philosophy than hard science

The most interesting part is the last paragraph

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

[–] pennomi@lemmy.world 12 points 14 hours ago (2 children)

It’s very difficult to define, isn’t it?

If I were to give it a shot, I’d say that consciousness is akin to proprioception - the ability to know the state of oneself and understand how actions taken will change that state. It has very little to do with intelligence, just the “sense of being”.

Or maybe, in other words: object permanence (but for yourself) is all it takes, in my opinion. Even the simplest of animals could be considered conscious by this definition.

[–] queerlilhayseed@piefed.blahaj.zone 9 points 13 hours ago (2 children)

I think, when we finally do have a generally-accepted definition of consciousness, we will be deeply unsettled by how simple it is. How unprofound. Like a magic trick after you know how it works. And I think it will require us to think hard about what to do with animals and software that have it.

[–] turdas@suppo.fi 2 points 10 hours ago (1 children)

Personally I'm in the "consciousness is an illusion and every time you go to bed a different person wakes up in the morning" camp.

[–] Jaycifer@piefed.social 4 points 5 hours ago (1 children)

I would consider this to be two separate, semi-related concepts asserted together: first, that consciousness is an illusion, and second, that you are a different person each day.

The first point raises many questions: consciousness is an illusion of what? What mechanism causes the illusion? How does it cause it? Why does the illusion exist? And you may note that you could replace "illusion" in those questions with "consciousness" and be left in a similar (though still distinct) place. So simply calling consciousness an illusion seems to me to kick the can down the road without actually addressing the problem.

As for being a different person after a lapse in awareness, I’d like to take it a step further and say that you could be considered a new person with every passing moment. It’s easy enough to look back 10 years and say “yeah, that’s a younger me, but they’re not the same as me; I can just see the path that led to where I am now.” Getting closer, you may feel different today compared to yesterday depending on various factors (sleep, diet, events), but are you a different person because you slept and had a lapse of awareness, or because the state of your mind and thoughts has shifted? When your internal monologue (or equivalent thought) asks “what is this guy talking about?”, is it not thinking “what” in a brand new context given the words it is responding to, forming a new beginning to a thought that puts the mind in a unique state, primed to then enter a new state of “is”? And if the mind is in a unique state of novelty, could the person attached to the mind be considered distinct from the person that existed before?

There is a reason the word revelation exists: it indicates when a person has a novel thought that changes their perspective or way of thinking, altering who they are. Would they not be a new person despite being aware of the process of their change? For the above reasons I don’t think new personhood only occurs at sleep, but constantly. The rate of change may quicken or slow, but the change is always there.

[–] turdas@suppo.fi 1 points 4 hours ago

By consciousness being an illusion I mean that we place great value on the uninterrupted continuation of our consciousness, but I think it's likely that it (exactly as you suggest) only really exists in the moment. The illusion would then be the illusion that consciousness is uninterrupted, when in reality you're almost constantly recreating yourself from memory.

This would, incidentally, make us concerningly similar to current AI models.

Of course I have no way of actually knowing any of this. It's just what I'm betting on, because otherwise I think it's really hard to explain any unconsciousness (be it sleep, general anesthesia, suspended animation or the Star Trek transporter) as anything short of death. My belief "solves" this problem by rejecting the whole premise of uninterrupted consciousness.

[–] trem@lemmy.blahaj.zone 5 points 12 hours ago (1 children)

I feel like that's exactly why we don't have a generally-accepted definition of consciousness. Western ethics assigns special protection to whatever is conscious, so it is convenient to come up with a definition of consciousness that excludes the groups you want to exploit.

Tale as old as time, or at least the conscious idea of time. Whatever consciousness is, we are it. Those humans over there though? Who's to say they aren't sub-humans? Isn't it our job to enlighten them and also take their land and food and things and selves?

[–] halfapage@lemmy.world 3 points 12 hours ago (1 children)

eh, I'm nitpicking, but you could argue that even microcontrollers are conscious then, because they "know" their state and act as they were set to based on that "knowledge"

we are clueless about what consciousness and knowing are; if we weren't, we would know by now lol

[–] pennomi@lemmy.world 3 points 6 hours ago

Yeah, I’m not entirely sure that microcontrollers aren’t conscious. If insects (and maybe plants and fungi) are conscious, a lot of mundane stuff we’ve built could technically be as well.

I think we need to get away from the idea that consciousness is special or rare.

[–] Godwins_Law@lemmy.ca 1 points 9 hours ago (1 children)

Blindsight by Peter Watts is a great sci-fi novel about consciousness

[–] SkaveRat@discuss.tchncs.de 0 points 6 hours ago (1 children)

it's on my to-read list.

Right now I'm listening to Children Of Strife, a series that also goes quite deep into consciousness and sapience.

[–] khannie@lemmy.world 1 points 4 hours ago

I have that but haven't started it yet. The second in the series is one of my all time favourites.

"We're going on an adventure"

[–] FinjaminPoach@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

Thank you for the comment, I feel silly for not linking the article when people will probably want to read it.

My thoughts:

His argument is basically that LLMs are able to do things we previously thought only conscious beings would be capable of doing, and so, if they aren’t conscious, then perhaps consciousness isn’t as important as we thought it was

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism.

I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.
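That "selecting a continuation" view is easy to caricature in code. Here's a toy bigram model (nothing like a real LLM, purely an illustration of "pick the most likely next word"); the corpus and function names are made up for the example:

```python
from collections import Counter, defaultdict

# Toy "continuation selector": count which word follows which in a tiny
# corpus, then continue a prompt by greedily picking the most frequent
# successor at each step.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break  # dead end: this word never appeared mid-corpus
        word = successors[word].most_common(1)[0][0]  # greedy pick
        out.append(word)
    return " ".join(out)

print(continue_text("the"))
```

A real LLM replaces the frequency table with a neural network conditioned on the whole context, but "score the possible continuations, pick one" is still the basic output step.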

[–] turdas@suppo.fi 2 points 6 hours ago

Seems like an "evil" and dangerous talking point. To me, the value of consciousness isn't in its evolutionary efficiency.

It's not a question of the value of consciousness, it's a question of its necessity. If an unconscious "zombie" can be, to an external observer, indistinguishable from a conscious being, then that means we've been overestimating the importance of consciousness for intelligence. Like Dawkins says in the article, there could be unconscious aliens out there who are nonetheless as intelligent as (or more intelligent than) humans. This isn't a new concept -- it's been explored many times in scifi -- but AI is now bringing the question from the realm of philosophy to the real world.

I know people working in AI insist otherwise, but I see talking with LLMs not as them thinking, but as them selecting the right combination of data that correctly continues a conversation.

This is less true than it ever was with reasoning models. Some of the latest reasoning models don't necessarily even reason in English anymore but rather an eclectic mix of languages. The next step after that is probably going to be running the reasoning in latent space (see e.g. Coconut), which basically means the model skips the language generation layer altogether and feeds lower-level state back into itself. Basically it is getting closer and closer to what most humans consider "thinking".
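The difference can be caricatured in a few lines. This is a toy sketch with made-up vectors and a stand-in "model", not anything from Coconut itself: the token-space loop squeezes every step through a discrete word, while the latent-space loop feeds the continuous state straight back.

```python
# Toy contrast between token-space and latent-space "reasoning" loops.
VOCAB = ["yes", "no", "maybe"]

def step(hidden):
    """Pretend transformer step: mix the hidden state a little."""
    return [0.9 * h + 0.1 * i for i, h in enumerate(hidden)]

def decode(hidden):
    """Project the hidden state onto the vocabulary (argmax)."""
    return VOCAB[hidden.index(max(hidden))]

def encode(token):
    """Re-embed a token as a one-hot hidden state."""
    return [1.0 if v == token else 0.0 for v in VOCAB]

h = [0.2, 0.5, 0.3]

# Token-space loop: decode to language, then re-encode, so all nuance in
# the hidden state is flattened to a single discrete token every step.
token_h = h
for _ in range(3):
    token_h = step(encode(decode(token_h)))

# Latent-space (Coconut-style) loop: feed the continuous state straight
# back, so information survives between steps and the answer can drift.
latent_h = h
for _ in range(3):
    latent_h = step(latent_h)

print(decode(token_h), decode(latent_h))
```

In this toy run the two loops end up at different answers, which is the whole point: the discrete bottleneck throws information away that the latent loop keeps.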

But even besides reasoning models, I believe LLMs aren't as different from human language production as many people think. The human speech centre, in a way, also just selects the right combination of data to continue a conversation. It frequently even hallucinates (we call this "speaking before thinking") and makes stupid mistakes (we provoke these with trick questions like those on the Cognitive Reflection Test). There are also some fascinating experiments on people who have had the connection between their brain hemispheres severed that really suggest our speech centre is just making things up as it goes along.

[–] FaceDeer@fedia.io 2 points 13 hours ago (1 children)

As LLMs have developed and have been able to cram more and more "thoughtlike" behaviour into smaller RAM and less computation, I've steadily become less impressed with human brains. It seems like the bits we think most highly of are probably just minor add-ons to stuff that's otherwise dedicated to running our big complicated bodies in a big complicated physics environment. If all you want to have is the part that philosophizes and solves abstract problems and whatnot then you may not actually need all that much horsepower.

I'm thinking consciousness might also turn out to be something pretty simple. Assuming consciousness is even a particular "thing" in the first place and not just a side effect of being able to predict how other people will behave.

[–] yeahiknow3@lemmy.dbzer0.com 4 points 11 hours ago* (last edited 11 hours ago) (4 children)

Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computing is responsible for consciousness.)

[–] SkaveRat@discuss.tchncs.de 1 points 6 hours ago

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computing is responsible for consciousness.)

I hear that argument from time to time, and I've never found a source for it. I want to understand the original claim, because it doesn't make any sense when people bring it up: those theorems don't have anything to do with the areas they're being applied to. I understand why people think they do, but they just don't.

[–] psycotica0@lemmy.ca 1 points 7 hours ago

You're going to have to do a lot more to justify the leap from Godel's Incompleteness and the Halting Problem to "digital is limited, analog is not", because neither of those things have anything to do with digital processes at all, and in fact both came about before we'd invented digital computers.

To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.

[–] turdas@suppo.fi 2 points 8 hours ago

I don't see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.
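The emulation point is just ordinary numerical computing. For example, a few lines can digitally simulate an RC circuit (a classic analog computing element) to arbitrary precision; the component values and step size here are arbitrary illustrations:

```python
import math

# Digitally emulate an analog RC low-pass circuit charging toward V_in.
# Continuous law: dV/dt = (V_in - V) / (R*C). We discretize with Euler
# steps; shrinking dt brings the digital emulation arbitrarily close to
# the analog behaviour.
R, C, V_in = 1.0, 1.0, 1.0     # arbitrary illustrative component values
dt, t_end = 1e-4, 1.0

v = 0.0
for _ in range(round(t_end / dt)):
    v += dt * (V_in - v) / (R * C)

exact = V_in * (1 - math.exp(-t_end / (R * C)))  # closed-form analog answer
print(f"simulated={v:.6f} exact={exact:.6f} error={abs(v - exact):.2e}")
```

With dt = 1e-4 the digital result already agrees with the continuous solution to within a few parts in a hundred thousand, and halving dt roughly halves the error.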

Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don't really know anything about what they would actually do in the brain. It becomes an "a wizard did it" explanation.

So in the end, we just don't know.

[–] FaceDeer@fedia.io 1 points 11 hours ago (1 children)

I'm still awaiting a widely accepted method of actually measuring "consciousness." It's a conveniently nebulous property.

And simply defining it as something computers can't do is even more convenient.

[–] yeahiknow3@lemmy.dbzer0.com 2 points 11 hours ago* (last edited 10 hours ago) (1 children)

That doesn’t change the fact that I am conscious.

Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.

[–] FaceDeer@fedia.io 1 points 10 hours ago (1 children)

Sure, you say you're conscious. I can get an LLM to say it's conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?

[–] yeahiknow3@lemmy.dbzer0.com 3 points 10 hours ago* (last edited 10 hours ago) (1 children)

This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.

We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.

Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.

[–] FaceDeer@fedia.io 1 points 10 hours ago (1 children)

Exactly, which is why it's IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don't even really know what that means.

I don't personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that's purely based on vibe, it's not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.

[–] yeahiknow3@lemmy.dbzer0.com 0 points 10 hours ago* (last edited 10 hours ago) (1 children)

It is not presumptuous at all. Inference to the best explanation is how you know (almost) anything.

  1. This table isn’t conscious.

This is my justified belief. No inferential claim is guaranteed and all objective claims are inferential (which is why scientific claims aren’t absolute).

That said, I have strong reasons to think that tables aren’t conscious. They might be, but I’m epistemically compelled to believe otherwise.

  2. ChatGPT isn’t conscious.

Ditto. It would be irrational for me to believe otherwise given the strong evidence.

That you “don’t know for sure” is an implied disclaimer for every scientific claim.

If the evidence is ambiguous, we say so. Regarding ChatGPT's consciousness, the evidence is unambiguous.

  3. I am conscious.

This is a non-inferential claim that I know through direct contact with reality. It is a priori.

[–] Micromot@piefed.social 0 points 9 hours ago* (last edited 9 hours ago) (1 children)

This is pretty much what Descartes meant with "cogito ergo sum". The only things you can be sure are 100% real are your thoughts.

[–] psycotica0@lemmy.ca 0 points 7 hours ago

Right, your own thoughts. So I can be sure I'm conscious, but you commenting "I know I'm conscious" on here doesn't tell me anything about your consciousness. The robot can do that, and does.