this post was submitted on 03 May 2026
255 points (98.9% liked)

Microblog Memes

11437 readers
2403 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

founded 2 years ago

I'm pulling the "twitter is a microblog" rule even though twitter is pretty mega now, hope that's ok.

you are viewing a single comment's thread
[–] yeahiknow3@lemmy.dbzer0.com 6 points 16 hours ago* (last edited 16 hours ago) (4 children)

Brains aren’t impressive because of their compute (which is both immense and absurdly efficient) or their ability to predict the future (technically the main function of evolved minds). They’re impressive because they’re conscious. The fact that organic brains can also engage in hierarchical abstraction, which no digital computer (or Turing machine) can do by definition, is icing on the cake.

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computing is responsible for consciousness.)

[–] SkaveRat@discuss.tchncs.de 2 points 11 hours ago (1 children)

(The halting problem and Gödel’s incompleteness and Tarski’s undefinability theorems all seem to suggest that analog, not digital, computing is responsible for consciousness.)

I hear that argument from time to time, and I've never found a source for it. I want to understand the original claim, because it doesn't make any sense when people bring it up: those theorems have nothing to do with the areas they're being applied to. I understand why people think they do, but they just don't.

[–] yeahiknow3@lemmy.dbzer0.com 1 points 1 hour ago

The simplest way to understand this problem is as follows.

  1. Analog computation is not digitally reducible. (Brains are analog computers.)

  2. Digital computers (Turing machines) are subject to Turing’s infamous Halting Problem.

I can write more about this and point you to more technical discussions if you want.
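For readers unfamiliar with point 2, the Halting Problem can be sketched concretely. Turing's diagonalization shows that no total `halts(program, input)` decider can exist, because a program that inverts the decider's own verdict about itself produces a contradiction. A minimal Python sketch (`halts` and `contrarian` are hypothetical names; the whole point of the theorem is that `halts` cannot actually be implemented):

```python
def halts(program, arg):
    """Hypothetical decider: returns True iff program(arg) halts.
    Turing's argument shows no such total function can exist."""
    raise NotImplementedError("no algorithm decides this for all inputs")

def contrarian(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # loop forever if predicted to halt
            pass
    return "halted"      # halt if predicted to loop forever

# contrarian(contrarian) contradicts any answer halts could give:
# if halts says it halts, it loops; if halts says it loops, it halts.
```

Whichever answer `halts(contrarian, contrarian)` returns, `contrarian` does the opposite, so no correct `halts` exists.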

[–] psycotica0@lemmy.ca 1 points 12 hours ago (1 children)

You're going to have to do a lot more to justify the leap from Gödel's Incompleteness and the Halting Problem to "digital is limited, analog is not", because neither of those things has anything to do with digital processes at all; in fact, both came about before we'd invented digital computers.

To me this comment sounds like when popsci gets ahold of a few sciency words and suddenly decides everything is crystal vibrations universal harmonics string theory quantum tunneling aligning resonance with those around you.

[–] yeahiknow3@lemmy.dbzer0.com 1 points 1 hour ago* (last edited 1 hour ago)

The situation is the following.

  1. Brains are analog computers, which are digitally irreducible.
  2. There are stringent limitations on Turing machines (digital computers).
  3. We can’t extract semantics from syntax, and so…

We’ll probably need analog computation, currently in its infancy, to get artificial (inorganic) consciousness.

I study metaethics and philosophy of mathematics. These problems are real, and I am being honest with you.

[–] turdas@suppo.fi 2 points 13 hours ago (1 children)

I don't see why there would be any fundamental difference between analog and digital computing. Digital computers can emulate analog computing, and I doubt consciousness arises from having theoretically infinite decimal precision, because in practice analog systems cannot use infinite precision either. Analogs (heh!) of the halting problem and the theorems you mention also exist for analog computing.

Quantum effects in the brain are a slightly more plausible explanation for consciousness, but currently they teeter on magical thinking because we don't really know anything about what they would actually do in the brain. It becomes an "a wizard did it" explanation.

So in the end, we just don't know.
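The emulation point above can be made concrete: a digital machine approximates an analog system by discretizing time, with error bounded by the step size and floating-point precision. A toy sketch (the leaky-integrator model and all names and parameter values are illustrative assumptions, not from the thread):

```python
# Forward-Euler simulation of a leaky integrator, a toy continuous
# ("analog") system: dx/dt = -x/tau + inp.
# All names and parameter values here are illustrative.

def simulate(tau=0.1, inp=1.0, dt=1e-3, steps=2000):
    x = 0.0
    for _ in range(steps):
        x += dt * (-x / tau + inp)  # discretized, finite-precision update
    return x

# The analog steady state is x* = tau * inp; the digital simulation
# converges to it up to discretization and floating-point error.
```

Shrinking `dt` tightens the approximation arbitrarily, which is the sense in which infinite precision isn't doing any special work in the analog version.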

[–] yeahiknow3@lemmy.dbzer0.com 1 points 1 hour ago (1 children)

I don't see why there would be any fundamental difference between analog and digital computing.

Then why not take a course on Theoretical Computer Science? Or do you not care about the differences?

[–] turdas@suppo.fi 1 points 1 hour ago (2 children)

I have a master's degree in computer science.

Obviously I meant "I don't see why there would be any fundamental difference between analog and digital computing [when it comes to consciousness]."

[–] yeahiknow3@lemmy.dbzer0.com 1 points 50 minutes ago* (last edited 33 minutes ago)

The consciousness thing… I would be delighted if we could get a digital system to be conscious. Here are three reasons it’s probably impossible:

  1. We would need to figure out how to collapse semantics into syntax, since digital systems are purely syntactic and consciousness deals with semantics. The consensus is that it’s impossible.
  2. The only examples of conscious systems we have are analog and heavily substrate-dependent — so, making neurons out of any artificial material breaks their functionality.
  3. As Gödel said, “the mind is incapable of mechanizing all of its intuitions.” The first incompleteness theorem means that no computational procedure could exist to determine whether propositions are valid, provable, or even equivalent, and that no matter how you formulate the number-theoretic axioms, a human mathematician would always have insights (for instance, about whether a Diophantine equation has a solution) that are both clearly “true” and obviously unprovable.

It looks like digital systems are too constrained.

Add the Chinese room thought experiment into the mix and it really becomes impossible to see how a Turing machine (by itself, without analog components) could ever be conscious.
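For reference, the theorem invoked in point 3 is standardly stated as follows (this is the textbook formulation, not a quote from the thread):

```latex
% Gödel's first incompleteness theorem (standard statement):
% For any consistent, effectively axiomatized theory $T$ that
% interprets basic arithmetic, there exists a sentence $G_T$ with
%   $T \nvdash G_T$ \quad and \quad $T \nvdash \neg G_T$.
```

Note that the theorem itself only asserts the existence of undecidable sentences for each such theory; whether humans can reliably "see" the truth of those sentences is a further philosophical claim.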

[–] FaceDeer@fedia.io 0 points 16 hours ago (1 children)

I'm still awaiting a widely accepted method of actually measuring "consciousness." It's a conveniently nebulous property.

And simply defining it as something computers can't do is even more convenient.

[–] yeahiknow3@lemmy.dbzer0.com 3 points 16 hours ago* (last edited 16 hours ago) (1 children)

That doesn’t change the fact that I am conscious.

Also, I never said computers can’t be conscious. I said that digital computers (Turing machines) probably can’t. Quantum and analog computers have no such theoretical constraints and they’re far, far more prevalent given that they’re found in every living creature.

[–] FaceDeer@fedia.io 0 points 16 hours ago (1 children)

Sure, you say you're conscious. I can get an LLM to say it's conscious too. This is why we need some method for measuring it. Otherwise how can I tell which of you is telling the truth?

[–] yeahiknow3@lemmy.dbzer0.com 4 points 16 hours ago* (last edited 15 hours ago) (1 children)

This is called the problem of other minds. Of course I can’t be certain about the consciousness of others. I can only be certain about my own.

We do have a way of measuring the correlates of consciousness. But we have no clue how to detect the presence of subjective experience using quantitative methods.

Philosophy departments (which is where any discovery on this front will originate) are heavily defunded. If you’re waiting for physicists or biologists to figure this out you’ll be waiting even longer.

[–] FaceDeer@fedia.io 0 points 15 hours ago (1 children)

Exactly, which is why it's IMO a bit presumptuous to say with confidence that humans are conscious while LLMs are categorically not conscious. We don't even really know what that means.

I don't personally think LLMs are conscious, at least not yet or not to the same degree that humans are. But that's purely based on vibe; it's not something I can know. We need to figure out what consciousness really is and how to measure it before we can say we know this with any certainty.

[–] yeahiknow3@lemmy.dbzer0.com 1 points 15 hours ago* (last edited 1 hour ago) (1 children)

It is not presumptuous at all. Inference to the best explanation is how you know (almost) anything.

  1. This table isn’t conscious.

This is my justified belief. No inferential claim is guaranteed and all objective claims are inferential (which is why scientific claims aren’t absolute).

That said, I have strong reasons to think that tables aren’t conscious. They might be, but I’m epistemically compelled to believe otherwise.

  2. ChatGPT isn’t conscious.

Ditto. It would be irrational for me to believe otherwise given the strong evidence.

That you “don’t know for sure” is an implied disclaimer for every scientific claim.

If the evidence is ambiguous, we say so. Regarding ChatGPT, the evidence is unambiguous.

  3. I am conscious.

This is a non-inferential claim that I know through direct contact with reality. It is a priori.

[–] Micromot@piefed.social 0 points 14 hours ago* (last edited 14 hours ago) (1 children)

This is pretty much what Descartes meant by "cogito ergo sum". The only things you can be sure are 100% real are your thoughts.

[–] psycotica0@lemmy.ca -1 points 12 hours ago (1 children)

Right, your own thoughts. So I can be sure I'm conscious, but you commenting "I know I'm conscious" on here doesn't tell me anything about your consciousness. The robot can do that, and does.

[–] Micromot@piefed.social -1 points 11 hours ago

This is just the stuff you do in philosophy class; there's no right answer, really. You can never be sure that something is conscious, or even that it exists in reality. We can only react to what we perceive.