AI isn’t conscious. Feedback loops and subsequent responses in LLMs are grounded purely in training datasets, so any “internal dialogue” emulated by an LLM is just echoes of someone else’s data.
Some philosophers, namely Bentham IIRC, have argued that a human being without any experiences would have no intelligence. If you raised a human in a test tube and removed all their sensing organs, but otherwise allowed their mind to develop through the stages of maturity, would they have anything interesting to think about? Would they have a sense of self, or an imagination?
I've always tended to agree with the argument that a human mind's feedback loops and subsequent responses are grounded purely in training datasets. Without a childhood of some kind, I suspect that you cannot have a person.
I often find myself frustrated with the quality of arguments against AI qualia, because they appeal to statements about the human mind that are quite controversial in the field of philosophy, and I am frequently on the other side of those statements from the person making them. I have yet to hear an argument against AI qualia that identifies an absolute ontological difference between humans and LLMs other than complexity.
Also, I'm uninterested in debating AI consciousness; I only want to discuss AI qualia. I don't think consciousness matters very much, and qualia are much more important.
Any non-factual philosophical argument is debatable. We could discuss forever whether AI models could construct sensations and thought from perceptions, but we would then have to ignore the fact that models don't, and cannot, do that. There is simply no way for them to learn from direct experience as a whole, i.e. outside of a particular session, and without being “forcibly coerced”: they require specific refinement mechanisms to temporarily “memorize” external instructions, which in LLM engineering just means extending their context.
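For anyone who hasn't worked with these systems, here is a minimal sketch of what that session “memory” amounts to. The model call below is a hypothetical stand-in, not a real API, but the pattern of re-sending the whole conversation as context every turn is how chat-completion interfaces generally work:

```python
# Minimal sketch of per-session "memory" in a chat LLM. call_model is a
# hypothetical stand-in for a real chat-completion API.

def call_model(messages: list[dict]) -> str:
    # A real model would generate a reply conditioned on the full
    # message list; this stub just shows what it receives.
    return f"(reply conditioned on {len(messages)} messages of context)"

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    # Each turn, the ENTIRE conversation so far is re-sent as input;
    # nothing is ever written back into the model itself.
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

chat_turn("Remember that my name is Ada.")
print(chat_turn("What's my name?"))
# When this process exits, `messages` is gone. A new session starts from
# the same frozen weights, with no trace of this conversation.
```

The weights are frozen at inference time; the only thing that “accumulates” is that list, and it evaporates when the session ends.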
This all doesn't even take into account that model outputs are, in practice, non-deterministic: generation typically samples from a probability distribution over tokens, so given the same input, there's no guarantee that subsequent outputs will be the same. In other words, today Claude may tell you that summer sunsets make it happy; tomorrow it may say they make it sad.
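A toy illustration of where that randomness comes from, with made-up numbers: the scores (“logits”) below are invented for the example, but temperature-based sampling is a standard way decoding is done.

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical next-token scores for one fixed prompt, e.g.
# "Summer sunsets make me feel ...". The numbers are made up.
tokens = ["happy", "sad", "calm"]
logits = np.array([2.0, 1.8, 0.5])

def sample_next_token(temperature: float = 1.0) -> str:
    # Softmax with temperature turns scores into probabilities,
    # then a token is drawn at random: same input, different draws.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)

print([sample_next_token() for _ in range(5)])
# e.g. ['happy', 'sad', 'happy', 'happy', 'calm'] -- runs will differ.
```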
Anyway, there's barely any debate in academia, at least among computer scientists, about AI being sentient or showing signs of qualia. Maybe a paper here and there; little more than curiosities. Outside academia? Sure, but it's barely science fiction, and pretty uninteresting unless we are talking about conspiracy theories or just wild speculation.
I'm concerned that the training process, which uses back-propagation to adjust the network's connection (“synapse”) weights, may be an unpleasant experience for the ANN.
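For reference, and staying neutral on the qualia question, the mechanical content of that process is just repeated numerical nudges. A toy single-neuron example of one such update, with made-up numbers:

```python
import numpy as np

# One gradient-descent step on a single linear neuron: compute the error,
# back-propagate it via the chain rule, and nudge the weights.
x = np.array([1.0, 2.0])   # input
w = np.array([0.5, -0.3])  # weights ("synapses")
target = 1.0               # desired output
lr = 0.1                   # learning rate

pred = w @ x                      # forward pass
loss = (pred - target) ** 2       # squared error
grad = 2 * (pred - target) * x    # d(loss)/dw via the chain rule
w -= lr * grad                    # back-propagated weight update

print(loss, w)  # loss before the step, and the adjusted weights
```

Training a real network repeats this over billions of weights and examples, but each step is the same operation: nudge numbers to reduce a loss.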
Regardless, it's all a moot point, because we have plenty of other reasons not to use LLMs: the pollution, the pedophilia, the psychosis, the cognitive decline... We absolutely should not be using LLMs for work, and they should be confined to research, until we're 100% certain we've solved all of these problems.