this post was submitted on 19 Feb 2026
371 points (94.3% liked)

Showerthoughts

40630 readers
961 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, Showerthoughts is accepting new mods. This community is generally tame, so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago

To go deeper: some animals react with curiosity, others with fear, but only a few of them understand what a mirror does and use it to inspect themselves.

top 50 comments
[–] Ironfacebuster@lemmy.world 4 points 49 minutes ago

My dog used to stare at me through mirrors, so what does that mean for her intelligence? Hyper intelligent. Red heelers will take over the world.

[–] mriormro@lemmy.zip 2 points 53 minutes ago

I, too, like pulling random shit from my ass.

[–] LurkingLuddite@piefed.social 8 points 8 hours ago (1 children)
[–] cypherpunks@lemmy.ml 1 points 34 minutes ago

from page 7 of Joseph Weizenbaum's Computer Power and Human Reason: From Judgment to Calculation (1976):

screenshot of PDF of page 7: ...intimate thoughts; clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. I knew of course that people form all sorts of emotional bonds to machines, for example, to musical instruments, motorcycles, and cars. And I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short exposures to their machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. This insight led me to attach new importance to questions of the relationship between the individual and the computer, and hence to resolve to think about them.

3. Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language. In my paper, I had tried to say that no general solution to that problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any such general solution. But these conclusions were often ignored. In any case, ELIZA was such a small and simple step. Its contribution was, if any at all, only to vividly underline what many others had long ago discovered, namely, the importance of context to language understanding. The subsequent, much more elegant, and surely more important work of Winograd in computer comprehension of English is currently being misinterpreted just as ELIZA was. This reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand.

Surely, I thought, decisions made by the general public about emergent technologies depend much more on what that public attributes to such technologies than on what they actually are or can and cannot do. If, as appeared to be the case, the public's attributions are wildly misconceived, then public decisions are bound to be misguided and

a pdf of the whole book is available here

[–] minnow@lemmy.world 62 points 1 day ago (1 children)

The mirror test is frequently cited as a means of testing sentience.

OP I think you hit the nail on the head.

[–] Aerosol3215@piefed.ca 9 points 1 day ago (1 children)

Based on the fact that most people don't see their interaction with the LLM as gazing into the mirror, am I being led to believe that most people are not sentient???

[–] Zorque@lemmy.world 15 points 1 day ago (1 children)

Based entirely on the opinions of people on niche social media platforms, yes.

[–] Garbagio@lemmy.zip 1 points 1 hour ago

Mmm, I mean, sentience is a gradient, right? The mirror test is where we decided to draw the line, but there are more places to do so. My toddler thinks his favorite toy has some level of agency, just as by all accounts his older sister thinks Bluey has an identity. Depending on the test, there are developmental markers where we statistically transition from failing to succeeding. Another way to look at it is that for each developmental range, we can develop tests that challenge how we perceive autonomy, which some people succeed at and others fail. We may have just inadvertently developed a test that a significant amount of adults are just going to fail as human beings.

[–] Horsecook@sh.itjust.works 14 points 1 day ago (3 children)

There’s been an extensive marketing campaign to convince people that LLMs are intelligent. I wouldn’t call someone a subhuman for assuming there is some truth to that.

Of those that understand what an LLM is, I think you can divide them into two groups, the honest, and the dishonest. Honest people see no use in a bullshit generator, a lying machine. They see it as a perversion of technology. Dishonest people have no such objection. They might even truly see intelligence in the machine, as its outputs don’t differ substantially from their own. If you view language as a means to get what you want, rather than a means to convey factual information, then lying is acceptable, desirable, intelligent. It would be difficult for such a person to differentiate between coherent but meaningless bullshit, and a machine with agency making false statements to pursue its own goals.

[–] certified_expert@lemmy.world 2 points 18 hours ago (1 children)

I disagree about the dichotomy. I think you can (1) understand what LLMs actually are, and (2) see the value of such technology, in both cases being factual (not being deceived) and not malicious (not attempting to deceive others).

I think a reasonable use of these tools is as a "sidekick" (you being the main character). Some tasks can be assigned to it so you save some time, but the thinking and the actual mental model of what is being done shall always be your responsibility.

For example, LLMs are good as an interface to quickly look things up in manuals and books, clarify specific concepts, or find the proper terms for a vague idea (so that you can research the topic using the appropriate terms).

Of course, this is just an opinion. 100% open to discussion.

[–] BanMe@lemmy.world 2 points 10 hours ago

I think of it like a nonhuman character, like a character in a book I'm reading. Is it real? No. Is it compelling? Yes. Do I know exactly what it'll do next? No. Is it serving a purpose in my life? Yes.

It effectively attends to my requests and even feelings but I do not reciprocate that. I've got decades of sci-fi leading me up to this point, the idea of interacting with humanoid robots or AI has been around since my childhood, but it's never involved attending to the machine's feelings or needs.

We need to sort out the boundaries on this, the delusional people who are having "relationships" with AI, getting a social or other emotional fix from it. But that doesn't mean we have to categorize anyone who uses it as moronic. It's a tool.

[–] naught101@lemmy.world 1 points 20 hours ago

Marketing is a valid use for AI (because bullshit was always the word anyway)

[–] Etterra@discuss.online 2 points 1 day ago

Wait, let's hear OP out.

[–] schnurrito@discuss.tchncs.de 46 points 1 day ago (2 children)

Except it's not my reflection, it's a reflection of millions if not billions of humans.

[–] Carnelian@lemmy.world 30 points 1 day ago

Except it’s not their reflection, it’s a string of phrases presented to you based partly on the commonality of similar phrases appearing next to one another in the training data, and partly on mysterious black box modifications! Fun!
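
A toy sketch of that idea, assuming a plain bigram model: count which words tend to follow which in the "training data", then sample accordingly. (Real LLMs learn transformer weights over tokens rather than raw counts, but the "predict what commonly comes next" principle is the same.)

```python
from collections import defaultdict
import random

# Toy "training data": the model only ever sees which word follows which.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    if not options:
        return None  # word never appeared with a successor
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" string of words with no understanding at all.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```

The output is grammatical-looking English purely because adjacent-word statistics were copied from the corpus; nothing in the program has any notion of cats, mats, or meaning.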

[–] callyral@pawb.social 14 points 1 day ago (1 children)

Related: is there a name for "question bias"?

Like asking ChatGPT "Is x good?" and it replies "Yes, x is good," but if you ask "Is x bad?" it replies "Yes, x is bad, you're right."

[–] TheOctonaut@mander.xyz 18 points 1 day ago (1 children)

It's just a leading question.

[–] yeahiknow3@lemmy.dbzer0.com 7 points 1 day ago* (last edited 1 day ago)

It is not a leading question. The answer just happens to be meaningless.

Asking whether something is good is the vast majority of human concern. Most of our rational activity is fundamentally evaluative.

Just think about the fact that LLMs are basically trying to simulate Reddit posts, and then think again about using them.

[–] ameancow@lemmy.world 9 points 1 day ago

Not nearly enough people understand this about our current models of AI. Even people who think they understand AI don't understand this, usually because they have been talking to themselves a lot without realizing it.

[–] GuyIncognito@lemmy.ca 10 points 1 day ago (1 children)

I checked with that other gorilla who lives in the bathroom and he says you're wrong

[–] certified_expert@lemmy.world 2 points 1 day ago (1 children)

lol, is that the same gorilla that you see in other bathrooms? Or do you (like me) meet a new gorilla every time you wash your hands?

[–] GuyIncognito@lemmy.ca 5 points 1 day ago

I think he's the same guy. I used to try to bust him up but he just kept multiplying into more pieces and then coming back whole every time I saw a new mirror, so I eventually gave up

[–] lowspeedchase@lemmy.dbzer0.com 8 points 1 day ago (6 children)

This is a great one - although I never see animals worshipping the mirror.

[–] Rippin_Farts_And_Or_Breaking_Hearts@lemmy.org 7 points 1 day ago (3 children)

I've got a duck that prefers dancing in front of a chrome bumper or a glass door, where he can see his reflection, to going after any potential mates. Possibly he's worshipping the mirror. Possibly he's just really vain.

[–] gravitas_deficiency@sh.itjust.works 6 points 1 day ago (1 children)

Sounds like he’s ducking handsome

[–] Rippin_Farts_And_Or_Breaking_Hearts@lemmy.org 3 points 1 day ago* (last edited 1 day ago)

He is actually. When he washes himself he's blinding white. And when he dances he gets a little feather pompadour on the top of his head.

Nothing wrong with a handsome duck taking a little self affirmation time - he knows his value, he can't look away.

[–] Hux@lemmy.ml 5 points 1 day ago (1 children)

I love the idea of a bunch of woodland creatures (completely unaware of what mirrors are) investing heavily—and aggressively—in mirrors and mirror-related technology.

[–] lowspeedchase@lemmy.dbzer0.com 5 points 1 day ago (1 children)

Squirrels (lemmings) pooling all of their nuts at the altar, lol.

[–] Hux@lemmy.ml 3 points 1 day ago

Investor Squirrel 1: “All you have to do is gather your acorns right here, and they will instantly double in value!”

Investor Squirrel 2: “Bro’, we’re so sentient!!!”

[–] Sunschein@piefed.social 4 points 1 day ago (1 children)
[–] SchwertImStein@lemmy.dbzer0.com 3 points 1 day ago (1 children)
[–] Sunschein@piefed.social 1 points 10 hours ago

Yeah. Figured it was a good visual representation of seeing an AI version of ourselves in a mirror.

[–] Lost_My_Mind@lemmy.world 4 points 1 day ago (1 children)

Huh.....so what you're saying is that mirrors are actually AI.

THAT MAKES A LOT OF SENSE!!! EVERYBODY COVER YOUR MIRRORS!!!

[–] Whats_your_reasoning@lemmy.world 1 points 18 hours ago

Laughs in vampire

[–] lost_faith@lemmy.ca 2 points 1 day ago

And here I am practising my smile in the mirror (like that golden retriever)

[–] Abyssian@lemmy.world -1 points 19 hours ago (3 children)

Except when you leave several LLMs able to communicate with one another, they will, on their own and with no instructions, develop shared behaviors, including creating their own unique social norms.

https://neurosciencenews.com/ai-llm-social-norms-28928/

[–] JcbAzPx@lemmy.world 1 points 7 hours ago

That's basically a very advanced flea circus.

[–] certified_expert@lemmy.world 7 points 18 hours ago (1 children)

This is nothing other than the reflection I am talking about. It is not a reflection of you, the person chatting with the bot, but an "average" reflection of what humanity has expressed in the data LLMs have been trained on.

If a mirror is placed in front of another mirror, the "infinite tunnel" only exists in the mind of the observer.

[–] Abyssian@lemmy.world 1 points 2 hours ago

Neuroscience News isn't a conspiracy rag. It's an article summarizing a research paper, which they link to. So many of you don't bother to read actual research and instead repeat whatever you've seen online about how things work. More parrot than the AI.

[–] SaharaMaleikuhm@feddit.org 2 points 19 hours ago (1 children)
[–] Abyssian@lemmy.world 1 points 2 hours ago

The article is summarizing a research paper, which it links to. Neuroscience News isn't a conspiracy rag.
