this post was submitted on 05 Feb 2026
174 points (85.4% liked)

Showerthoughts

40297 readers
575 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts:

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far: Showerthoughts is accepting new mods. This community is generally tame, so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed a report, the message goes away and you never have to worry about it.

founded 2 years ago
[–] Sunsofold@lemmings.world 8 points 2 days ago (2 children)

No love for LLMs from me, but flatly: no. Asking a question is soliciting a response. Their response is not the one you wanted, but it is solicited. It would be like asking for a dick pic from someone whose penis you were interested in seeing, and them responding with a generated image from one of the unfiltered image generators.
The intellectual equivalent of an unsolicited dick pic is probably spam advertising: a piece of media sent to someone who did not request it, by someone who does not care whether the recipient wants to receive it.

[–] Mohamed@lemmy.ca 1 points 2 days ago

Totally agree. It's nowhere near the level of a dick pic - a dick pic is sexual harassment.

[–] raspberriesareyummy@lemmy.world -2 points 1 day ago (1 children)

We've gone into this in detail in the other threads. If you send someone LLM output, you're a shitty friend/colleague/whatever.

[–] dream_weasel@sh.itjust.works 1 points 1 day ago

And yet still in no way equivalent to a dick pic. The equivalence here is "raspberriesareyummy doesn't like that", which doesn't exactly pass muster, even for a shower thought.

[–] mushroommunk@lemmy.today 87 points 3 days ago (11 children)

I recently read something in an article that struck me as the heart of it, and it fits.

"Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read." - Dan Brooks

[–] stepan@lemmy.cafe 39 points 3 days ago (2 children)

That's something I've attempted to say more than once but never formulated this well.

Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I'm looking at a well-written technical document or crap resembling one. It's especially hard when I'm very new to the topic.

Paradoxically, AI slop has made me actually read the official documentation much more, as it's now easier than doing this AI-checking. And also personal blogs, where it's usually clearly visible that they're someone's beloved little digital garden.

[–] saltesc@lemmy.world 11 points 3 days ago

That's something I've attempted to say more than once but never formulated this well.

Did you try ChatGPT?

[–] mushroommunk@lemmy.today 4 points 3 days ago (1 children)

Funny how people whose job it is to write can sometimes write gooder than us common folk.

[–] stepan@lemmy.cafe 3 points 2 days ago

funny for the writer elite maybe >:(

[–] raspberriesareyummy@lemmy.world 16 points 3 days ago (1 children)

I had this "shower" thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say the LLM answer misunderstood the nuance of my question just as much as the friend did before). Thank you for linking the article, I will share that with my friend to explain my strong reaction ("please never ever do that again")

[–] mushroommunk@lemmy.today 8 points 3 days ago

AI and someone who uses AI missed nuance? This is my surprised face. (- _ -⁠)

[–] fizzle@quokk.au 6 points 3 days ago

The most annoying part: the recipient's email client probably offered to summarise it with an LLM. My bot makes slop for your bot to interpret.

It's the most inefficient form of communication ever devised. Please decompress my prompt 1000x so the recipient can compress it back to my prompt.

I will say though, even a chatgpt email tells you a lot about the sender.

[–] jjpamsterdam@feddit.org 3 points 2 days ago

Thank you for this great answer! It's something I intuitively felt but couldn't put my finger on with the same surgical precision you just did.

[–] Yaky@slrpnk.net 3 points 2 days ago

The question I ask is: "How do you justify saving your own time at the expense of others' time?"

I haven't heard a good answer yet, just mumbling: "it can be set to be less verbose..."

[–] BedSharkPal@lemmy.ca 4 points 3 days ago

Damn. Nailed it.

[–] irelephant@anarchist.nexus 6 points 2 days ago

If I wanted to ask ChatGPT, I would have asked it myself.

[–] Yeller_king@reddthat.com 1 points 1 day ago

It might mean you've asked a trivial/routine question you could easily have answered yourself, in the same way someone might just have sent you a Google result before ChatGPT existed.

[–] ulterno@programming.dev 1 points 1 day ago

Where's Draconic_MEO when you need them?

[–] GreenBeanMachine@lemmy.world 1 points 1 day ago (1 children)

Read the AI output, check the sources to confirm it's true, reply in your own words.

That's the polite variant, but it still involves using an LLM, and the assumption that machine learning is AI (it's not, despite what the tech bros tell you). People using LLMs should be treated like people who pick their nose and eat their boogers at the dinner table. :p

[–] Jankatarch@lemmy.world 3 points 2 days ago (2 children)

Sending SOMEONE ELSE'S dick pic, at that.

[–] Etterra@discuss.online 1 points 2 days ago

Sending a shitty AI representation of a dick pic.

there's that, too...

[–] DupaCycki@lemmy.world 11 points 2 days ago (1 children)

I think I'd prefer an unsolicited dick pic.

[–] owenfromcanada@lemmy.ca 3 points 2 days ago (4 children)

I don't quite get the equivalence there. I'd say an LLM response is more on par with responding with a link to lmgtfy.com or something.

The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something you hadn't otherwise shown interest in. Or like that friend from high school who messages you out of the blue, and you realize after a few messages that they're trying to sell you their MLM garbage.

[–] Pyr_Pressure@lemmy.ca 2 points 2 days ago

Or just sending the link to chatgpt.

"Don't ask me, just ask chatgpt! What am I, your boss or something?!"

[–] CombatWombatEsq@lemmy.world 12 points 3 days ago (2 children)

To me, it is exactly the same as people who linked lmgtfy.com or responded RTFM. If you send me an LLM summary, I’m assuming you’re claiming that I’m the asshole for bothering you. If I am being lazy, I’ll take the hint. If I’m struggling to find a way to do the research myself, either because I’m not sure how to properly research it myself, or because LLMs have made the internet nigh-unusable, I’m gonna clock you as a tremendous asshole.

[–] raspberriesareyummy@lemmy.world 9 points 3 days ago (2 children)

I think there's an important nuance to lmgtfy or RTFM. Both were clearly identifiable as the kind of (sometimes snarky) minimum-effort response, and sometimes they were absolutely justified (e.g. if I googled OP's question myself and the very first result correctly answered it, which I made the effort of checking).

For the slop responses, however, the receiver sometimes has to invest considerable time into reading and processing the text just to recognize that it might be pure slop. And when in doubt, as readers we are left with the moral dilemma of potentially offending the writer by asking, "Did you just send me LLM output?"

It is both harder to identify and it drives a wedge into online (and personal) relationships, because it adds a layer of doubt or distrust. This slop shit is poison for internet friendships. Those tech bros all need to fuck off and use their money for a permanent coke trip until they become irrelevant. :/

[–] Kolanaki@pawb.social 7 points 3 days ago (4 children)

RTFM

This one really sucked post-2001 or so, when everything stopped coming with a fucking manual to read. What M am I supposed to R, guy?

[–] mech@feddit.org 1 points 1 day ago

Them: Read The Fucking Manual!

The Manual

```
• The unset builtin treats attempts to unset array subscripts @ and *
  differently depending on whether the array is indexed or associative,
  and differently than in previous versions.
• Arithmetic commands ( ((...)) ) and the expressions in an arithmetic
  for statement can be expanded more than once.
• Expressions used as arguments to arithmetic operators in the [[
  conditional command can be expanded more than once.
• The expressions in substring parameter brace expansion can be
  expanded more than once.
• The expressions in the $((...)) word expansion can be expanded more
  than once.
• Arithmetic expressions used as indexed array subscripts can be
  expanded more than once.
• test -v, when given an argument of A[@], where A is an existing
  associative array, will return true if the array has any set
  elements. Bash-5.2 will look for and report on a key named @.
• The ${parameter[:]=value} word expansion will return value, before
  any variable-specific transformations have been performed (e.g.,
  converting to lowercase). Bash-5.2 will return the final value
  assigned to the variable.
• Parsing command substitutions will behave as if extended globbing
  (see the description of the shopt builtin above) is enabled, so that
  parsing a command substitution containing an extglob pattern (say, as
  part of a shell function) will not fail. This assumes the intent is
  to enable extglob before the command is executed and word expansions
  are performed. It will fail at word expansion time if extglob hasn't
  been enabled by the time the command is executed.
```
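
For what it's worth, the quirks in that excerpt are observable. A minimal sketch of the arithmetic-expansion behavior it keeps mentioning, assuming a recent bash; the variable names are made up for illustration:

```bash
#!/usr/bin/env bash
# Inside an arithmetic context, a variable's *contents* are themselves
# evaluated as an expression, which is why the manual warns that
# expressions can end up expanded more than once.
expr='2 + 3'
(( result = expr * 2 ))  # expr expands to "2 + 3", which evaluates to 5
echo "$result"           # prints 10
```

That same mechanism is why untrusted input should never reach an arithmetic context: anything embedded in the variable's value gets evaluated too.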
[–] morto@piefed.social 10 points 3 days ago (1 children)

Somehow, people don't get that if we ask them something, it's because we want their personal interpretation of it; otherwise we would just use the internet ourselves.

[–] Etterra@discuss.online 1 points 2 days ago

Reply: tell ChatGPT I said thanks.

[–] letraset@feddit.dk 7 points 3 days ago (2 children)

Receiving LLM output as an answer to a question is the equivalent of getting a voice reply to the question:

"Quick question, are you free on Saturday afternoon?"

Downloading audio message... Duration: 45 seconds

[–] jjpamsterdam@feddit.org 3 points 2 days ago (2 children)

I absolutely cannot stand the kind of people who answer a brief and simple yes-or-no question with a wall of text or a two-minute voice note. If it's that complicated, because your pet chihuahua just had a stroke and you then fell head over heels in love with the veterinarian and you're currently at the airport to fly away on your spontaneous honeymoon, just say no and tell me the details in person.

[–] CallMeAnAI@lemmy.world 6 points 3 days ago (2 children)

I mean on one hand, it's a shower thought. On the other, this is a really dumb shower thought.

[–] DeathByBigSad@sh.itjust.works 4 points 3 days ago

At least a dick can be useful to create life... an LLM can never become life

[–] friend_of_satan@lemmy.world 4 points 3 days ago (1 children)

Pretty sure my boss did this to me today.

[–] Blaster_M@lemmy.world 4 points 3 days ago (1 children)

Well, it's common courtesy that if someone is asking you, assume they already asked Google or whatever and think you might have the answer they can't find.

That, and for some questions (i.e. nuanced ones), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English "[ so and so ] is [ whatever ], isn't it?" vs. "[ so and so ] is not [ whatever ], is it?" - because Turkish has "isn't it?" (değil mi? = not so?) but it doesn't have "is it?", mostly because "to be" works very differently in the language.

A Google result wouldn't help me at all - the pure grammar answer is "there's no form of 'is it' to be coupled with a negative assumption/assertion". But does a language construct exist to convey the nuance of "the speaker assumes that something is NOT [ so and so ] and wants to ask for confirmation" vs. "the speaker assumes that something IS [ so and so ] and asks for confirmation"?

I still don't know the answer, but it appears this nuance can't be expressed in Turkish without talking around it in a longer sentence.

[–] radicallife@lemmy.world 3 points 3 days ago

But I have my phone's texting set permanently to respond with AI so I never have to talk to anyone.

[–] sparkles@piefed.zip 1 points 2 days ago (1 children)

I get it, it’s obnoxious and annoying, bereft of deep thought or courtesy. Qualities the senders must possess. But I could go the rest of my life without seeing unsolicited genitals tbh.

...as could I go the rest of my life without seeing unsolicited LLM garbage in my messages :)
