this post was submitted on 01 Dec 2025
456 points (93.0% liked)

Showerthoughts

39252 readers
971 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts:

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago
MODERATORS

Looks so real!

top 50 comments
[–] Thorry@feddit.org 70 points 1 month ago* (last edited 1 month ago) (2 children)

Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

load more comments (2 replies)
[–] HowAbt2day@futurology.today 35 points 1 month ago

I had a poster in ‘86 that I wanted to come alive.

[–] biotin7@sopuli.xyz 29 points 1 month ago (1 children)

Thank you for calling it an LLM.

load more comments (1 reply)
[–] Kyrgizion@lemmy.world 22 points 1 month ago (3 children)

As long as we can't even define sapience in biological life, where it resides and how it works, it's pointless to try and apply those terms to AI. We don't know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

[–] daniskarma@lemmy.dbzer0.com 3 points 1 month ago (1 children)

We don't know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.

I don't think LLMs will create intelligence, but I don't think we need to solve everything about human intelligence before having machine intelligence.

[–] Perspectivist@feddit.uk 7 points 1 month ago (2 children)

Though in the case of consciousness - the fact of there being something it's like to be - not only don't we know what causes it or how it works, but we have no way of measuring it either. There's zero evidence for it in the entire universe outside of our own subjective experience of it.

load more comments (2 replies)
load more comments (2 replies)
[–] thethunderwolf@lemmy.dbzer0.com 22 points 1 month ago* (last edited 1 month ago) (1 children)

Painting?

"LLMs are a blurry JPEG of the web" - unknown (I've heard it as an unattributed quote)

I think it originated in this piece by Ted Chiang a couple years ago.

[–] Jhex@lemmy.world 13 points 1 month ago

The example I gave my wife was "expecting general AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school"

[–] LuigiMaoFrance@lemmy.ml 12 points 1 month ago* (last edited 1 month ago) (1 children)

We don't know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit.
We cannot know who or what possesses consciousness. We struggle to even define it.

[–] UnderpantsWeevil@lemmy.world 5 points 1 month ago (1 children)

digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution

No they don't. Digital networks don't act in any way like an electro-chemical meat wad programmed by DNA.

Might as well call a helicopter a hummingbird and insist they could both lay eggs.

We cannot know who or what possesses consciousness.

That's sophism. You're functionally asserting that we can't tell the difference between someone who is alive and someone who is dead.

[–] yermaw@sh.itjust.works 4 points 1 month ago (6 children)

I don't think we can currently prove that anyone other than ourselves is even conscious. As far as I know I'm the only one. The people around me look and act and appear conscious, but I'll never know.

load more comments (6 replies)
[–] ji59@hilariouschaos.com 11 points 1 month ago (9 children)

Except ... being alive is well defined. But consciousness is not. And we do not even know where it comes from.

[–] rockerface@lemmy.cafe 13 points 1 month ago (3 children)

Viruses and prions: "Allow us to introduce ourselves"

load more comments (3 replies)
[–] peopleproblems@lemmy.world 5 points 1 month ago (8 children)

Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it's at least observable in some large invertebrates.

I'm vastly oversimplifying and I'm not an expert, but essentially all consciousness is, is an automatic processing state of all present stimulation in a creature's environment that allows the creature to react to new information in a probably survivable way, and to react similarly in the future despite minor changes in the environment. That's why you can scare an animal away from food while a threat is present, but you can't scare away an insect.

It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness - in the form of maintaining sensory awareness and at least in humans, thought awareness. Below that threshold both short term and long term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.

load more comments (8 replies)
load more comments (7 replies)
[–] MercuryGenisus@lemmy.world 10 points 1 month ago (2 children)

Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative"

The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time. Maybe it's a good thing they aren't like us.

[–] Sconrad122@lemmy.world 14 points 1 month ago (1 children)

Alternative way to phrase it, we don't train humans to be ego-satiating brown nosers, we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, "they" is anthropomorphizing LLM AI much more than it deserves, it's not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data

[–] Aeri@lemmy.world 4 points 1 month ago

Sometimes. I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.

[–] Kolanaki@pawb.social 8 points 1 month ago* (last edited 1 month ago)

The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time

Elon is trying really hard with Grok, tho.

[–] Tracaine@lemmy.world 8 points 1 month ago (1 children)

I don't expect it. I'm going to talk to the AI and nothing else until my psychosis hallucinates it.

load more comments (1 reply)
[–] nednobbins@lemmy.zip 8 points 1 month ago (15 children)

I can define "LLM", "a painting", and "alive". Those definitions don't require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it's alive.

I'm not aware of any such definition of conscious, nor am I aware of any universal tests of consciousness. Without that definition, it's like Ebert claiming that, "Video games can never be art".

[–] khepri@lemmy.world 4 points 1 month ago* (last edited 1 month ago) (4 children)

Absolutely everything requires assumptions. Even our most objective, "laws of the universe" type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

load more comments (4 replies)
load more comments (14 replies)
[–] bampop@lemmy.world 8 points 1 month ago* (last edited 1 month ago) (4 children)

People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don't think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn't believe it experiences or feels anything as I do, even though it claims to do so. Especially if it's based on an LLM, since they are superficial imitations by design.

[–] yermaw@sh.itjust.works 4 points 1 month ago (3 children)

Also even if it does experience and feel and has awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.

load more comments (3 replies)
load more comments (3 replies)
[–] j4k3@lemmy.world 7 points 1 month ago* (last edited 1 month ago)

The first life did not possess a sentient consciousness. Yet here you are reading this now. No one even tried to direct that. Quite the opposite, everything has been trying to kill you from the very start.

[–] qyron@sopuli.xyz 7 points 1 month ago (1 children)

It's achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry they may even start to taste or hear the colors.

[–] HeyThisIsntTheYMCA@lemmy.world 4 points 1 month ago

Or boredom and starvation

[–] Lembot_0005@lemy.lol 7 points 1 month ago

Good showering!

[–] Luisp@lemmy.dbzer0.com 6 points 1 month ago

The Eliza effect

[–] CrabAndBroom@lemmy.ml 5 points 1 month ago (1 children)

I heard someone describe LLMs as "a magic 8-ball with an algorithm to nudge it in the right direction." I dunno how accurate that is, but it definitely feels like that sometimes.

[–] khepri@lemmy.world 3 points 1 month ago

I like that, but I'd put it the other way around, I think: it's closer to an algorithm that, at each juncture, uses a magic 8-ball to determine which of the top-n most likely paths it should follow at that moment.
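
The "top-n" idea that comment gestures at is roughly top-k sampling in LLM decoding: score every candidate token, keep only the k most likely, then roll the dice among the survivors. A minimal sketch, with made-up tokens and scores purely for illustration (real models do this over tens of thousands of tokens):

```python
import math
import random

def top_k_sample(logits, k, rng=random.Random(0)):
    """Pick one token: softmax the logits, keep only the k most likely
    tokens, renormalize, then sample among them (the 'magic 8-ball')."""
    # Softmax with max-subtraction for numerical stability.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Keep the k highest-probability candidates and renormalize.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    mass = sum(p for _, p in top)
    tokens = [tok for tok, _ in top]
    weights = [p / mass for _, p in top]
    # Weighted random choice among the survivors.
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token scores after a prefix like "The cat sat on the".
logits = {"mat": 4.0, "sofa": 3.2, "roof": 2.9, "moon": 0.1, "law": -1.0}
print(top_k_sample(logits, k=3))  # one of "mat", "sofa", "roof"
```

With k=1 this collapses to greedy decoding (always "mat" here); larger k lets the 8-ball wander further down the list.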

[–] Jankatarch@lemmy.world 4 points 1 month ago

Nah trust me we just need a better, more realistic looking ink. $500 billion to ink development oughta do it.
