this post was submitted on 01 Dec 2025
451 points (93.3% liked)

Showerthoughts

38368 readers
686 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts:

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.

founded 2 years ago

Looks so real!

(page 2) 50 comments
[–] Thorry@feddit.org 69 points 2 days ago* (last edited 2 days ago) (2 children)

Ah but have you tried burning a few trillion dollars in front of the painting? That might make a difference!

[–] BarneyPiccolo@lemmy.today 1 points 1 day ago

Like a GIF?

[–] MercuryGenisus@lemmy.world 10 points 1 day ago (3 children)

Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative"

The best way to differentiate computers from people is that we haven't taught AI to be an asshole all the time. Maybe it's a good thing they aren't like us.

[–] HowAbt2day@futurology.today 34 points 2 days ago

I had a poster in ‘86 that I wanted to come alive.

[–] qyron@sopuli.xyz 7 points 1 day ago (1 children)

It's achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry they may even start to taste or hear the colors.

Or boredom and starvation

[–] Kyrgizion@lemmy.world 22 points 2 days ago (6 children)

As long as we can't even define sapience in biological life, where it resides and how it works, it's pointless to try and apply those terms to AI. We don't know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

[–] finitebanjo@piefed.world 2 points 1 day ago

100 billion glial cells and DNA for instructions. When you get to replicating that lmk but it sure af ain't the algorithm made to guess the next word.

[–] Jhex@lemmy.world 13 points 2 days ago

The example I gave my wife was "expecting General AI from the current LLMs is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school"

[–] nednobbins@lemmy.zip 8 points 2 days ago (15 children)

I can define "LLM", "a painting", and "alive". Those definitions don't require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it's alive.

I'm not aware of any such definition of consciousness, nor am I aware of any universal test of consciousness. Without that definition, it's like Ebert claiming that "video games can never be art".

[–] khepri@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

Absolutely everything requires assumptions. Even our most objective, "laws of the universe" type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.

[–] nednobbins@lemmy.zip 1 points 1 day ago (1 children)

Defining "consciousness" requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it's essentially an undefined term.

With such a vague definition of what "consciousness" is, there's no logical way to argue that an AI does or does not have it.

[–] 2xar@lemmy.world 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

Your logic is critically flawed. By your logic you could argue that there is no "logical way to argue a human has consciousness", because we don't have a precise enough definition of consciousness. What you wrote is just "I'm 14 and this is deep" territory, not real logic.

In reality, you CAN very easily decide whether AI is conscious or not, even if the exact limit of what you would call "consciousness" can be debated. You wanna know why? Because if you have a basic understanding of how AI/LLMs work, then you know that in every possible, conceivable aspect relating to consciousness, it sits somewhere between your home PC and a plankton. Neither of which anybody would call conscious, by any definition. Therefore, no matter what vague definition you'd use, current AI/LLMs definitely do NOT have it. Not by a long shot. Maybe in a few decades it could get there. But current models are basically over-hyped thermostat control electronics.

[–] nednobbins@lemmy.zip 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

I'm not talking about a precise definition of consciousness, I'm talking about a consistent one. Without a definition, you can't argue that an AI, a human, a dog, or a squid has consciousness. You can proclaim it, but you can't back it up.

The problem is that I have more than a basic understanding of how an LLM works. I've written NNs from scratch and I know that we model perceptrons after neurons.

Researchers know that there are differences between the two. We can generally eliminate any of those differences (and many researchers do exactly that). No researcher, scientist, or philosopher can tell you what critical property neurons may have that enables consciousness. Nobody actually knows, and people who claim to know are just making stuff up.
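For anyone who hasn't seen one, the perceptron mentioned above really is just a weighted sum and a threshold, loosely modeled after a neuron firing once its inputs cross a level. A minimal from-scratch sketch in Python; the AND-gate training data, learning rate, and epoch count are illustrative choices, not anyone's actual code:

```python
def predict(weights, bias, inputs):
    # A perceptron "fires" (outputs 1) when the weighted sum of its
    # inputs plus a bias crosses zero, like a crude threshold neuron.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Classic perceptron learning rule: nudge the weights toward the
    # target whenever the prediction is wrong.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a classic linearly separable toy problem.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

That's the whole mechanism; the open question in the thread is whether stacking billions of these captures whatever neurons do that matters for consciousness.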

[–] finitebanjo@piefed.world 24 points 2 days ago (19 children)

And not even a good painting but an inconsistent one, whose eyes follow you around the room and that occasionally tries to harm you.

[–] chicken@lemmy.dbzer0.com 18 points 2 days ago

That kind of painting seems more likely to come alive

[–] forrgott@lemmy.zip 7 points 2 days ago (2 children)

New fear unlocked!

... What the hell, man?!

ಥ_ಥ

[–] ji59@hilariouschaos.com 11 points 2 days ago (9 children)

Except ... being alive is well defined. But consciousness is not. And we do not even know where it comes from.

[–] rockerface@lemmy.cafe 13 points 2 days ago (3 children)

Viruses and prions: "Allow us to introduce ourselves"

[–] peopleproblems@lemmy.world 5 points 2 days ago (8 children)

Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it's at least observable in some large invertebrates.

I'm vastly oversimplifying and I'm not an expert, but essentially all consciousness is, is an automatic processing state of all present stimulation in a creature's environment that allows it to react to new information in a probably-survivable way, and to react to it in the future with minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can't scare away an insect.

It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness - in the form of maintaining sensory awareness and at least in humans, thought awareness. Below that threshold both short term and long term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.

[–] CrabAndBroom@lemmy.ml 5 points 2 days ago (1 children)

I heard someone describe LLMs as "a magic 8-ball with an algorithm to nudge it in the right direction." I dunno how accurate that is, but it definitely feels like that sometimes.

[–] khepri@lemmy.world 3 points 1 day ago

I like that, but I'd put it the other way around, I think: it's closer to an algorithm that, at each juncture, uses a magic 8-ball to determine which of the top-n most likely paths it should follow at that moment.
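That picture maps fairly directly onto weighted top-n sampling over next-word scores. A toy sketch in Python; the vocabulary and scores here are made up for illustration, and real models draw from learned probability distributions rather than a hand-written dictionary:

```python
import random

def top_n_choice(scores, n, rng):
    # Keep only the n highest-scoring candidates (the algorithm part),
    # then let weighted chance (the "magic 8-ball") pick among them.
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
    words = [w for w, _ in top]
    weights = [s for _, s in top]
    return rng.choices(words, weights=weights, k=1)[0]

# Made-up scores for the next word after "the painting looks ..."
scores = {"alive": 0.5, "real": 0.3, "sentient": 0.15, "purple": 0.05}
rng = random.Random(0)
picks = {top_n_choice(scores, 2, rng) for _ in range(50)}
print(sorted(picks))  # with n=2, only "alive" or "real" can ever appear
```

With n=1 the 8-ball is vestigial and the output is deterministic; larger n trades predictability for variety, which is roughly the knob real samplers expose.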

[–] rirus@feddit.org -3 points 1 day ago

No, not at all.

[–] Tracaine@lemmy.world 8 points 2 days ago (1 children)

I don't expect it. I'm going to talk to the AI and nothing else until my psychosis hallucinates it.

[–] Luisp@lemmy.dbzer0.com 6 points 2 days ago

The Eliza effect

[–] j4k3@lemmy.world 7 points 2 days ago* (last edited 11 hours ago)

The first life did not possess a sentient consciousness. Yet here you are reading this now. No one even tried to direct that. Quite the opposite, everything has been trying to kill you from the very start.

[–] Jankatarch@lemmy.world 4 points 2 days ago

Nah trust me we just need a better, more realistic looking ink. $500 billion to ink development oughta do it.

[–] Lembot_0005@lemy.lol 7 points 2 days ago

Good showering!
