this post was submitted on 01 Dec 2025
451 points (93.3% liked)

Showerthoughts

38368 readers
671 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.


Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago

Looks so real!

[–] LuigiMaoFrance@lemmy.ml 10 points 19 hours ago* (last edited 19 hours ago) (1 children)

We don't know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit.
We cannot know who or what possesses consciousness. We struggle to even define it.

[–] UnderpantsWeevil@lemmy.world 6 points 16 hours ago (1 children)

digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution

No they don't. Digital networks don't act in any way like an electro-chemical meat wad programmed by DNA.

Might as well call a helicopter a hummingbird and insist they could both lay eggs.

We cannot know who or what possesses consciousness.

That's sophism. You're functionally asserting that we can't tell the difference between someone who is alive and someone who is dead.

[–] yermaw@sh.itjust.works 2 points 10 hours ago (2 children)

I don't think we can currently prove that anyone other than ourselves is even conscious. As far as I know I'm the only one. The people around me look and act and appear conscious, but I'll never know.

[–] UnderpantsWeevil@lemmy.world 1 points 53 minutes ago* (last edited 51 minutes ago) (1 children)

I don't think we can currently prove that anyone other than ourselves is even conscious.

You have to define consciousness before you can prove it. I might argue that our definition of consciousness is fuzzy. But not so fuzzy that "a human is conscious and a rock is not" is up for serious debate.

The people around me look and act and appear conscious, but I’ll never know.

You're describing Philosophical Zombies. And the broad answer to the question of "How do I know I'm not just talking to a zombie?" boils down to "You have to treat others as you would expect to be treated and give them the benefit of the doubt."

Mere ignorance is not evidence of a thing. And when you have an abundance of evidence to the contrary (these other individuals who behave and interact with me as I do, thus signaling all the indications of the consciousness I know I possess), defaulting to the negative assertion because you don't feel convinced isn't skeptical inquiry, it's cynical denialism.

The catch with AI is that we have ample evidence to refute the claims of consciousness. So a teletype machine that replicates human interactions can be refuted as "conscious" on the grounds that it's a big box full of wires and digital instructions which you know in advance was designed to create the illusion of humanity.

[–] yermaw@sh.itjust.works 1 points 44 minutes ago (1 children)

My point was more "if we can't even prove that each other are sentient, how can we possibly prove that a computer can't be?".

[–] UnderpantsWeevil@lemmy.world 1 points 37 minutes ago

If you can't find ample evidence of human sentience then you either aren't looking or are deliberately misreading the definition of the term.

If you can't find ample evidence that computers aren't sentient, same goes.

You can definitely put blinders on and set yourself up to be fooled, one way or another. But there's a huge difference between "unassailable proof" and "ample convincing data".

[–] gedhrel@lemmy.world 2 points 8 hours ago (1 children)

Really? I know. So either you're using that word wrong or your first principles are lacking.

[–] yermaw@sh.itjust.works 1 points 5 hours ago

Can you prove it to anyone?

[–] bss03@infosec.pub 3 points 16 hours ago

Clair Obscur: Expedition to meet the Dessandre Family

[–] HazardousBanjo@lemmy.world 2 points 15 hours ago

I think you'd have fewer dumb-ass average Joes cumming over AI if they could understand that regardless of whether or not the AI wave crashes and burns, the CEOs who've pushed for it won't feel the effects of the crash.

It reminds me of the public's reaction to the 1896 film The Arrival of a Train at La Ciotat Station. https://en.wikipedia.org/wiki/L%27Arriv%C3%A9e_d%27un_train_en_gare_de_La_Ciotat

[–] bampop@lemmy.world 8 points 1 day ago* (last edited 1 day ago) (2 children)

People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don't think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn't believe it experiences or feels anything as I do, even though it claims to do so. Especially if it's based on an LLM, since they are superficial imitations by design.

[–] yermaw@sh.itjust.works 4 points 10 hours ago (1 children)

Also even if it does experience and feel and has awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.

[–] bampop@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

Well, that's why we need clones with mind transfer, and to be unconscious during the process. When you wake up you won't know whether you're the original or the copy, so why worry?

[–] Aggravationstation@feddit.uk 1 points 5 hours ago

But then again, what's the point?

[–] UnderpantsWeevil@lemmy.world 2 points 16 hours ago* (last edited 16 hours ago) (1 children)

You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way

I just don't think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they've discovered.

You might be fooled for a little while, but eventually your curious monkey brain would start poking around the edges and exposing the flaws. At this point, it would not be a question of whether you can continue to be fooled, but whether you strategically ignore the flaws to preserve the illusion or tear the machine apart in disgust.

I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so

People have submitted to less. They've worshipped statues and paintings and trees and even big rocks, attributing consciousness to all of them.

But Animism is a real esoteric faith. You believe it despite the evidence in front of you, not because of it.

I'm putting my money down on a future where large groups of people believe AIs are more than just human: they're magical angels and demons.

[–] bampop@lemmy.world 1 points 8 hours ago* (last edited 7 hours ago) (1 children)

I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.

In its current stage, no. But it's come a long way in a short time, and I don't think we're so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can't judge consciousness from the outside looking in. We know it's a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it's saying that because that's the kind of thing a human would say.

Because all the development has been focused on fakery rather than understanding and replicating consciousness, we're close to the point where we can have a fake consciousness that would fool anyone. It's a worrying prospect, and not just because I won't become immortal by having a machine imitate my behaviour. There are bad actors working to exploit this situation. Elon Musk's attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but that's teething troubles, and in time this will turn into a very subtle and pervasive problem for humankind. The intrinsic fakeness of it is a concerning aspect. It's like we're getting a puppet show version of what AI could have been.

[–] UnderpantsWeevil@lemmy.world 1 points 14 minutes ago* (last edited 14 minutes ago)

I don’t think we’re so far from having machines that pass the Turing test 100%.

The Turing test isn't solved with technology, it's solved with participants who are easier to fool or more inclined to read computer output as human. In the end, it can boil down to social conventions far more than actual computing capacity.

Per the old Inglourious Basterds gag:

You can fail the Turing Test not because you're a computer but because you're a British computer.

Because all the development has been focused on fakery rather than understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone.

We've ingested a bunch of early 21st century digital markers for English language Western oriented human speech and replicated those patterns. But human behavior isn't limited to Americans shitposting on Reddit. Neither is American culture a static construct. As the spread between the median user and the median simulated user in the computer dataset diverges, the differences become more obvious.

Do we think the designers at OpenAI did a good enough job to keep catching up to the current zeitgeist?

[–] Lightfire228@pawb.social 2 points 18 hours ago (1 children)

I suspect Turing Complete machines (all computers) are not capable of producing consciousness

If they were, then theoretically a game of Magic: The Gathering could experience consciousness (as could any similar physical system that can emulate a Turing machine)

[–] nednobbins@lemmy.zip 2 points 17 hours ago

Most modern languages are theoretically Turing complete, but every real implementation has finite memory. Finite memory is also what keeps human brains from being Turing complete. I've read a little about theories beyond Turing computability, like quantum computers, but I'm not aware of anyone claiming that human brains are capable of that.

A game of Magic could theoretically do any task a Turing machine could do, but it would be really slow. Even if it could "think", it would likely take years to decide to do something as simple as farting.
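The claim above is easier to see with a concrete sketch. Here's a minimal Turing machine simulator (names like `run_turing_machine` are my own, purely illustrative): any substrate that can track a state, a tape, and a transition table, whether silicon or a carefully rigged Magic board, can emulate this loop, which is all "Turing complete" means.

```python
def run_turing_machine(rules, tape, state="A", halt="HALT", max_steps=10_000):
    """Run a Turing machine described by `rules`.

    rules maps (state, symbol) -> (write_symbol, move, next_state),
    where move is -1 (left) or +1 (right). The tape is a dict from
    position to symbol; unwritten cells read as 0.
    """
    tape = dict(tape)
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, 0)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return tape

# The classic 2-state "busy beaver" machine: starting from a blank
# tape, it halts after 6 steps leaving four 1s behind.
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}
final_tape = run_turing_machine(rules, {})
print(sum(final_tape.values()))  # -> 4
```

The point about speed also falls out of this picture: the loop body is trivial, so the substrate's cost per step dominates. A CPU does billions of these steps per second; resolving one step via card-game mechanics might take minutes, which is why the Magic "computer" would take years to compute anything.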

[–] biotin7@sopuli.xyz 29 points 1 day ago (1 children)

Thank you for calling it an LLM.

[–] finitebanjo@piefed.world 1 points 15 hours ago

Although, if a person who knows the context still acts confused when people complain about AI, it's about as honest as somebody trying to solve for circumference with an apple pie.

[–] thethunderwolf@lemmy.dbzer0.com 22 points 1 day ago* (last edited 1 day ago) (1 children)

Painting?

"LLMs are a blurry JPEG of the web" - unknown (I've heard it as an unattributed quote)

I think it originated in Ted Chiang's New Yorker piece "ChatGPT Is a Blurry JPEG of the Web" a couple years ago.
