A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so its not a lot of work, but having a few more mods would help reports get addressed a little sooner.
Whats it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.
founded 2 years ago
MODERATORS
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it's at least observable in some large invertebrates.
I'm vastly oversimplifying and I'm not an expert, but essentially all consciousness is is an automatic processing state over all the present stimuli in a creature's environment: one that lets it react to new information in a probably-survivable way, and lets it react again in the future despite minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can't scare away an insect.
It appears that the frequency of brain activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimuli are retained, forming what we would call consciousness: maintained sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
Okay, so by my understanding of what you've said, LLMs could be considered conscious, since studies have pointed to their resilience to changes and their attempts to preserve themselves?
IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are "just" language, they don't have sensory experiences, they don't process the world, especially not continuously.
Do they want to preserve themselves? Or do they regurgitate sci-fi novels about "real" AIs not wanting to be shut down?
I've seen several papers about LLM safety (for example "Alignment faking in large language models") that show some "hidden" self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is just trained-in and means nothing, or whether it emerged from the model's complexity.
Also, I don't use the ChatGPT app, but doesn't it have a live chat feature where it continuously listens to the user and reacts? It can even take pictures. So continuity isn't a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn't be that complicated.
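For what it's worth, the tool part really is mostly plumbing. Here's a minimal sketch using the OpenAI Python client's function-calling interface; the `move_hand` tool and any hardware behind it are entirely hypothetical:

```python
# Sketch of LLM tool use via the OpenAI Python client.
# The "move_hand" tool and the robot behind it are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "move_hand",
        "description": "Move the robot hand to an (x, y, z) position in metres.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "number"},
                "y": {"type": "number"},
                "z": {"type": "number"},
            },
            "required": ["x", "y", "z"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Wave at the camera."}],
    tools=tools,
)

# The model never touches hardware itself: it only emits a structured
# request, and ordinary robotics code on our side has to carry it out.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print("would move hand to", args)  # stand-in for real motor control
```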
I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to (in the same way I'm continuously receiving input from my "camera" and "microphones" as long as I'm awake).
Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate and generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM's training.
The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
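Something like this crude loop, say. It's just a sketch assuming the OpenAI Python client; `capture_frame` and `record_audio` are hypothetical stand-ins for real camera and microphone capture:

```python
# Crude sketch of a continuous "perception" loop around a chat model.
import time
from openai import OpenAI

client = OpenAI()
history = []  # running transcript the model re-reads every tick

def capture_frame() -> str:
    # Hypothetical: grab a camera image and return a text description.
    return "camera: nothing new"

def record_audio() -> str:
    # Hypothetical: record a second of audio and return a transcript.
    return "mic: silence"

while True:
    # Bundle this second's "senses" into one observation and append it.
    observation = f"{capture_frame()} | {record_audio()}"
    history.append({"role": "user", "content": observation})

    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    history.append({"role": "assistant",
                    "content": reply.choices[0].message.content})

    time.sleep(1)  # "once a second or whatever"
```

Note that `history` grows without bound here, which runs straight into the context problem discussed below.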
The second one seems more challenging; as I understand it, training an LLM is very resource intensive. Right now, when it "remembers" a conversation, it's only because we prime it by feeding in every previous interaction ahead of the most recent query when we hit submit.
As I said in another comment, doesn't the ChatGPT app allow a live conversation with a user? I don't use it, but I've seen that it can continuously listen to the user and react live, even use a camera. There is a problem with the growing context, since it's limited, but I've seen in some places that the context can be replaced with an LLM-generated chat summary. So I don't think continuity is an obstacle, unless you want unlimited history with all details preserved.
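That summary trick is easy to sketch, again assuming the OpenAI Python client; the thresholds are arbitrary, and details in the summarized turns really are lost:

```python
# Sketch of compacting a chat history into an LLM-generated summary.
from openai import OpenAI

client = OpenAI()

def compact(history: list[dict]) -> list[dict]:
    if len(history) <= 50:          # arbitrary threshold
        return history
    old, recent = history[:-10], history[-10:]
    summary = client.chat.completions.create(
        model="gpt-4o",
        messages=old + [{"role": "user",
                         "content": "Summarize the conversation so far "
                                    "in a few sentences."}],
    ).choices[0].message.content
    # Everything in `old` is now gone except what the summary kept.
    return [{"role": "system",
             "content": f"Summary of the earlier chat: {summary}"}] + recent
```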
I'm just a person interested in / reading about the subject so I could be mistaken about details, but:
When we train an LLM we're trying to mimic the way neurons work. Training is the really resource intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.
When you and I have a "conversation" with ChatGPT, it's always with that base model; it's not actively learning from the conversation in the sense that new neural pathways are being created. What's actually happening is that a prompt like this is submitted: {{openai crafted preliminary prompt}} + "Abe: Hello I'm Abe".
Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + "Abe: Hello I'm Abe" + {{agent response}} + "Abe: Good to meet you computer friend!"
And so on. Each time, you're only talking to that base LLM, but feeding it the history of the conversation along with your new prompt.
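Mechanically it looks something like this toy sketch, where `call_model` stands in for one forward pass through the frozen base model and the real preliminary prompt is OpenAI's and not public:

```python
# Toy sketch: every turn re-feeds the whole transcript to a fixed model.
SYSTEM_PROMPT = "{{openai crafted preliminary prompt}}"  # placeholder
transcript: list[str] = []

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for one forward pass through the frozen model;
    # nothing inside the model's weights changes between turns.
    return "{{agent response}}"

def chat(user_message: str) -> str:
    transcript.append(f"Abe: {user_message}")
    # The model always sees the system prompt plus the FULL history so far.
    reply = call_model(SYSTEM_PROMPT + "\n" + "\n".join(transcript))
    transcript.append(reply)
    return reply

chat("Hello I'm Abe")
chat("Good to meet you computer friend!")
```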
You're right to point out that they've now got the agents self-creating summaries of the conversation to let them "remember" more. But if we're trying to argue for consciousness the way we think of it in animals (not even arguing for humans yet), then I think the ability to actively synthesize experiences into the self is a requirement.
A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.
Again, I'm not an expert, but I expect there's a way to incorporate this type of learning in near-real time; besides the technical work of figuring it out, though, doing so wouldn't be very cost effective compared to the way they're doing it now.
I would say that artificial neural nets try to mimic real neurons; they were inspired by them, but there are a lot of differences between the two. I studied artificial intelligence, so my experience is mainly with artificial neurons. From my limited knowledge, real neural nets have no imposed structure (like layers), have binary inputs and outputs (when activity on the inputs is large enough, the neuron fires), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is "saved" as cycling neural activity, and during sleep the information is stored in the neurons' proteins and becomes long-term memory.
Modern artificial networks (modern meaning the last 40 years), however, are usually organized into layers whose structure is fixed, and their inputs and outputs are real numbers. It's true that context is needed for modern LLMs, which mostly use a decoder-only architecture. But the context can itself be viewed as memory during generation, since for each new token new activations are added to the net. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and effective fine-tuning of neural networks. I think these are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to train my own LLM from an existing one that I wouldn't have been able to train otherwise due to GPU memory constraints.
TLDR: I think the difference between real and artificial neural nets is too huge for memory to have the same meaning in both.
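For anyone curious, here's roughly what that LoRA setup looks like with Hugging Face's `peft` library; the base model and hyperparameters are just illustrative:

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
# Only the small adapter matrices train; the base weights stay frozen,
# which is why this fits on a GPU that couldn't train the full model.
model.print_trainable_parameters()
```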