They only believe a chatbot is a person having a conscious experience because they have never explored any curiosity about what it is to be a person having a conscious experience.
It basically just repeats this a handful of times as if it's useful.
I don't really think this is a very productive approach to the issue of AI 'consciousness.' Anthropic has demonstrated that several LLMs have a rudimentary ability to reflect on their internal state during inference. They are an undeniably interesting, literate technology that we don't fully understand, being developed at an increasingly rapid rate.
It's not that I think LLMs are conscious, but I do see why a person might come to that conclusion. Calling them crazy, dumb, or unimaginative is kind of insulting. They are interacting with an alien sort of intelligence engineered to keep their attention.
It's especially annoying when a lot of critics in the AI space are so smug about it. Many of those critics dislike LLMs for legitimate reasons regarding their effect on employment, the environment, AI slop, art, etc. But these valid issues are biases unrelated to AI 'consciousness.' If a lay person comes in with an unbiased (not good, just unbiased) perspective, they just see a very difficult-to-understand, literate computer program that seems to have destroyed the Turing test. And they get insulted by people for making a naive, but somewhat reasonable, assumption that it is conscious.
The problem with the entire conversation is that no one knows what consciousness really is or how it arises in humans.
If we had a slightly dumb consciousness produce text on prompt, how would it look different from what we have now?
What the current generation of LLMs lacks, in my opinion, is metacognition (knowing what they don't know), internal motivation, continuity of experience, and agency.
Some of those will be difficult to solve, but I don't think it's impossible that this technology could yield truly thinking machines.
A lot of ten-year-olds wouldn't pass if they were subjected to a blind Turing test against a modern LLM. Heck, even three-year-olds are conscious, and they obviously cannot pass the Turing test as it is defined.