this post was submitted on 31 Dec 2023
279 points (100.0% liked)
I appreciate what you are saying, and I don't really disagree, but... as you have identified, these are technical challenges: how many extra checks? As many as are needed. Consider the absolutely absurd amount of computation involved in generating a single token - what's a little more?
My point was that this might be closer than LLM naysayers think: as the critical limitations of current models are resolved, as we discover sustainable strategies for context persistence and feedback, the emergence of new capabilities is inevitable. Are there limitations inherent to our current approach? Almost certainly, but we already know that the possible risks involved in overcoming them won't slow us down.
I'm not really talking about technical limitations, so I don't know that there is a disagreement here at all. The solution could be 5 years away, or 50, who knows.
I'm more pointing out that regardless of the exact techniques used, context is key to creating things that make sense, rather than things that are just shallow mimicry. I think that barrier cannot be breached without creating an actual intelligence, because we are fundamentally talking about meaning.
And I agree these ethical considerations won't slow people down. That's what I'm concerned about. People will be so focused on making better tools that they will be very keen to overlook the fact that they're creating personalities purely to enslave them.
Even in the case of ostensibly fundamental obstacles, the moment we can effortlessly brute force our way around them, they become effectively irrelevant. In other words, if consciousness does emerge from complexity, then it is perfectly sensible to view the shortcomings you mention as technical in nature.
I am only talking about those limitations inasmuch as they interact with philosophy and ethics.
I don't know what your point is. ML models can become conscious given enough complexity? Sure. That's the premise of what I'm saying. I'm talking about what that means.
Solid edit. If I found myself confused about the context of the discussion, I wouldn't try to resolve it with "I don't know what your point is".
Okay. If you want to clarify, you're free to. If you don't, then don't.