Why can't complex algorithms be conscious? In fact, AI models can be directed to reason about themselves, their context can be made persistent, and we can measure activation patterns showing that they are doing so.
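To make "measuring activations" concrete: here's a toy sketch of my own (not something from any real interpretability toolkit), where a tiny hand-rolled two-layer network records its hidden-layer state as it runs, so that internal state can be inspected afterward. Real work does the same thing with hooks on large models.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    # Multiply matrix W by vector v.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

class TinyNet:
    def __init__(self, W1, W2):
        self.W1, self.W2 = W1, W2
        self.activations = {}  # recorded internal states

    def forward(self, v):
        hidden = relu(matvec(self.W1, v))
        # The "measurement": store the hidden activations for inspection.
        self.activations["hidden"] = hidden
        return matvec(self.W2, hidden)

net = TinyNet(W1=[[1.0, -1.0], [0.5, 0.5]], W2=[[1.0, 1.0]])
out = net.forward([2.0, 1.0])
print(net.activations["hidden"])  # [1.0, 1.5]
print(out)                        # [2.5]
```

The point is just that nothing about a network's "inner life" is hidden in principle: its intermediate states are numbers you can log and analyze.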
I'm somewhat playing devil's advocate here, but the claim "Consciousness requires contemplation of self, which requires the ability to contemplate" is subjective, and nearly any AI model, even a rudimentary one, is capable of insisting that it contemplates itself.
I don't believe that consciousness strictly exists. More likely, the phenomenon emerges from something like the attention schema. AI exposes, I think, the uncomfortable fact that intelligence does not require a soul: we evolved it, like legs with which to walk, and just as robots can be made to walk, they can be made to think.
Are current LLMs as intelligent as a human? No LLM I've seen is, but give one 100 trillion parameters instead of 2 trillion and maybe.