I think a huge factor in the hype is something I've seen multiple times in recent years: the assumption that improvement will be linear. I first saw it with Musk's hyperloop. People, even engineers who should have known better, tried to convince the public that scaling up a proof of concept was possible despite unsolved physical constraints. SpaceX is similar: on the strength of being able to land rockets, Musk somehow convinced a lot of people that we're going to land on Mars soon.
And AI is no different. It has already run into real limits in what it's capable of as a stateless processor. Prompt engineering and iteration can only gloss over so much, and things like "forget the previous versions of this code" are inherently impossible by design: the instruction can only be appended as another rule, while the old versions remain in the prompt blob. And that is very, very likely never going to change.
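To make the "appended as a rule" point concrete, here's a minimal sketch (all names hypothetical, no real API) of how a chat loop works: the model is stateless, so every turn re-serializes the entire history into one prompt, and a "forget that" instruction is just one more line on top of it.

```python
# Hypothetical sketch of a stateless chat loop: every call
# re-sends the FULL message history as one prompt blob.
messages = []

def send(role, content):
    messages.append({"role": role, "content": content})
    # A real API would serialize the whole history into the prompt;
    # we just return that serialized blob to inspect it.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

send("user", "Write a sort function.")
send("assistant", "def sort_v1(xs): ...")  # the "old version"
send("user", "Forget the previous version and start fresh.")
prompt_blob = send("assistant", "def sort_v2(xs): ...")

# The "forget" request is just another appended line; the old
# version is still physically present in what the model reads.
assert "sort_v1" in prompt_blob
```

Nothing in the protocol deletes `sort_v1`; the model can only be asked to ignore it, which is exactly the limit described above.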
It's like saying "well, we're getting better and better at jumping, so it's only a matter of time until we can fly."