Been liking Alex O'Connor's ChatGPT explanation videos and ChatGPT-related experiments.

Alex O'Connor makes content related to philosophy and religion, but I particularly enjoyed, in addition to this video, one where he gaslights ChatGPT using moral dilemmas.

In this video he explains why it is so hard to get ChatGPT to generate an image of a wine glass filled to the brim. Short answer: most images of wine you find show glasses that are either empty or partially full, because who fills their wine to the top?
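
To make that concrete, here's a toy sketch (the image counts are invented for illustration, and real image generators don't literally sample like this): if a model simply reproduced the distribution of fill levels it saw in training, a glass filled to the brim would almost never come out.

```python
import random

# Hypothetical counts of wine-glass images by fill level in a training set.
# "Full to the brim" is vanishingly rare, matching the video's point.
training_counts = {"empty": 400, "half full": 550, "full to the brim": 2}

levels = list(training_counts)
weights = list(training_counts.values())

# Sample 10,000 "generations" in proportion to the training data.
samples = random.choices(levels, weights=weights, k=10_000)
print(samples.count("full to the brim") / len(samples))  # roughly 0.002
```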

[–] billwashere@lemmy.world 3 points 1 week ago (2 children)

And this is a prime example of why these trained models will never be AGIs. It only knows what it's been trained on and can't make inferences or extrapolations. It's not really generating an image so much as very quickly photoshopping and merging images it already knows about.

[–] kakihara123@lemmy.world 2 points 2 days ago (1 children)

How sure are you that human brains don't work in a similar way? Can you create something you have never seen before without combining known images you already have in your head?

[–] billwashere@lemmy.world 1 points 2 days ago

Well, from a cognitive standpoint you're absolutely correct: we aren't 100% sure how intelligence works or even how to properly define it. But I can absolutely think up things I've never seen before, and it's easy to see how that's possible. Look at any fantasy or science fiction art. That was all created without anyone ever having seen it, because it doesn't exist. In my opinion, current AI completely lacks imagination. It only knows what it's been trained on. And since people are pretty good at imagining things, we've created lots of art that doesn't exist in real life, and now the AI has been trained on this extensive art and is pretty good at faking imagination.

I am by no means a cognitive specialist, so I could be completely wrong.

[–] HoneyMustardGas@lemmy.world 1 points 1 week ago

It's just patterns of pixels. It recognizes an apple as just a bunch of reddish pixels, etc.; then, when given an image of a similarly colored red ball or something, it is corrected until it ceases to recognize something that isn't an apple as an apple. It really does not know what an apple looks like to begin with. It's like declaring a variable: the computer does not know what the variable really means, just what to equate it to.
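
As a toy illustration of that "corrected until it stops misclassifying" loop (the features and examples below are made up, and real vision models are nothing like this in scale): a perceptron whose "apple" label is just the number +1, with weights that get nudged on every mistake.

```python
# A minimal sketch of learning by correction. The label "apple" is just +1,
# and the learned weights are just numbers with no meaning attached to them.

def train(examples, epochs=100, lr=0.1):
    w = [0.0, 0.0, 0.0]  # weights start out meaningless
    b = 0.0
    for _ in range(epochs):
        for features, is_apple in examples:
            target = 1 if is_apple else -1
            score = sum(wi * xi for wi, xi in zip(w, features)) + b
            prediction = 1 if score >= 0 else -1
            if prediction != target:  # a mistake: the model gets "corrected"
                w = [wi + lr * target * xi for wi, xi in zip(w, features)]
                b += lr * target
    return w, b

# hypothetical features: [redness, roundness, has_stem]
examples = [
    ([0.9, 0.8, 1.0], True),   # red apple
    ([0.3, 0.7, 1.0], True),   # green apple
    ([0.9, 0.9, 0.0], False),  # red ball: similar "reddish pixels", no stem
    ([0.8, 1.0, 0.0], False),  # another red ball
]

w, b = train(examples)
print(w, b)  # "apple-ness" lives nowhere but in these numbers
```

After training, asking whether something "is an apple" is just checking the sign of a weighted sum, which is about as close to "declaring a variable" as learning gets.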