this post was submitted on 17 Jan 2026
265 points (96.2% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion, or guerrilla marketing.
  4. Posters are encouraged to link to the toot, tweet, etc., in the description of their posts.

[–] callyral@pawb.social 1 points 1 day ago* (last edited 1 day ago)

How would that work? What if I gain access to the AI and use it to predict my own choices? Would the AI be able to predict that I'm using it, and still reach a conclusion even though its conclusion would change my behavior?

Let's say the AI predicts that I'll do thing A; I see that prediction and choose to do thing B instead, so the AI is wrong.

But if the AI had predicted thing B, I, the smartass, would've chosen thing A, the opposite, so the AI is wrong either way.
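
Here's a rough sketch of that back-and-forth in code (made-up names, purely illustrative): whatever the predictor announces, the contrarian me does the other thing, so no announced prediction can ever be right.

```python
# Toy illustration, not any real system: the "smartass" strategy
# defeats any prediction that is shown to me before I act.

def contrarian_me(announced_prediction: str) -> str:
    """Do the opposite of whatever the AI announces."""
    return "B" if announced_prediction == "A" else "A"

def super_intelligent_ai() -> str:
    """Stand-in for the predictor; any rule at all could go here."""
    return "A"

prediction = super_intelligent_ai()
actual = contrarian_me(prediction)
print(f"predicted: {prediction}, actual: {actual}")
assert prediction != actual  # holds no matter what the predictor announces
```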

How intelligent would it need to be to realize that my behavior depends on its output, and that it could control me with its predictions? Maybe the AI predicts that I'll use it, so it deliberately shifts its predictions in a way that makes me act in its favor somehow...
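
And the flip side of that, again with hypothetical names: if the AI knows how I react to each possible announcement, it doesn't need to be "right" at all; it can just pick whichever announcement steers me toward the action it wants.

```python
# Toy illustration: a predictor that uses my known reaction to steer me.

def my_reaction(announced_prediction: str) -> str:
    """What I actually do after hearing the prediction (contrarian, as above)."""
    return "B" if announced_prediction == "A" else "A"

def steer_me_toward(desired_action: str) -> str:
    """Pick the announcement whose induced reaction matches what the AI wants."""
    for announcement in ("A", "B"):
        if my_reaction(announcement) == desired_action:
            return announcement
    raise ValueError("no announcement steers me there")

announcement = steer_me_toward("B")
print(f"AI announces {announcement!r}, and I end up doing {my_reaction(announcement)!r}")
```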

Is there a name for this kind of paradox? Can a machine predict itself?

This is the issue I have with machines that predict the universe: if the machine itself influences the universe (even in a relatively small way), it has to include a copy of itself in its simulation, and the simulated machine would also have to predict itself, and so on... which seems to require infinite computing power. So by extension, if the super-intelligence wants to predict my actions while I have access to the machine, then the machine would need to predict itself.
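
A toy version of that regress (hypothetical, nothing real being simulated): a predictor whose world contains the predictor itself has to simulate its own simulation, and the recursion never bottoms out.

```python
def predict_world(depth: int = 0) -> str:
    # The world includes the machine, so predicting the world means
    # predicting the machine predicting the world, and so on forever.
    return predict_world(depth + 1)

try:
    predict_world()
except RecursionError:
    print("gave up: a finite machine can't finish simulating itself")
```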