this post was submitted on 15 Mar 2026
26 points (100.0% liked)

Asklemmy


Newcomb's problem is a thought experiment where you're presented with two boxes, and the option to take one or both. One box is transparent and always contains $1000. The second is a mystery box.

Before making the choice, a supercomputer (or team of psychologists, etc) predicted whether you would take one box or both. If it predicted you would take both, the mystery box is empty. If it predicted you'd take just the mystery box, then it contains $1,000,000. The predictor rarely makes mistakes.

This problem tends to split people 50-50 with each side thinking the answer is obvious.

An argument for two-boxing is that, once the prediction has been made, your choice no longer influences the outcome. The mystery box already has whatever it has, so there's no reason to leave the $1000 sitting there.

An argument for one-boxing is that, statistically, one-boxers tend to walk away with more money than two-boxers. It's unlikely that the computer guessed wrong, so rather than hoping that you can be the rare case where it did, you should assume that whatever you choose is what it predicted.
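The statistical argument can be checked with a quick simulation. This is a minimal sketch, not from the thread itself: the 90% accuracy figure is my assumption (the problem only says the predictor "rarely makes mistakes"), and the function names are mine.

```python
import random

def play_round(strategy, accuracy, rng):
    """One round: the predictor guesses the player's strategy with the
    given accuracy, the boxes are filled, then the player chooses."""
    correct = rng.random() < accuracy
    predicted_one = (strategy == "one") if correct else (strategy != "one")
    mystery = 1_000_000 if predicted_one else 0
    return mystery if strategy == "one" else mystery + 1_000

def average_payout(strategy, accuracy=0.9, trials=50_000, seed=0):
    """Average winnings over many rounds for a fixed strategy."""
    rng = random.Random(seed)
    return sum(play_round(strategy, accuracy, rng) for _ in range(trials)) / trials
```

With a 90%-accurate predictor, one-boxers average roughly $900,000 per round while two-boxers average roughly $101,000, which is the sense in which one-boxers "tend to walk away with more money."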

[–] davel@lemmy.ml 3 points 19 hours ago (1 children)

Mmmm, this sounds like an idealist hypothetical problem that in reality can’t exist, so to engage with it is to engage with nonsense.

The predictor rarely makes mistakes because… just because. It’s axiomatic. The predictor runs on the magic of unsupported assertion.

[–] Objection@lemmy.ml 2 points 18 hours ago (1 children)

Some version of it could exist. Not with the big numbers and not with the high degree of certainty in the problem, but you could have, say, somebody who's on average 70% accurate at reading people and the boxes are $1 and $10.

It is somewhat idealist in that it's a contrived scenario, but it's really just idle curiosity on my part. Maybe it could reflect something about people's thought processes, or maybe it's just people interpreting the question differently.
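The modest version of the problem in this comment is easy to work through directly. A quick expected-value sketch (the 70% figure and the $1/$10 stakes are from the comment above; the variable names are mine):

```python
p = 0.70            # predictor's accuracy at reading people
small, big = 1, 10  # transparent box, mystery box

# A one-boxer gets the mystery box only when the predictor was right.
ev_one_box = p * big

# A two-boxer always gets the small box, plus the mystery box
# only when the predictor was wrong about them.
ev_two_box = small + (1 - p) * big
```

Even at just 70% accuracy, one-boxing has the higher expected value here: $7 versus $4.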

[–] davel@lemmy.ml 3 points 18 hours ago (1 children)

Even if it were to exist in the short run, it wouldn’t be stable. The predictor must be predicting somehow, which eventually could be at least partially sussed out, and future decisions would change as a result. Unless the predictor runs on literal magic, it would eventually no longer fit its own definition.

[–] Arrkk@lemmy.world 1 point 6 hours ago

You can flip the problem around and have it be mathematically the same. The predictor has some knowable accuracy; you can run the experiment many times to determine what it is. Let's also replace the predictor with an Oracle that is guaranteed to be 100% correct, and manually impose some error by doing the opposite of its prediction with some probability. This is fully indistinguishable from the original predictor.

Now, instead of the predictor making a prediction, let's choose our box first, then decide what to put in the mystery box afterwards, with some probability of being "wrong" (not putting the money in for the one-boxer, or putting it in for the two-boxer). This is identical to having an Oracle: we know exactly which boxes will be taken, but there is some error in the system.

Now we ask: should you take one box or two? Obviously it depends on what the probability is. There's no more "fooling" the predictor. So you do the EV calculation and find that if the predictor is accurate more than about 50% of the time (with these stakes, anything above 50.05% accuracy), you should always take one box.
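That break-even accuracy can be solved for directly. A minimal sketch (the function and variable names are mine, not from the thread): one-boxing beats two-boxing when p·big > small + (1 − p)·big, which rearranges to p > (small + big) / (2·big).

```python
def breakeven_accuracy(small, big):
    """Predictor accuracy above which one-boxing has the higher EV.

    EV(one box)   = p * big
    EV(two boxes) = small + (1 - p) * big
    Setting them equal and solving for p gives (small + big) / (2 * big).
    """
    return (small + big) / (2 * big)

# Original Newcomb stakes: the threshold is barely above 50%.
newcomb = breakeven_accuracy(1000, 1_000_000)  # 0.5005

# The $1 / $10 variant from earlier in the thread needs 55%.
modest = breakeven_accuracy(1, 10)             # 0.55
```

So the "more than 50% accurate" rule of thumb is almost exact for the original stakes, because the $1000 is tiny next to the $1,000,000; with closer stakes like $1/$10 the threshold moves noticeably above one half.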