Last week's paradox poll described a well-established if not well-known philosophical puzzle called Newcomb's paradox. It's a self-referential paradox bearing on free will, and it supposedly divides people evenly: half think the smart decision is to pick both boxes, and half think the smart decision is to pick Box B alone. Three people commented on last week's post: one chose both boxes and two chose to open only Box B. I also asked two software developers at work, and they each decided to open both boxes. So far, then, I've seen a 3–2 split in favor of both boxes.
Each decision has a good argument in its favor. The argument to open both boxes goes something like this:
The Predictor has already made his prediction, so the contents of Box B are already fixed; your choice can't change them. Since Box A contains $1,000 no matter what, opening both boxes instead of Box B alone simply adds a guaranteed thousand dollars. Meanwhile, we can hope the Predictor predicted you would open only Box B, in which case you'll get the $1,000,000 as well, but your choice has no effect on that outcome.
The argument for opening only Box B, by contrast, goes something like this:
The Predictor is usually correct, so it's a mistake to give much, if any, weight to the outcomes where the Predictor is wrong. Thus, you probably won't end up with either $0 (you choose Box B alone but the Predictor predicts you would open both boxes) or $1,001,000 (you choose both boxes but the Predictor predicts Box B alone). Instead, you're really choosing between $1,000 (both boxes) and $1,000,000 (Box B only), so you should choose the bigger amount, which means choosing to open Box B only.
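The Box B argument is really an expected-value calculation. Here's a minimal sketch, using the payoffs from the setup above and assuming the 98% Predictor accuracy mentioned later in the post:

```python
# Expected payoffs in Newcomb's paradox, assuming the Predictor is
# right with probability p (the post cites a hypothetical 98%).

def expected_payoffs(p=0.98):
    # Box B only: $1,000,000 when the Predictor foresaw one-boxing
    # (probability p), otherwise $0.
    ev_one_box = p * 1_000_000 + (1 - p) * 0
    # Both boxes: the sure $1,000, plus $1,000,000 only when the
    # Predictor wrongly predicted one-boxing (probability 1 - p).
    ev_two_box = p * 1_000 + (1 - p) * 1_001_000
    return ev_one_box, ev_two_box

one, two = expected_payoffs()
print(f"Box B only: ${one:,.0f}")   # Box B only: $980,000
print(f"Both boxes: ${two:,.0f}")   # Both boxes: $21,000
```

At 98% accuracy the one-box choice wins by a wide margin; the two-boxer's rebuttal, of course, is that the probabilities above wrongly treat your choice as influencing a prediction that has already been made.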
Which argument seems right to you depends on how free you think your choice is. The both-boxes argument assumes a very free choice, whereby the Predictor is at best making a coin-toss guess because he can't foresee your choice. The Box B argument, by contrast, assumes that your choice is causally linked with the Predictor's prediction, and thus your choice is the effect of well-foreseen causes. As for how your choice could be causally linked in this way, the fashionable answer today is: deterministic brain chemistry. Then assume the Predictor possesses a superbly accurate brain scanner.
I'm a Box B guy. Though I think of myself as good at understanding other people's arguments, I can't understand how the both-boxes people discount the prior evidence of the Predictor being correct so many times. Such evidence precludes the possibility of my choice being free. But some people's belief in free will is so strong that even hypothetical counter-evidence doesn't change their minds. "The Predictor has been right 98% of the time? No big deal! He's just been lucky, that's all." All this reminds me of the saying "I'll believe it when I see it," which gets it backwards. For most of us, we see it when we believe it.
2 comments:
The game show Deal or No Deal used a similar thought model. The banker offered a set price to buy out the remaining money-containing cases. Contestants turned down $60,000 for a 20% shot at $200,000. There must be a sliding scale of consequence tolerance. I'm fine missing out on $1,000 for a 50/50 shot at a million, yet I'd keep the $60K over the 20% shot.
How about a post that explores that deeper?
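The two wagers in the comment above can be compared by expected value alone (a quick sketch using the commenter's numbers; it deliberately ignores risk attitude, which is the commenter's whole point):

```python
# Expected dollar value of a pct_chance% shot at a prize,
# using integer math to keep the figures exact.

def expected_value(pct_chance: int, prize: int) -> int:
    return prize * pct_chance // 100

# A 50/50 shot at $1,000,000 against a sure $1,000:
print(expected_value(50, 1_000_000))  # 500000 -- the gamble dominates
# A 20% shot at $200,000 against a sure $60,000:
print(expected_value(20, 200_000))    # 40000 -- the sure $60,000 wins
```

So by raw expected value the commenter's choices are consistent: the first gamble is worth far more than the sure thing, while the second is worth less.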
Anonymous— Interesting. I'll dedicate the next post to figuring out where your price is.