jack: (Default)
[personal profile] jack
Links

Newcomb's paradox on Wikipedia
Newcomb's paradox on Overcoming Bias blog
Newcomb's paradox on Scott Aaronson's blog and lectures

I first came across it via Overcoming Bias and discussed it with a few people, but recently saw it again in one of the transcriptions of Scott Aaronson's philosophy/quantum/computing lectures.

Newcomb's paradox

In very short, Newcomb's paradox goes like this. Suppose you're a professor, and a grad student (or, in some versions, a superintelligent alien) comes to you and demonstrates this experiment. She chooses a volunteer, examines them, then takes two boxes, puts £1000 in box A, and puts either £1000000 or nothing in box B (see below for how she decides). She brings the boxes into the room, explains the set-up to the volunteer, and says they're allowed either to take the mystery box B alone (in which case they get either a lot or nothing) or to take both boxes (in which case they get at least £1000).

She even lets them see the £1000000 beforehand so they know it exists, and lets them peek into box A to show it does have the money in, though box B remains a secret until afterwards.

Choice    In box A    In box B    Total obtained
B only    £1000       £0          £0
Both      £1000       £0          £1000
B only    £1000       £1000000    £1000000
Both      £1000       £1000000    £1001000


"What's the catch?" the volunteer asks. "Ah," begins the experimenter. "I have previously examined you, and worked out which choice you're going to make. If you were going to choose both boxes, I put nothing in box B. Only if you were going to take box B only did I choose to put £1000000 in it."

"Hm", says the volunteer. "What do I do?"

A few caveats

"What if the volunteer would change their mind when they discovered the reasoning, or is going to choose based on a coin toss?" "Then I didn't accept them as a volunteer."

"How do I know it works?" You can't be sure, but she performs the experiment lots of times and is always right, so you are convinced. (Some examples ask you to presume as part of the conditions that she can, or take it on trust, but I think "having seen it work" makes it most convincing and concrete.)

"Ah, but I don't care about £1000, and certainly not if I've already got £1000000, so the choice doesn't matter to me." Well, ask what you would do if the numbers were a bit different. Can you really claim there's no combination of amounts where you'd risk something to get the little one, yet risk more to get the big one?

"How did they know what they'd pick when this experiment was performed the first time?" It doesn't really matter, just assume that you have seen it working with pretty-perfect prediction.

What would you do? An enumeration of the two obvious arguments.

1: "Why should you take both boxes?" Duh! Because whatever's in the boxes, you get all of it. And if that means you fail to get the million, it was already too late to change that, wasn't it?

2: "Why should you take box B only?" Duh! Because you've just seen 50 people do the experiment, and all the ones who took both got £1000, and all the ones who took only B got £1000000. Follow what works, even if you can't justify it with maths.

That's why it's a paradox, because, if you squint long enough, both answers seem perfectly reasonable.

I know this seems a little convoluted, but I tried to make it comprehensible, if terse, even to people without much training in philosophy (like me, in general), and hopefully to get it to the point where at least asking the question makes sense.

Wait, if we use Bayesian reasoning, I bet the arguments will instantly become transparent and non-controversial. Right?

1: As above. Look at the table and enumerate the possibilities. Choosing both boxes always gives a bigger payoff.

2: Ah, no, you're cheating. Based on the previous evidence, you must assume a priori that you are on row 2 or row 3 of the table. After that, the choice is easy: row 3 gives more money. (See below for more "so, there's a 2/3 chance I'm in this universe..." type reasoning.)
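The two arguments can be made concrete with a little arithmetic. A minimal sketch, assuming the predictor is right with some probability p (p is my own parameter, not part of the original problem, where the prediction is near-perfect):

```python
# Expected payoffs when the predictor is right with probability p.
# "One-box" = take box B only; "two-box" = take both boxes.
A, B = 1_000, 1_000_000

def ev_one_box(p):
    # Predictor right (probability p) means box B was filled.
    return p * B

def ev_two_box(p):
    # Predictor right (probability p) means box B was left empty,
    # so you only get box A; if she was wrong, you get both sums.
    return A + (1 - p) * B

for p in (0.5, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```

Taken this way, one-boxing wins for any accuracy noticeably above a coin flip (the break-even is p = 0.5005), which is the second argument. The first argument denies that your choice can influence p-weighted contents at all, which is exactly where the two camps part ways.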

Can we put this on a more rational footing? How does she predict what's going to happen?

In fact, there are several ways.

1. You can do it if you postulate time travel, or determinism and a copy-teleport-machine, but those are not very realistic things to postulate, whether they would be physically possible or not.

2. A super-intelligent alien scans your brain and models it in a computer.

3. They give you a short-term-memory impairing drug, and try the experiment out several times beforehand while you remain the same person with the same experiences, but have no memory of the trials.

4. They discover that 94% of the time, all men choose one way, and all women choose the other. (But the experiment is double-blind, run by a technician who doesn't know the expected results, so the grad student peeks, then tells you that there's a 94% correlation, but not which way round it is, then invites you to participate yourself.)

Further arguments

3: Aha! In method 3, you don't know which run you are in: one of the trial runs, or the final experiment. The only consistent answer which gets the big money is to assume you're more likely to be a trial run, and hence choose box B only.

4: Aha! If so, then (*invokes Greg Egan*) the same reasoning applies to method 2. That suggests you don't know whether you're you, or the simulation of you!

5: Nope. Not so. What about method 4? Surely you can't claim that your consciousness might be either (a) you or (b) "the statistical correlation between gender and box choice"?

Which leaves us back where we started. (But remind me to come back to the "am I equally likely to be me, or some other human or simulation of a human".)

Free will

"What does it have to do with free will?" Well, the experiment is (sort of) practical to perform, in theory. And so you'd think it should also be actually possible to choose which boxes to take. And yet it doesn't seem to be, and the answer seems to depend on whether you believe in something you can call "free will".

In fact, people divide between "take both", "take B only", and "the problem is stupid, I won't consider it". In general, I think the last answer is too often overlooked. In this case, if I'd seen the experiment work out like that, I'd agree to take only box B, even if I couldn't explain the mathematics behind it. However, I also definitely feel I should be able to justify one case or the other.

Informally, it seems most people eventually take B, but I don't know how significant that is.

Apocrypha

Links to prisoner's dilemma, links to doomsday paradox, etc, etc.

Date: 2009-04-15 11:18 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
the super-intelligent alien should know and be prepared to give me the huge sum.

Yeah. That's interesting in itself. Overcoming Bias comments that if you can precommit to taking box B, that's convenient and non-paradoxical, and you're probably right to do so: the alien will examine you, discover you'll take box B, and put in the £1000000. In extreme cases you may even need to force yourself to do so, e.g. by posting some sort of £2000 bond that you forfeit if you don't, to protect against the possibility that you were going to take B but change your mind at the last minute.
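The bond works because it changes the payoff table itself rather than your reasoning. A quick sketch (the £2000 figure is the one from the comment above; the arithmetic follows from it):

```python
# With a forfeitable £2000 bond, taking both boxes is dominated
# whatever box B turns out to contain, so even a causal reasoner one-boxes.
A, BOND = 1_000, 2_000

def payoff(choice, b_contents):
    if choice == "B only":
        return b_contents
    # Taking both boxes gains box A but forfeits the bond.
    return b_contents + A - BOND

for b in (0, 1_000_000):
    assert payoff("B only", b) > payoff("both", b)
```

Since the bond exceeds the £1000 in box A, one-boxing strictly dominates in every row, which is why posting it makes the precommitment credible.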

Although they also talk about problems where it's specifically a surprise, and the best strategy in that situation. I can't find a link, but I think the extreme version was that an apparently trustworthy superintelligence comes up to you with no warning and says "I flipped a coin. It came up tails. Will you give me some money? I previously worked out how much you are going to give, and if the coin had come up heads, I would have given you 10 times as much." Would you give him any money?
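Before the coin is flipped, precommitting to pay does come out ahead in expectation. A sketch, assuming a fair coin and the 10x multiplier from the description (the stake x is a free parameter I've introduced):

```python
def precommit_ev(x):
    # Heads (probability 0.5): the predictor would have paid you 10x.
    # Tails (probability 0.5): you hand over x.
    return 0.5 * (10 * x) + 0.5 * (-x)

print(precommit_ev(100))
```

So a policy of paying has expected value 4.5x per encounter; the puzzle is that by the time you're asked, the coin has already come up tails, and paying can no longer cause anything good to happen.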

Date: 2009-04-15 11:23 pm (UTC)
From: [identity profile] leora.livejournal.com
Do I have any reason to believe this will ever repeat? Does the alien need money? I assume it does, since it needs to give money to people whenever its coin flip goes the other way.

Date: 2009-04-15 11:44 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
I can't quite remember how it's supposed to work, but I think:

* the alien turns up on earth out of the blue, and performs some other but non-paradoxical experiments, convincing everyone of its ability to predict what people will do in chosen experiments, and that it's always truthful.
* but you don't know about this particular one in advance
* you should treat the alien purely as a black box that can manufacture a small amount of money in some way (maybe the same way it became an interstellar superintelligence). Ignore potential effects of inflation (you could alternatively suppose it gave you something useful instead of money). It presumably is (as far as you know) just interested in the experiment, and not in the money.
* In the original, I think there were two fixed sums of money.

So, I think they tried to make the argument that (a) if you would take box B, the same sort of reasoning probably holds here: you should choose to be the person who might have got more money, even though you didn't; but (b) that's counterintuitive, since it doesn't seem it could possibly be useful to give money away just because an alien flew up and asked for it.

Date: 2009-04-15 11:46 pm (UTC)
From: [identity profile] leora.livejournal.com
Yes, that one is annoying. I'd probably give a small sum I could easily afford to part with. Which means I'd sometimes maybe have a chance of getting a modest sum that would be nice, but not life-changing. I can live with that.