(no subject)
Aug. 9th, 2007 02:24 pm
Ptc reminded us of the pirate loot-dividing puzzle.
(If you haven't heard it, five perfectly rational pirates want to divide some loot. The method is that the most senior pirate proposes a division, and all the pirates, the proposer included, vote on it. If a non-strict majority accept it, it stands; otherwise he's thrown overboard and the others repeat the process. The pirates are assumed to care about, in order: surviving, gold, killing other pirates, and nothing else.
Read Wikipedia for a fuller description, and a spoiler for the normal puzzle. If you haven't heard it, it's quite cute, so you probably want to read that rather than reading on here.)
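For anyone who has already seen the answer, here is a minimal backward-induction sketch of it, assuming 100 gold pieces and the tie-passes rule above, with the pirates labelled A (most senior) down to E as the write-ups usually do:

    def divide(n, gold):
        """Each pirate's share, most senior (the proposer) first.

        The proposer votes for himself and a tie passes, so he needs
        (n - 1) // 2 other votes.  A pirate only votes yes for strictly
        more gold than he'd get once the proposer is overboard, because
        at equal gold he'd rather watch the execution.
        """
        if n == 1:
            return [gold]                  # a lone pirate keeps the lot
        fallback = divide(n - 1, gold)     # shares if this proposer dies
        votes_needed = (n - 1) // 2
        # Buy the cheapest votes: one coin more than each target's fallback.
        cheapest = sorted(range(n - 1), key=lambda i: fallback[i])[:votes_needed]
        shares = [0] * n
        for i in cheapest:
            shares[i + 1] = fallback[i] + 1
        shares[0] = gold - sum(shares)
        return shares

    print(divide(5, 100))   # [98, 0, 1, 0, 1]: A keeps 98, C and E get one coin each

This only works while there's enough gold to pay all of the (n - 1) // 2 bribes, i.e. up to roughly twice as many pirates as coins.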
Ian Stewart takes the calculation further, up past 200 pirates when the gold starts to run out, and finds amusing results: http://euclid.trentu.ca/math/bz/pirates_gold.pdf
But it occurred to me the standard solution possibly has another loophole.
As is traditional, game theory problems can be improved with probabilities. In the standard solution, A offers C and E one gold piece each. But the logic works just the same if he offers them a one-in-a-thousand chance of a gold piece instead. If they're trying to maximise their *certain* gold pieces, the original solution applies. But on the other hand, if they're just trying to maximise gold pieces, it's hard to say a 1/1000 chance isn't better than none, which is what they'll certainly get if they try B's mercy :)
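Putting rough numbers on that, assuming C and E will take any positive expected gold over a certain nothing:

    # Swap C's and E's whole coins for 1-in-1000 lottery tickets.
    p = 1 / 1000
    bribe_ev = p * 1                  # 0.001 of a coin each, still more than the
                                      # certain zero they'd get under B's proposal
    a_keeps_ev = 100 - 2 * bribe_ev   # about 99.998 coins for A in expectation
    print(bribe_ev, a_keeps_ev)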
no subject
Date: 2007-08-09 02:16 pm (UTC)
I always suspected that it would be forced to become probabilistic in the end anyway. In the above, #6 might get away with bribing either of #2 or #3, on the grounds that the one so bribed might consider a certain gp from #6 to beat a half-chance of a gp from #4. But then, they might not, and fortunately #6 doesn't have to stake his life on it, because #1 and #4 will definitely get nothing if they vote against #6, so they're better bets. But it's not at all clear to me that in all situations every pirate will always have enough people to bribe without relying on probability. Can #7 and above, for example, rely on #6 choosing to bribe one of #1 and #4?
You could eliminate the uncertainty by adding the rule that a pirate would always prefer to bribe a higher-numbered one given a choice, in which case #4 would bribe #3, #6 would bribe #4 and the game would stay deterministic forever. (Or they could always prefer to bribe lower-numbered pirates, of course, in which case #4 bribes #2 and #6 bribes #1.)