Continuity of existence
May. 10th, 2011 10:28 pm

A question oft posed in science fiction and amateur philosophy is what constitutes continuity of my existence. That is, when I say "I want to do blah to achieve blah", what counts as "I" for this purpose?
A typical science-fictional spectrum of options is something like:
1. Myself, 5 seconds from now
2. Myself, after I sleep
3. Myself, 20 years from now
4. If you destructively and exhaustively scanned my body at the atomic level and then reassembled it from new atoms so it functioned the same.
5. If you destructively and exhaustively scanned my body at the atomic (or maybe only neuron) level and then simulated it in a sufficiently accurate simulation in a really really accurate computer.
6. If you found the series of simple encodings of the successive simulation states of #5 embedded in the binary digits of pi[1].
Greg Egan and Schlock Mercenary provide decent examples of several.
Now, there are fairly good reasons for saying #1 clearly counts as "me"[2] and #6 clearly doesn't. So although each consecutive pair looks as if it must get the same answer, there must be a break somewhere in the middle. I don't expect this intuition to change the answer to #1 or #6.
However, nor do I think the question is worthless. I think asking "Does a shrimp have the property of floogle?" is logically worthless (except insofar as it sheds light on how we ask questions), since it inherently doesn't have a meaning. But I think asking "If we meet aliens, should they have human rights?" is relevant: even though we don't expect to meet any aliens in our lifetimes, our concept of human rights ought to extend far enough to give an answer ("yes", "no", "under these circumstances...", or "your question is malformed because..."). It'd be bad to spend ALL your time on those questions, but I do think it's reasonable to think about them.
I expect some people to dismiss the whole question, since they have a convincing argument for #1, or against #6, and think that settles the matter[3]. If the only interest in the link between them was the answer, I'd concede that the question is not very useful. But I think pinpointing where the flaw is, if anywhere, is useful in understanding the concepts.
To me, the break seems to come between #5 and #6. Greg Egan expounded something like #6 in Permutation City, and while I couldn't refute it, I was sure it didn't count. Whereas #5, while practically ridiculous, does seem conceptually possible.
However, I think there's an axis oft glossed over in this discussion, which is how many copies of you there are. If there's just one (as when you awake from sleep), it's reasonable to think of that as you. But when this appears in fiction, most stories have a conveniently simplifying premise that ensures there is only one, or in some rare cases, two "copies". Stories that examine cases with more, even tangentially, are rare.
I think this is one of the hidden flaws in #6. It's the same problem as having many parallel worlds. If there are one or two parallel worlds, we can treat the people there as people. If there are ALL POSSIBLE parallel worlds, then all of our morality is a big division by zero, since whatever we do, there'll be some parallel world where the outcome we hoped for happened, and another where the outcome we feared did. So we need a new metric for "how does what we do matter?" Possibly one based on honour and righteousness, rather than utilitarianism, would give answers (whether we liked them or not...)
But even #5 and #4 suffer from the same problem. If there can be multiple copies, do I just arbitrarily designate one of them as "me"? Or do something else? Or give up on treating myself differently to other people in moral systems?
[1] This is probably certain and certainly probable.
[2] You can argue otherwise, but I submit most people agree we should at least act as if we have a vested interest in our three-second ahead selves, which is what I mean.
[3] Indeed, I feel very defensive about mentioning it at all, since it's easy to assert that someone teleported or uploaded is "obviously" different, and hence that the original person simply committed suicide. I agree that argument is apparently convincing. But to me, so is the argument that it's practically the same as the previous case on the list. So if you only know one of those arguments, you can feel you know "the" answer. But because I'm thinking of both, I'm stuck in uncertainty.
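As an aside on footnote [1]: the "certainly probable" part rests on the conjecture, widely believed but unproven, that pi is a normal number, in which case every finite digit string (including any fixed encoding of a simulation state) appears somewhere in its expansion. Below is a minimal sketch of what "finding" one would mean, assuming that conjecture, using decimal rather than binary digits for convenience, and with the encoding itself left entirely hypothetical.

```python
# Toy illustration only: IF pi is normal (conjectured, not proven), every
# finite digit string appears somewhere in its expansion, so "finding an
# encoding of a simulation state in pi" is just a matter of looking far
# enough. The encoding scheme here is hypothetical.
from mpmath import mp

def find_in_pi(target: str, digits: int = 100_000) -> int:
    """Index of the digit string `target` in the first `digits` decimal
    digits of pi, or -1 if it doesn't appear in that prefix."""
    mp.dps = digits + 10                              # a little headroom
    pi_digits = mp.nstr(+mp.pi, digits).replace("3.", "", 1)
    return pi_digits.find(target)

# Short strings turn up quickly; anything as long as a serialised mind-state
# would need an utterly astronomical prefix, which is rather the point.
print(find_in_pi("1415"))     # 0: pi starts 3.1415...
print(find_in_pi("999999"))   # the run of six 9s around decimal place 762
```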
no subject
Date: 2011-05-11 12:47 pm (UTC)

Yes, exactly that. Someone's link to lesswrong on LJ touched on it too, and it was mentioned in The Prestige: it's not clear exactly how to think of it, but thinking of it like a 50/50 chance is better than most ways, even though that seems wrong. (Although, considering Peter's point, if you consider the "non"-copied you to be "the" you, maybe you should just expect to be him, and let the other cope as best he can...)
no subject
Date: 2011-05-11 12:57 pm (UTC)

I definitely disagree with that! That way lies callous self-copying: if you act on the assumption that you're never going to find 'you' were the copy, and that whatever happens to the copy is Somebody Else's Problem, you'll be that much more willing in the first place to create copies to do unpleasant jobs you don't want to do yourself, and that doesn't seem like a good direction to be heading in.
(Practically in the objective sense, as well as morally and practically in the subjective-self-interest sense. If you spawn a new you to do the tedious chores and do so confidently expecting not to 'become' the copy, then the copy's first reaction will be to be extremely unpleasantly surprised, which doesn't bode well for its subsequent cooperation with your original plan!)
no subject
Date: 2011-05-11 02:55 pm (UTC)

And from that, you get callously-copied people who escape into society at large, who write angsty autobiographies on DW, who go on copied-people rights marches; you end up with huge amounts of infighting between copied-people who are anti-copying and copied-people who self-copy in a copy-positive sort of way; you get kids who think it would be terribly romantic to be a copied person and then end up disappointed when they turn out to be the original; you get people in the software and photocopying industries who find the terminology they have been using for decades is suddenly un-PC; it would just be a mess.
no subject
Date: 2011-05-11 09:46 pm (UTC)

I agree in general it's probably not wise (or ethical?), but it reminds me of thoughts I had on living in a virtual world: that you probably _could_ use yourself as your own spam filter. You'd fork yourself a few hundred times, leaving out the short-term memory and anything that would actually change the simulation, allow each copy to process one email, mark the ones where the copy found it worth noticing, and then try to merge back together the copies that thought something was worthwhile (roughly the shape sketched below). In some sense, 9/10 of you are committing suicide[1] in favour of the rest of you, but also, it seems reasonable...
[1] For the sake of the plot, I imagined it would be possible but rare for one to revolt, but you'd probably just merge him back in, or give him his own processing slice to live in. But I admit that was purely a plot-driven assumption.
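For what it's worth, the fork-and-filter idea above has a simple shape when written out. This is a purely conceptual sketch under the comment's own assumptions: MindState, fork_copy and judge_email are hypothetical stand-ins, and nothing here corresponds to any real system.

```python
# Conceptual sketch of "use forked copies of yourself as a spam filter".
# All names are hypothetical placeholders; this only makes the proposed
# control flow explicit: fork one copy per email, keep each copy's one-bit
# verdict, discard everything else.
from dataclasses import dataclass

@dataclass
class MindState:
    memories: list            # long-term state shared by every copy

def fork_copy(state: MindState) -> MindState:
    # Each copy starts from the same snapshot, minus anything that would
    # let it diverge from the original in ways that matter.
    return MindState(memories=list(state.memories))

def judge_email(copy: MindState, email: str) -> bool:
    # Hypothetical: the copy reads exactly one email and reports whether
    # it was worth the original's attention.
    return "urgent" in email.lower()      # stand-in for actual judgement

def filter_inbox(original: MindState, inbox: list) -> list:
    worthwhile = []
    for email in inbox:
        copy = fork_copy(original)        # one copy per email
        if judge_email(copy, email):
            worthwhile.append(email)
        # Copies that found nothing are discarded; the ones that did are
        # notionally "merged back" by keeping only what they flagged.
    return worthwhile

print(filter_inbox(MindState(memories=[]), ["URGENT: reply now", "win a prize"]))
```

The point of the structure is that every copy starts from the same snapshot and only its verdict survives, which is what makes the "merge back together" step even conceivable.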