Continuity of existence
May. 10th, 2011 10:28 pm
A question oft posed in science fiction and amateur philosophy is what constitutes continuity of my existence. That is, when I'm saying "I want to do blah to achieve blah" what counts as "I" for this purpose?
A typical science-fictional spectrum of options is something like:
1. Myself, 5 seconds from now
2. Myself, after I sleep
3. Myself, 20 years from now
4. If you destructively and exhaustively scanned my body at the atomic level and then reassembled it from new atoms so it functioned the same.
5. If you destructively and exhaustively scanned my body at the atomic (or maybe only neuron) level and then ran a sufficiently accurate simulation of it on a sufficiently powerful computer.
6. If you found the series of simple encodings of the successive simulation states of #5 embedded in the binary digits of pi[1].
Greg Egan and Schlock Mercenary provide decent examples of several.
Now, there are fairly good reasons for saying #1 clearly does count[2] and #6 clearly doesn't. So though each consecutive pair looks as if it must have the same answer, there must be a break somewhere in the middle. I don't expect this intuition to change the answer to #1 or #6.
However, nor do I think the question is worthless. I think asking "Does a shrimp have the property of floogle" is logically worthless (except insofar as it sheds light on how we ask questions) since it inherently doesn't have a meaning. But I think asking "If we meet aliens, should they have human rights" is relevant because even though we don't expect to meet any aliens in our lifetimes, I think our concept of human rights ought to extend enough to be able to give an answer ("yes", "no", "under these circumstances...", or "your question is malformed because..."). It'd be bad to spend ALL your time on those questions, but I do think it's reasonable to think about them.
I expect some people to dismiss the whole question, since they have a convincing argument for #1, or against #6, and think that settles the matter[3]. If the only interest in the link between them was the answer, I'd concede that the question is not very useful. But I think pinpointing where the flaw is, if anywhere, is useful in understanding the concepts.
To me, the break seems to come between #5 and #6. Greg Egan expounded something like #6 in Permutation City, and while I couldn't refute it, I was sure it didn't count. Whereas #5, while practically ridiculous, does seem conceptually possible.
However, I think there's an axis oft glossed over in this discussion, which is how many copies of you there are. If there's just one (as when you awake from sleep), it's reasonable to think of that as you. But when this appears in fiction, most stories have a conveniently simplifying premise that ensures there is only one, or in some rare cases, two "copies". Those examining cases with more, even tangentially, are rare.
I think this is one of the hidden flaws in #6. It's the same problem as having many parallel worlds. If there's one or two parallel worlds, we can treat the people there as people. If there's ALL POSSIBLE parallel worlds, then all of our morality is a big division by zero, since whatever we do, there'll be some parallel world where the outcome we hoped for happened, and some where the outcome we feared did. So we need a new metric for "how does what we do matter?" Possibly one based on honour and righteousness, rather than utilitarianism, would give answers (whether we liked them or not...)
But even #5 and #4 suffer from the same problem. If there can be multiple copies, do I just arbitrarily designate one of them as "me"? Or do something else? Or give up on treating myself differently to other people in moral systems?
[1] This is probably certain and certainly probable.
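(A note on footnote [1], added for reference: as I understand it, the claim leans on π being normal in base 2, which is widely conjectured but unproven, and which would guarantee that every finite bit string, including any encoding of a simulation state, appears somewhere in the binary expansion. A sketch of what base-2 normality asserts:)

```latex
% Base-2 normality of pi (conjectured, not proven): every bit string s of
% length k occurs in the binary expansion with limiting frequency 2^{-k}.
\[
\forall k \ge 1,\ \forall s \in \{0,1\}^{k}:\quad
\lim_{N \to \infty}
\frac{\#\{\, i \le N \;:\; b_i b_{i+1} \cdots b_{i+k-1} = s \,\}}{N}
\;=\; 2^{-k},
\]
\[
\text{where } b_1 b_2 b_3 \cdots \text{ is the binary expansion of the fractional part of } \pi.
\]
```

A positive limiting frequency would mean every such string occurs, indeed infinitely often; presumably that is the sense in which it's "certainly probable": almost every real number is normal, but for π it remains a conjecture.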
[2] You can argue otherwise, but I submit most people agree we should at least act as if we have a vested interest in our five-second-ahead selves, which is what I mean.
[3] Indeed, I feel very defensive about mentioning it at all, since it's easy to assert that someone teleported or uploaded is "obviously" different, and hence that the original person simply committed suicide. I agree that argument is apparently convincing. But to me, so is the argument that it's practically the same as the previous case on the list. So if you only know one of those arguments, you can feel you know "the" answer. But because I'm thinking of both, I'm stuck in uncertainty.
no subject
Date: 2011-05-11 07:01 am (UTC)
An interesting thing to do at this point would be to read the literature on "hyperbolic discounting", starting with wikipedia and then doing a pubmed search. In particular, with regards to self-control, addiction, impulsivity, cutting up your credit card and throwing out the alcohol/snacks, one-drink-is-a-thousand-drinks etc. Basically all this is about your relationship with future selves on different timescales. In particular hyperbolic discounting can give an explanation of why a current-self might want to sabotage a medium-term-future-self for the good of a long-term-future-self, for fear of what said medium-term-future-self might do on behalf of his short-term-future-self when his time comes.
Basically I think the conclusion is that people behave as if "is the same person as" is not a binary-valued relation, and isn't necessarily transitive either.
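Not part of the comment, but a minimal numeric sketch of the preference reversal being described, assuming the standard hyperbolic form V = A / (1 + kD); the reward sizes, delays and k are invented for illustration:

```python
# Hyperbolic discounting: the present value of a reward of size A delayed by
# D time units is modelled as A / (1 + k*D).  The value of k, the reward
# sizes and the delays below are made-up numbers for illustration.

def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

small_soon = (10, 1)    # 10 units, 1 day away
large_late = (30, 10)   # 30 units, 10 days away

# Judged well in advance (both delays pushed back by 20 days), the larger,
# later reward looks better:
far = [hyperbolic_value(a, d + 20) for a, d in (small_soon, large_late)]
print(far)   # [0.4545..., 0.9677...] -> prefer the large, late reward

# Judged at the last minute, the small, soon reward wins: a preference reversal.
near = [hyperbolic_value(a, d) for a, d in (small_soon, large_late)]
print(near)  # [5.0, 2.7272...] -> prefer the small, soon reward
```

An exponential discounter's ranking of two fixed options never flips as both delays shrink by the same amount, so this reversal (and the pre-commitment tactics in the comment, like throwing out the snacks) is the signature of the hyperbolic form.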
no subject
Date: 2011-05-11 08:24 am (UTC)
Oddly, the thing I found most implausible about Permutation City was why it hadn't gone to step #7: "successive states of a simulation of your mind exist somewhere in the expansion of pi, whether you've found them or not". It seemed disturbingly anthropocentric to say that it made a difference that somebody had computed the starting state of their new universe in deciding whether it subsequently "found itself in the dust". Surely with universes, the important thing is simply existence? If there are patterns in the dust that add up to that universe, they're there whether or not anyone looks at them.
(Of course, PC had to have that property because it became critical to the plot at the end, in that the privileged role of conscious observation was also the force driving the weird stuff that happened late in the book. But even so, that was the bit I found most unsatisfying about the story; I liked PC mostly for its setup, and could have lived without the ending.)
But even #5 and #4 suffer from the same problem. If there can be multiple copies, do I just arbitrarily designate one of them as "me"? Or do something else? Or give up on treating myself differently to other people in moral systems?
My feeling on this has always been that the concept of "me", and indeed of identity in general, is like those mathematical concepts that are only well defined because of some theorem implying they're single-valued. For instance, we can only talk about the integers mod n because of the theorem that says that the residues of ab and a+b mod n depend only on the residues of a and b; if that were not the case, the concept would be completely incoherent. So, similarly, that concept of "identity" we have in our current society is based on the premise that lots of more basic properties (e.g. "A shares memories with B") occur in one-to-one correspondence – but unlike a mathematical well-definedness theorem, this fact is contingent rather than necessary, since it arises from the extreme technical difficulty of copying minds rather than the complete impossibility, and it's possible to imagine that becoming feasible in future. At which point, I think, the only rational approach will be to abandon the single concept of identity completely, and instead switch to a model in which we keep in mind which of the various more basic properties we mean at any one time, and talk about that one.
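To spell out the well-definedness fact the mod-n analogy leans on (standard number theory, added here for reference rather than anything from the comment itself):

```latex
% Addition and multiplication mod n are well defined on residues:
% congruent inputs give congruent outputs.
\[
a \equiv a' \ \text{and}\ b \equiv b' \pmod{n}
\;\Longrightarrow\;
a + b \equiv a' + b' \ \text{and}\ ab \equiv a'b' \pmod{n},
\]
\[
\text{since } (a' + b') - (a + b) = (a' - a) + (b' - b)
\quad\text{and}\quad
a'b' - ab = a'(b' - b) + b(a' - a)
\]
\[
\text{are both multiples of } n.
\]
```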
no subject
Date: 2011-05-11 12:23 pm (UTC)
Thank you. Yes, that sounds like exactly the right way of thinking about it; I completely concur. So, as it happens, the concept of "me" is well defined for the world we live in, but we would in fact have to re-evaluate moral statements about it if we lived in a world with easy copying (or a many-worlds multiverse).
In which case, the question is, "what would we do in that case" and I don't know.
It seemed disturbingly anthropocentric to say that it made a difference that somebody had computed the starting state of their new universe in deciding whether it subsequently "found itself in the dust"
Yes, that's exactly what bothered me, and the bit I found hardest to believe (other than the idea of the anthropicity making the universe "melt"). I could see why he did it: as we agreed, almost all fiction like this needs to fudge something to make a reason why it matters, but it seemed to somewhat betray the premise. (The premise I also found... weird, but certainly couldn't refute :))
Surely ... a lot of people thought the obvious break was between #3 and #4
Well, I agree that it was an exaggeration to claim it was a _sequence_; I agree the ethics of teleporting are mostly orthogonal to the ethics of sleeping and growing old. But perhaps think of it as strong induction, rather than "normal" induction?
Some people watching (or in) Star Trek teleporters have a bad feeling that people's "souls" are being destroyed, but most secular science/science-fiction people seem not to. They think the teleporter is unlikely, but if it works as presented (i.e. glossing over the problem of whether it's possible to keep a copy), they accept it.
I think most of my friendslist, if taking a rationality quiz or similar, will automatically say "souls are only an arrangement of neurons, so teleportation is fine if it works"[1].
[1] Of course, the problem with the quiz I'm thinking of is that it SAYS the teleporter is 100% safe and the space ship to the same destination only 50% safe, but some people understandably simply refuse to accept that as a boundary condition. I think it would be more productive, if enquiring about people's souls, to be more specific about WHY you think something is safe (e.g. "you've seen it done billions of times and it's not gone wrong once" rather than "it's 100% safe and I'm not going to explain why", even if the second OUGHT to make sense as a hypothetical question.)
no subject
Date: 2011-05-11 12:39 pm (UTC)
There's a novel I read recently which amusingly subverts this point, but having said that I now can't say which without it being a spoiler :-(
no subject
Date: 2011-05-11 12:45 pm (UTC)
no subject
Date: 2011-05-11 09:39 am (UTC)
What may be interesting philosophically is whether, when these processes occur, some "me" "dies" and another "me" appears with memories of all the experiences up to but not including that "death", but my care-level about such philosophical questions is really very very small...
no subject
Date: 2011-05-11 10:31 am (UTC)
Supposing I hypnotise myself into thinking that I'm you, or hallucinate it as a response to drugs. Am I you? Are you me?
no subject
Date: 2011-05-11 11:36 am (UTC)
no subject
Date: 2011-05-11 11:56 am (UTC)
Am I not me?
(It occurs to me that I'm being a pedantic hairsplitter here - however, this entire thread is about pedantic hairsplitting. It's often useful to make a facetious obviously-wrong comment and then to analyse why exactly it's obviously wrong.)
no subject
Date: 2011-05-11 12:03 pm (UTC)
I agree that's a meaningless distinction; whether something happened in the middle or not, if it's not detectable to either "me", it doesn't make any difference.
But would you really include the "digits of pi naaths", even though every conceivable different Naath's life will be encoded there?
no subject
Date: 2011-05-11 12:09 pm (UTC)
no subject
Date: 2011-05-11 12:27 pm (UTC)
no subject
Date: 2011-05-11 12:07 pm (UTC)
Thought experiment.
In ten years' time, you'll be taken to the clone-o-mat, which basically acts as a matter photocopier. Someone will flip a coin, and two people will walk out, the original and the copy. No-one will know which was which.
One of those will be allowed to keep your bank balance, your job, your relationships, etc. The other will be spirited away somewhere to start a new life elsewhere. A team of top people will be assigned to making sure his experience is as average as possible (or fits some randomly determined outcome) - they will educate him or addle his brain as appropriate, give him an average bank balance and training for an average job, make him fitter or less fit to reach average fitness, etc. - basically, they will try their hardest to undo every life choice you made before then.
How do you feel about these two future people? Are you the one which gets to be you? Which one do you have a responsibility for? Would you be more motivated to make sensible life choices if you decided to identify with the future you that got to be you, than if you decided to identify with the future you that's physically continuous with the current you?
no subject
Date: 2011-05-11 12:25 pm (UTC)
For instance, many science-fictional copying mechanisms have a "you" that has physical continuity and one that doesn't, or a "you" who goes through the usual process and one who doesn't, so there's an inherent asymmetry.
no subject
Date: 2011-05-11 12:37 pm (UTC)
(Regardless, of course, of the question of souls and who gets which subjective experience; all that's required for this is that the subjective experience of the copied person looks at its recent memories and compares them with its current perception.)
no subject
Date: 2011-05-11 12:47 pm (UTC)
Yes, exactly that. Someone's link to LessWrong on LJ touched on it too, and it was mentioned in The Prestige: it's not clear exactly how to think of it, but thinking of it like a 50/50 chance is better than most ways, even though that seems wrong. (Although, considering Peter's point, if you consider the "non"-copied you to be "the" you, maybe you should just expect to be him, and let the other cope as best he can...)
no subject
Date: 2011-05-11 12:57 pm (UTC)
I definitely disagree with that! That way lies callous self-copying: if you act on the assumption that you're never going to find 'you' were the copy, and that whatever happens to the copy is Somebody Else's Problem, you'll be that much more willing in the first place to create copies to do unpleasant jobs you don't want to do yourself, and that doesn't seem like a good direction to be heading in.
(Practically in the objective sense, as well as morally and practically in the subjective-self-interest sense. If you spawn a new you to do the tedious chores and do so confidently expecting not to 'become' the copy, then the copy's first reaction will be to be extremely unpleasantly surprised, which doesn't bode well for its subsequent cooperation with your original plan!)
no subject
Date: 2011-05-11 02:55 pm (UTC)
And from that, you get callously-copied people who escape into society at large, who write angsty autobiographies on DW, who go on copied-people rights marches; you end up with huge amounts of infighting between copied-people who are anti-copying and copied-people who self-copy in a copy-positive sort of way; you get kids thinking it would be terribly romantic to be a copied person and then ending up disappointed when they turn out to be the original; you get people in the software and photocopying industries who find the terminology they have been using for decades is suddenly un-PC; it would just be a mess.
no subject
Date: 2011-05-11 09:46 pm (UTC)
I agree in general it's probably not wise (or ethical?), but it reminds me of thoughts I had on living in a virtual world: that you probably _could_ use yourself as your own spam filter. You fork yourself a few hundred times, leaving out the short-term memory and anything that would actually change the simulation, but allow each copy to process one email each, mark the ones where the copy found it worth noticing, and then try to merge back together the copies who thought something was worthwhile. In some sense, 9/10 of you are committing suicide[1] in favour of the rest of you, but also, it seems reasonable...
[1] For the sake of the plot, I imagined it would be possible but rare for one to revolt, but you'd probably just merge him back in, or give him his own processing slice to live in. But I admit that was purely a plot-driven assumption.
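A toy sketch, entirely mine, of the fork-and-filter pattern in the comment above; the Reader class, finds_interesting and the keyword matching are invented stand-ins, and copy.deepcopy of a small preferences object is obviously not a claim about how forking a mind would work:

```python
# Toy sketch of the "use copies of yourself as a spam filter" idea:
# fork N throwaway evaluators, give each one message, keep the messages
# any evaluator flagged as worth noticing, then discard the forks.

import copy

class Reader:
    def __init__(self, interests):
        self.interests = set(interests)

    def finds_interesting(self, message):
        # Each fork reads exactly one message and reports a single verdict.
        return any(word in self.interests for word in message.lower().split())

def filter_with_forks(original, inbox):
    verdicts = []
    for message in inbox:
        fork = copy.deepcopy(original)   # spawn a short-lived copy of "you"
        verdicts.append(fork.finds_interesting(message))
        del fork                         # the copy is then discarded
    return [m for m, keep in zip(inbox, verdicts) if keep]

me = Reader(interests=["philosophy", "identity", "pi"])
inbox = ["Cheap pills!!!", "New post on identity and copying", "Lunch on friday?"]
print(filter_with_forks(me, inbox))  # -> ['New post on identity and copying']
```

One simplification worth flagging: rather than "merging back together the copies who thought something worthwhile", this just collects each fork's one-bit verdict and discards the fork, which sidesteps the merge problem the comment (and its footnote about revolting copies) is gesturing at.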