http://slatestarcodex.com/2017/08/09/the-lizard-people-of-alpha-draconis-1-decided-to-build-an-ansible/
Scott wrote another short story. As is usually the case, it's intriguing but there's also much to critique :) The aliens in the story develop great technology, and build an ansible out of negative average preference utilitarianism.
I have a lot of different thoughts inspired by this story. I don't think it's the sort of story where knowing what happens is a problem for reading it, but I will cut a detailed discussion just in case.
Alice in Wonderland stories
It's not the most helpful name, but there's a sort of story I think of as Alice-in-Wonderland stories, after the bits where Alice gets very frustrated with characters who blithely announce how things work in a superficially reasonable way that actually makes no sense.
I used to HATE that, because I hated things making no sense, and people just accepting that, even in parody. But clearly many people found the parody very cathartic: having a venue where you COULD call it out helped them recognise it in the real world. And now I quite enjoy them.
Although I'm always disappointed when I can't argue the characters out of it.
I think the story is written from a place where the author and most of the readers assume you could not ACTUALLY build an ansible out of an abstract moral theory, but are interested in playing the concept out in a story anyway. That's somehow different from a story where you actually COULD (that version would have experiments showing that the various assumptions below are statistically provable in that fictional world).
Specifics on why it doesn't work
I think everyone recognises THAT it doesn't work, but Scott melds together several things, enough that I found it interesting to actually list all the things that don't work.
1. "The arc of the moral universe is long, but it bends toward justice."
This quote is from an essay by the Unitarian minister Theodore Parker, famously quoted by Martin Luther King in a speech. The gist is something like: justice will prevail, or, in that original context, a society based on slavery is unstable.
Scott also quoted it extensively in his serial novel Unsong, IIRC, which is where I was most familiar with it from.
However, I think it's clear that even though the story pretends it's a reliable law, it's really more of a mix of aspiration and best guess. There's some truth to it, but it's not "it will just happen": it's something people fought to *make* happen.
2. "is long..."
Specifically calling out the middle of the previous quote, the story ends with the observation that if justice eventually happens, there's no reason to expect it to happen faster than light. Indeed, there's no way to tell if it's happened yet or not. If things keep getting worse, when do you cut your losses and decide justice won't prevail eventually?
3. Utilitarianism.
Even if there is some "correct" moral framework (there may be a best one, but I'm doubtful there's a "correct" one) which describes the best thing to do, your actions can only be based on knowledge you *have*. Utilitarianism usually depends on exactly that: there might be a best-in-theory action, but the best-for-you action is to do what you think is most likely to be correct based on information you have and can acquire, not to magically know the future consequences of your actions when they're not really foreseeable. (If you get it wrong repeatedly, you probably DO need to try harder, but if there's no way for you to know, and you take that uncertainty into account, that's not your fault.)
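One standard way to formalise "best-for-you" (my illustration, with made-up numbers, not anything from the story): pick the action with the highest *expected* utility under your current beliefs, even when some other action would turn out better with hindsight.

```python
# Hypothetical beliefs about the distant, unknowable situation.
beliefs = {"fine": 0.7, "dire": 0.3}

# Hypothetical payoffs for each action in each state of the world.
utility = {
    "act_cautiously": {"fine": 5, "dire": 4},
    "act_boldly":     {"fine": 10, "dire": -20},
}

def expected_utility(action):
    """Average the payoff over the states, weighted by belief."""
    return sum(p * utility[action][state] for state, p in beliefs.items())

best = max(utility, key=expected_utility)
print(best, {a: expected_utility(a) for a in utility})
# act_cautiously scores 5*0.7 + 4*0.3 = 4.7; act_boldly scores
# 10*0.7 - 20*0.3 = 1.0. If "fine" turns out true, acting boldly was
# better with hindsight -- but you couldn't have known, which is the point.
```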
This is why the twist is not really a twist: insofar as correct moral decisions at place A depend on the situation light-years away at place B, they do so BY the information being carried from B to A in some physical, information-carrying way, presumably a light-speed message.
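To put the same point in standard light-cone terms (ordinary special relativity, not anything the story establishes): an event at B can bear on a decision at A only if there was time for a signal to cross the gap.

```latex
% B can influence a decision at A only if B lies in A's past light cone:
\[
  t_A - t_B \;\ge\; \frac{d(A, B)}{c}
\]
% Any moral "update" at A that tracked B faster than this would itself be
% a faster-than-light channel -- exactly the loophole the ansible exploits.
```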
Variants of utilitarianism
The story mentions negative something-something utilitarianism. I haven't followed the discussions as far as that particular theory, although it does seem to be an attempt to answer the most common gaps in more common sorts of utilitarianism.
My rule of thumb is: I haven't heard any moral philosophy which gives coherently sensible answers to questions involving increasing the potential number of people existing. So if there is one, I want to make sure it doesn't have any of the obvious flaws before believing what it says. And in practice, if you just try to figure out the answer to each of those questions separately, you usually get OK answers.
Simultaneity
In fact, I had suspicions earlier, when the moral philosophy involved adding up something about people all over the universe at once. "At once." The story didn't say at once, but that's what it implied. And while it may be clear enough what that means in practice, if your moral philosophy contains the phrase "relative to the reference frame of the centre of gravity of the Local Group of galaxies", it's clear it's... been a bit bodged to fit.
OTOH, I always criticise simultaneity problems when they show up, because they usually indicate something isn't completely self-consistent; but as an approximation there usually isn't a problem. If you assume everyone you care about is roughly stationary relative to each other (i.e. on planets in the Local Group, or in this galaxy specifically), you can treat that shared reference frame as defining "the same time" and it will mostly be good enough.
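A back-of-envelope check of "good enough" (my numbers: an assumed typical ~600 km/s relative velocity within the Local Group, and roughly the Milky Way-Andromeda distance):

```python
# Relativity of simultaneity: events simultaneous in one frame are offset
# by dt' = gamma * v * dx / c^2 for an observer moving at relative speed v.

C = 299_792.458        # speed of light, km/s

v = 600.0              # km/s: assumed typical relative speed in the Local Group
dx_ly = 2_500_000      # light-years: roughly the Milky Way-Andromeda distance

beta = v / C
gamma = 1 / (1 - beta**2) ** 0.5      # ~1.000002 here, essentially negligible
offset_years = gamma * beta * dx_ly   # dx in light-years makes dx/c come out in years

print(f"disagreement about 'now': ~{offset_years:,.0f} years")
print(f"fraction of the light-travel time: {offset_years / dx_ly:.2%}")
# ~5,000 years, about 0.2% of the 2.5-million-year light crossing time:
# small compared to the scales involved, which is why the shared-frame
# approximation mostly works at these speeds.
```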
If you take that assumption, but allow some FTL or teleporting between those places... you must be able to construct time-travel paradoxes somehow (because that's how lightspeed works). But under what circumstances would they actually happen?
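For what it's worth, the standard construction (the "tachyonic antitelephone", a textbook relativity argument rather than anything from the story) pins down the circumstances:

```latex
% Send a signal at speed u > c in frame S; it covers
% (dt, dx) = (dx/u, dx). Lorentz-transform to a frame S' moving at
% subluminal speed v relative to S:
\[
  \Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right)
            = \gamma\,\Delta x \left(\frac{1}{u} - \frac{v}{c^2}\right)
  < 0 \quad\text{whenever}\quad v > \frac{c^2}{u}.
\]
% Since u > c implies c^2/u < c, some subluminal observer always sees the
% signal arrive before it was sent. Chain two such signals between parties
% in relative motion and the reply arrives before the original question:
% the paradox needs FTL *plus* relative motion between the endpoints, not
% FTL alone -- which is why the "roughly stationary" caveat above matters.
```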
Digressions
"Lizard people" has some history as an example of a hypothetical alien race, but also some history in racist conspiracy theories. Should I avoid it as a term in the first context, or not worry about it?