http://slatestarcodex.com/2017/08/09/the-lizard-people-of-alpha-draconis-1-decided-to-build-an-ansible/

Scott wrote another short story. As is usually the case, it's intriguing but there's also much to critique :) The aliens in the story develop great technology, and build an ansible out of negative average preference utilitarianism.

I have a lot of different thoughts inspired by this story. I don't think it's the sort of story where knowing what happens is a problem for reading it, but I will cut a detailed discussion just in case.

Utilitarianism is a very useful way of thinking, and I think an advance over my previous conceptions of morality. It might even be the right ideal to incorporate into, say, a non-sapient robot. It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems with it:

0. As normally stated, it's a bit unclear whether intentions or results are what matter. I normally interpret it as doing whatever is most likely to improve happiness (in maths terms: choosing the action with the greatest expected happiness; there's a rough sketch of what I mean at the end of this list).

1. It generally doesn't help in thought experiments like Simont's.

2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10, and our ratings have different distributions, can you put them on a common scale?

3. Do you take into account people's feelings, like satisfaction at helping someone else? Or just your own? Do you consider contributions to far-future happiness, e.g. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to put on that.

4. If you use these arguments you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to know the answer already to know how much weight to put on the more intangible benefits.

5. I don't like the ruthlessness. According to the statement, if arresting one (innocent) person as an example reduces crime and makes life better for lots of people, that's good. Possibly it's bad in the long term, but you can't show that. And yet I don't agree with doing it.

6. What about death? Or, for that matter, not being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?

7. What about animals? Do they have the same consideration as humans? Any at all? I say somewhere in between, but how do you quantify that?

8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending *all* of it to Africa. But most people don't; should a moral system help them choose between the other options, or just sit there like a broken compass pointing south-east?
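To pin down what I was gesturing at in points 0 and 2, here's a toy sketch. It isn't any official utilitarian calculus; the function names, the probabilities, the happiness numbers and the z-scoring trick are all things I made up purely for illustration. The idea is: pick the action with the greatest expected happiness, and rescale two people's 1-10 ratings so their distributions line up.

    from statistics import mean, stdev

    def expected_happiness(outcomes):
        # outcomes: list of (probability, happiness) pairs for one action
        return sum(p * h for p, h in outcomes)

    def best_action(actions):
        # actions: dict mapping an action name to its (probability, happiness) pairs
        return max(actions, key=lambda a: expected_happiness(actions[a]))

    def common_scale(ratings):
        # z-score one person's 1-10 ratings so two people's distributions can
        # (allegedly) be compared -- this is exactly the step I don't trust
        m, s = mean(ratings), stdev(ratings)
        return [(r - m) / s for r in ratings]

    # Made-up numbers purely for illustration:
    actions = {
        "keep promise":  [(0.9, 6), (0.1, 2)],   # expected happiness 5.6
        "break promise": [(0.5, 9), (0.5, 1)],   # expected happiness 5.0
    }
    print(best_action(actions))            # -> keep promise
    print(common_scale([7, 8, 8, 9]))      # someone who always rates high
    print(common_scale([3, 5, 4, 6]))      # someone who always rates low

Even in this toy form you can see where the problems bite: where do the probabilities and happiness numbers come from, and why should a z-score make my 7 comparable to yours?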
