Responses to Utilitarianism
Aug. 2nd, 2006 12:44 pm
Utilitarianism is a very useful way of thinking; I think it an advance over my previous conceptions. It might even be an ideal (say, to incorporate into a non-sapient robot). It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems:
0. As normally stated, it's a bit unclear whether intentions or results matter. I normally interpret it as doing what is most likely to improve happiness (in maths terms: choosing the action with the greatest expected happiness).
1. It generally doesn't help in thought experiments like Simont's.
2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10, and our ratings have different distributions, can you put them on a common scale?
3. Do you take into account other people's feelings, like satisfaction at helping someone else? Or just your own? Do you count contributions to far-future happiness, e.g. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to give that.
4. If you use these arguments you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to know the answer already to know how much weight to put on the more intangible benefits.
5. I don't like the ruthlessness. According to the statement, if you arrest one (innocent) person as an example, which reduces crime and makes life better for lots of people, that's good. Possibly that's bad in the long term, but you can't show that. And yet I don't agree with doing it.
6. What about death? Or, for that matter, not being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?
7. What about animals? Do they get the same consideration as humans? Any at all? I say somewhere in between, but how do you quantify that?
8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending it *all* to Africa. But most people don't; should a moral system let them choose between other options, or just stay like a broken compass pointing south-east?
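Points 0 and 2 can at least be made concrete. A minimal sketch of "greatest happiness expectation", plus one crude (and philosophically question-begging) way to put two people's 1-10 ratings on a common scale; every number here is invented for illustration:

```python
import statistics

def expected_happiness(outcomes):
    """Expected happiness of an action: sum over its possible outcomes
    of probability * happiness (all numbers here are made up)."""
    return sum(p * h for p, h in outcomes)

# Point 0: pick the action with the greatest happiness expectation.
donate = [(0.9, 5.0), (0.1, -2.0)]   # usually helps, occasionally backfires
do_nothing = [(1.0, 0.0)]
best = max([donate, do_nothing], key=expected_happiness)

def to_common_scale(ratings):
    """Point 2: standardise each person's ratings against their own
    mean and spread (z-scores). This only papers over the real
    question of whether the two scales are comparable at all."""
    mu = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    return [(r - mu) / sd for r in ratings]
```

The z-score trick makes two distributions numerically comparable, but it deliberately throws away exactly the information in dispute: whether one person's "8" really is more happiness than another's.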
no subject
Date: 2006-08-02 09:06 pm (UTC)
I might hope I would be selfless enough to volunteer, but I don't think I want to force someone else to. But by a utilitarian argument you should.
no subject
Date: 2006-08-03 10:44 am (UTC)
And the second is where it is probabilistic, such as having a shoot-to-kill policy on suspected bombers, knowing that X percent of the time you will shoot dead an innocent shopper in the subway, and Y percent of the time you will save hundreds of commuters from horrific injuries.
The latter isn't nice, but one can live with it, given a sufficiently extreme reward ratio.
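The probabilistic trade-off above can be written as toy expected-value arithmetic. A minimal sketch, where the probabilities and the harm/benefit weights are entirely invented for illustration:

```python
def policy_value(p_innocent, harm_innocent, p_save, benefit_saved):
    """Expected utility of the policy per incident: chance of saving
    commuters times the benefit, minus chance of shooting an innocent
    shopper times the harm. All four numbers are hypothetical."""
    return p_save * benefit_saved - p_innocent * harm_innocent

# With a sufficiently extreme reward ratio the expectation comes out
# positive; shift the numbers and the same rule condemns the policy.
generous = policy_value(0.01, 1000.0, 0.2, 500.0)
harsh = policy_value(0.2, 1000.0, 0.01, 500.0)
```

The whole argument then hangs on which numbers you feed in, which is point 4 of the original post in miniature.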
The former is barbaric, and when legislating, you have to decide the effects on society. A case in point is the way newspapers crucify innocent celebrities in order to entertain millions of readers.
no subject
Date: 2006-08-04 05:00 pm (UTC)
> The former is barbaric, and when legislating, you have to decide the effects on society.
Hypothetically, if there are no other effects (approximately no damage to society from people finding out, no great sense of outrage from the victim knowing, etc.), then, when comparing the suffering imposed on someone through no choice of their own to the lesser pleasure given to many people, do you or do you not take into account the fact that you're imposing it?
I'll leave the various other details for the next reply.
no subject
Date: 2006-08-04 05:01 pm (UTC)