Responses to Utilitarianism
Aug. 2nd, 2006 12:44 pm
Utilitarianism is a very useful way of thinking, and I think an advance over my previous conceptions. It might even be an ideal (say, to incorporate into a non-sapient robot). It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems:
0. As normally stated, it's a bit unclear whether intentions or results matter. I normally interpret it as doing what is most likely to improve happiness (math: choosing the action with the greatest expected happiness; see the sketch after this list).
1. It generally doesn't help in thought examples like Simont's.
2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10, and our ratings have different distributions, can you put them on a common scale?
3. Do you take into account other people's feelings, like satisfaction at helping someone else? Or just your own? Do you consider contributions to far-future happiness, e.g. that not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to put on that.
4. If you use these arguments you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to know the answer already to know how much weight to put on the more intangible benefits.
5. I don't like the ruthlessness. According to the statement, if arresting one (innocent) person as an example reduces crime and makes life better for lots of people, that's good. Possibly that's bad in the long term, but you can't show that. And yet I don't agree with doing it.
6. What about death? Or, for that matter, not being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?
7. What about animals? Do they get the same consideration as humans? Any at all? I say somewhere in between, but how do you quantify that?
8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending it *all* to Africa. But most people don't do that; should a moral system let them choose between other options, or just stay like a broken compass pointing south-east?
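To make point 0 concrete, here's a minimal sketch of what "greatest happiness expectation" could mean as a decision rule. Everything in it is hypothetical: the actions, outcomes, probabilities and happiness scores are invented for illustration, not taken from any real theory of measurement.

```python
# Toy expected-happiness decision rule (illustrative only).
# Each action maps to a list of (probability, happiness) outcomes;
# all numbers below are invented for the example.

actions = {
    "donate to charity": [(0.9, 5.0), (0.1, 0.0)],  # usually helps, sometimes wasted
    "buy a beer":        [(1.0, 1.0)],              # small but certain pleasure
    "do nothing":        [(1.0, 0.0)],              # inaction is just another action
}

def expected_happiness(outcomes):
    """Probability-weighted sum of happiness over possible outcomes."""
    return sum(p * h for p, h in outcomes)

best = max(actions, key=lambda a: expected_happiness(actions[a]))
for action, outcomes in actions.items():
    print(f"{action}: {expected_happiness(outcomes):.2f}")
print("Utilitarian choice:", best)
```

Of course, the hard parts raised in points 2-4 (quantifying happiness, comparing it between people, weighting the intangible and far-future effects) are exactly the numbers this sketch pretends to already know.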
no subject
Date: 2006-08-02 08:25 pm (UTC)
Ah, now that's an interesting question.
Utilitarianism only tells you which, out of several actions in a situation, are better than others (and maybe by how much). To Utilitarianism, inactivity or 'not getting involved' is just another action (there's a sketch of this below). It doesn't have a built-in concept of 'neutrality', of a level of 'goodness' of action that is acceptable or unacceptable. The concepts of 'sin' and 'guilt' are not intrinsic to it.
If you like, you can think of Utilitarianism as being orthogonal to a 'code of life': how you decide to spend your life.
I personally don't find this surprising. If Utilitarianism is correct, I mean totally correct, unalterable and universal, if it would apply even to blue furry creatures from Alpha Centauri, then it must be independent of human physiology, psychology and society. And since what will work for a human psychologically, what they can bear to live with on a long-term basis without going insane, is very specifically human, Utilitarianism won't tell you that directly. It will just help you choose your rules of thumb for yourself.
Indeed it could be argued that any moral system that does try to specify something human-specific must in some way be broken.
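To sketch that point (with invented numbers): utilitarianism, read this way, is just an ordering over actions. 'Do nothing' gets ranked like anything else, and nothing in the system marks a line between 'acceptable' and 'unacceptable'.

```python
# Utilitarianism as a pure ordering over actions (toy example,
# invented utility scores). Note there is no built-in threshold
# separating 'sinful' from 'acceptable' -- it only says which
# actions are better, and by how much.

utilities = {
    "volunteer all weekend": 8.0,
    "donate a little": 3.0,
    "do nothing": 0.0,
    "drop litter": -1.0,
}

for action, u in sorted(utilities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{u:+.1f}  {action}")
# Which rung of this ladder you are obliged to reach is the
# separate 'code of life' question.
```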
no subject
Date: 2006-08-02 09:28 pm (UTC)
1. I think I take your points, but it still seems a fundamental problem. I'm considering buying a beer or donating to charity. I think that some of the time, if not most of the time, buying a beer is fine, even though donating is better in some sense.
Utilitarianism contradicts that: afaict, according to utilitarianism the better choice is always donating. If a moral system doesn't govern my actions, what is it?
2. I'm currently disinclined to accept that there can be any universal morality, though I still like to look for one. And I know lots of people disagree, and I'm by no means certain of my tentative conclusion.
If intelligence had evolved from solitary predators, would it have any altruism towards strangers at all? Why choose the morality we use rather than the one they would use?
I agree a universal morality would be good, but I accept it's inevitable that whatever we come up with will be human-centred.
no subject
Date: 2006-08-03 10:03 am (UTC)
This is a major question in philosophy, called the "is-ought" problem:
http://en.wikipedia.org/wiki/Is-ought_problem
no subject
Date: 2006-08-03 10:06 am (UTC)
Ok, suppose you have the following decision before you:
A) try to live a righteous life, trying to be 100% altruistic with every action, and feel great guilt whenever you don't live up to that standard
B) try to live a 90% altruistic life, but with community support, by joining a monastic order or going to do charity work in Nigeria
C) try to live an above average altruistic life, setting up covenants to pay a certain sum to charity monthly by direct debit, and basically trying to be a nice bloke when you can.
D) live for yourself, but try not to be actively evil (i.e. not working for companies that wreck lives)
E) dedicate your life to being a cold calling double glazing salesman, and use the money you earn to send spam advocating suicide and boredom.
Does Utilitarianism advocate Choice A?
I'd say not, on the grounds that if you decided to try to live your life by standard A, you'd almost certainly give it up after a few years at most. B would depend on your personality: not everyone is cut out for, and can manage, that lifestyle. C, on the other hand, is achievable. :-)
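A toy version of that argument: what matters isn't the altruism level a plan aims at, but the altruism you can expect to actually deliver, i.e. the level times the chance you sustain it. All the numbers below are invented to illustrate the shape of the argument, nothing more.

```python
# Toy model: expected delivered altruism = level * P(sustaining it).
# Levels and probabilities are invented for illustration.

plans = {
    "A: 100% altruism, guilt-driven": (1.00, 0.05),  # burns out almost surely
    "B: 90% monastic/charity work":   (0.90, 0.30),  # depends on personality
    "C: direct debits, nice bloke":   (0.40, 0.90),  # modest but sustainable
    "D: just not actively evil":      (0.10, 0.95),
}

for name, (level, p_sustain) in plans.items():
    print(f"{name}: expected altruism {level * p_sustain:.2f}")
# On these made-up numbers C beats A, which is the point above.
```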
Personally I like the works of Zhuangzi as a style guide on how to live.