jack: (Default)
[personal profile] jack
Utilitarianism is a very useful way of thinking, I think an advance over my previous conceptions. It might even be an ideal (say to incorporate into a non-sapient robot). It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems:

0. As normally stated, it's a bit unclear whether intentions or results matter. I normally interpret it as doing whatever is most likely to improve happiness (in mathematical terms: choosing the action with the greatest expected happiness).
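That expectation reading can be sketched numerically. A toy comparison (the actions, probabilities, and happiness values here are all invented for illustration):

```python
# Expected-happiness comparison between two actions.
# Each action maps to possible outcomes: (probability, happiness change).
actions = {
    "A": [(0.9, 5), (0.1, -20)],   # usually quite good, small chance of real harm
    "B": [(1.0, 2)],               # reliably but only mildly good
}

def expected_happiness(outcomes):
    """Probability-weighted sum of happiness changes."""
    return sum(p * h for p, h in outcomes)

# A scores 0.9*5 - 0.1*20 = 2.5; B scores 2.0, so A is picked.
best = max(actions, key=lambda a: expected_happiness(actions[a]))
```

On this interpretation the risky action A still wins, even though one time in ten it causes serious harm — which is exactly the sort of result the "ruthlessness" worry below is about.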

1. It generally doesn't help in thought examples like Simont's.

2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10, and our ratings have different distributions, can you put them on a common scale?
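One hypothetical (and contestable) answer is to standardise each person's ratings against their own history, so a "9" is read relative to what that person usually reports. A minimal sketch, with made-up scores:

```python
import statistics

# Invented self-reported happiness scores (scale 1-10) for two people
# whose distributions differ: Ann uses the whole range, Bob clusters high.
ann = [2, 4, 5, 7, 9]
bob = [7, 8, 8, 9, 9]

def standardise(scores):
    """Re-express each score as standard deviations from that person's own mean."""
    mu = statistics.mean(scores)
    sigma = statistics.stdev(scores)
    return [(s - mu) / sigma for s in scores]

ann_z = standardise(ann)  # Ann's 9 is far above her average
bob_z = standardise(bob)  # Bob's 9 is only slightly above his
```

This makes the two scales formally comparable, but it doesn't answer the underlying question: it assumes each person's *average* happiness is equal, which is precisely the thing we couldn't measure in the first place.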

3. Do you take into account people's feelings, like their satisfaction at helping someone else, or just your own? Do you consider contributions to far-future happiness, e.g. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to give that.

If you use these arguments you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to know the answer already in order to know how much weight to give the more intangible benefits.

5. I don't like the ruthlessness. According to the theory as stated, if arresting one (innocent) person as an example reduces crime and makes life better for lots of people, that's good. Possibly that's bad in the long term, but you can't show it. And yet I don't agree with doing it.

6. What about death? Or, for that matter, never being born? You don't then experience any unhappiness, so is it bad? How bad? Is it different for different people?

7. What about animals? Do they get the same consideration as humans? Any at all? I say somewhere in between, but how do you quantify that?

8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending *all* of it to Africa. But most people don't; should a moral system let them choose between the other options, or just sit like a broken compass pointing south-east?

Date: 2006-08-03 10:14 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
Re: Party Choice

Then yes, if by turning up at Party A you will add a measure of happiness to 9 lives, and at Party B you will add a measure of happiness to 10 lives, to a first approximation you'd go to party B.

To a second approximation you might consider that the people at Party B are grumpy and don't stay happy long whatever the cause, whereas the people at Party A often stay happy for days and go out of their way to make other people happy during that time.

Or that the people at Party A have been really nice, and that it is socially useful to reward this by having nice things happen to them.

Or that you gave your word you'd go to Party A, and that's a socially useful concept to preserve.

Or that you'd be happier at Party A, and therefore would shine more and probably give 1.5 units of additional happiness to everyone there.
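The successive approximations above can be put into toy numbers (all invented, including the per-guest figures) to show how a later correction can reverse the first-order answer:

```python
# First approximation: one unit of happiness per guest you cheer up.
party_a_guests = 9
party_b_guests = 10

utility_a = party_a_guests * 1.0   # 9.0
utility_b = party_b_guests * 1.0   # 10.0  -> B wins at first approximation

# Later approximation: you'd be happier at A and "shine", giving
# 1.5 units of additional happiness to everyone there.
utility_a += party_a_guests * 1.5  # 9.0 + 13.5 = 22.5 -> A now wins
```

The point is not the particular numbers but that each refinement (grumpiness, promise-keeping, your own mood) can swing the total, so the "first approximation" is rarely where the calculation ends.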

Date: 2006-08-04 04:48 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Oh yes, indeed, but most of the time the happinesses are notably non-equal. Let me rephrase the original point.

Utilitarianism is a good way of thinking about possible choices. For instance, it tells you not to be distracted by a notion of justice, or obeying the law, except insofar as they support utility.

However, I'm unclear how far it goes practically. AFAICT you can't, even in principle, directly measure or compare happiness in someone else. You can only have useful proxy measures (e.g. what they say, how you'd feel in that situation).

I don't know if utilitarians have specific such measures (and if so, which), or whether for the actual choice they rely on advice, rules of thumb, and what feels right. Does utilitarianism mean one or the other, or could it be either?

An example I feel illustrates this: if the greatest difference in utility between my party choices is that good friend Adam will be pleased if I go to A, and good friend Bill will be cheered up if I go to B, can you, even in theory, decide between them by any means other than asking yourself which would be better and doing that?