[personal profile] jack
Utilitarianism is a very useful way of thinking, and I think an advance over my previous conceptions. It might even be an ideal (say, to incorporate into a non-sapient robot). It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems:

0. As normally stated, it's a bit unclear whether intentions or results matter. I normally interpret it as doing whatever is most likely to improve happiness (mathematically: choosing the action with the greatest expected happiness; there's a toy sketch of this reading after the list).

1. It generally doesn't help in thought experiments like Simont's.

2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10 and our ratings have different distributions, can you put them on a common scale?

3. Do you take into account other people's feelings, like satisfaction at helping someone else? Or just your own? Do you consider contributions to far-future happiness, eg. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to put on that.

If you use these arguments you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to know the answer already to know how much weight to put on the more intangible benefits.

5. I don't like the ruthlessness. According to the statement, if you arrest one (innocent) person as an example, and that deters crime and makes life better for lots of people, that's good. Possibly it's bad in the long term, but you can't show that. And yet I don't agree with doing it.

6. What about death? Or, for that matter, not being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?

7. What about animals? Do they get the same consideration as humans? Any at all? I'd say somewhere in between, but how do you quantify that?

8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending it *all* to Africa. But most people don't; should a moral system let them choose between other options, or just stay like a broken compass pointing south-east?
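To make the "greatest expected happiness" reading in point 0 concrete, here is a toy sketch: each candidate action has possible outcomes with probabilities and (entirely made-up) happiness scores, and you pick the action with the highest expected value. The action names and numbers are purely illustrative.

```python
# Illustrative sketch of "greatest happiness expectation" (point 0).
# Probabilities and happiness scores are made up for the example.

actions = {
    "donate to charity": [
        (0.9, 5.0),   # (probability, total happiness change if this outcome happens)
        (0.1, 0.0),   # e.g. the money is wasted
    ],
    "buy a beer": [
        (1.0, 1.0),   # small but certain pleasure
    ],
}

def expected_happiness(outcomes):
    """Expected value of the happiness change over the possible outcomes."""
    return sum(p * h for p, h in outcomes)

best = max(actions, key=lambda a: expected_happiness(actions[a]))
for name, outcomes in actions.items():
    print(f"{name}: expected happiness change = {expected_happiness(outcomes):.2f}")
print("Choose:", best)
```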

The trash heap has spoken again

Date: 2006-08-02 12:04 pm (UTC)
From: [identity profile] dragonwoodshed.livejournal.com
7...

Fireworks make my greyhound very unhappy; he has to take Valium. Thus: ban fireworks.

Thank you.

Re: The trash heap has spoken again

Date: 2006-08-02 12:29 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Mwa? What do you mean?

Date: 2006-08-02 05:47 pm (UTC)
From: [identity profile] ewx.livejournal.com
Sending money indiscriminately to Africa risks it ending up in some petty dictator's Swiss bank account. It might be worse than doing nothing. (Sending money discriminately to Africa is of course much better.)

Date: 2006-08-02 05:48 pm (UTC)
From: [identity profile] ewx.livejournal.com
...at least, it might turn out worse than doing nothing. I can't say with any confidence that it would average out to worse than nothing if it was merely X% likely that it would end up propping up a despot.

Date: 2006-08-02 07:32 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
I'm sorry, I succumbed to my perennial urge to make my example funny, and made it extremely misleading.

I assumed you would do something effective, if anything, but thought that was too obvious to need saying, so I picked a flippant example at random, not meaning it to be taken completely literally, and maybe ended up saying the complete opposite of what I meant.

If you replace "getting... sending it to Africa" with "working for the benefit of people much worse off than almost anyone in this country" would my point have been clear?

Date: 2006-08-02 07:52 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
2. How do you quantify happiness? And if you can, how to compare it between people? If two of us rate our happiness on a scale of 1-10, and they have different distributions, can you put them on a common scale?

To a first approximation, I'd assume all members of the species Homo sapiens are equally good at experiencing 'happiness', and go with the NHS measure of expected quality life years. I am not sure I would use a self-rating scale.

And yes, this would lead me to value the life of a healthy 25-year-old more than that of a normal 75-year-old, if I had to choose which one's life to save. This is often the sort of decision the NHS has to make, when they have only one donated organ available and two patients.

To a second approximation, I might take into account the difference between a 75-year-old brain surgeon and a 25-year-old violent criminal.
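A minimal sketch of that first-approximation comparison, assuming (purely for illustration) a flat life expectancy of 80 and full quality of life for the remaining years:

```python
# First-approximation comparison by expected quality life years.
# The life expectancy of 80 and the quality weight of 1.0 are illustrative assumptions.

LIFE_EXPECTANCY = 80

def expected_qalys(age, quality=1.0):
    """Remaining expected quality life years, crudely: (years left) x (quality weight)."""
    return max(LIFE_EXPECTANCY - age, 0) * quality

patients = {"healthy 25-year-old": 25, "normal 75-year-old": 75}
for who, age in patients.items():
    print(f"{who}: {expected_qalys(age)} expected quality life years")
# -> the one donated organ goes to whoever gains the most expected quality life years
```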

Date: 2006-08-02 08:55 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Thank you for responding!

Date: 2006-08-03 11:18 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
Not at all. It is a fascinating subject.

Would you mind if I linked to this thread from the ToothyWiki page?
http://www.toothycat.net/wiki/wiki.pl?Utilitarianism

Date: 2006-08-03 11:21 am (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Of course, please do.

Date: 2006-08-02 09:01 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
I was assuming your moral system should work equally well for my daily choices as for when I have to choose who lives.

If so, most of the time I have to make decisions like "Should I go to party A or party B?" or "Should I work late, or leave now and go to the pub?", where life expectancy isn't an issue but pleasing other people is.

Date: 2006-08-03 10:14 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
Re: Party Choice

Then yes, if by turning up at Party A you will add a measure of happiness to 9 lives, and at Party B you will add a measure of happiness to 10 lives, then to a first approximation you'd go to Party B.

To a second approximation you might consider that the people at Party B are grumpy and don't stay happy long whatever the cause, whereas the people at Party A often stay happy for days and go out of their way to make other people happy during that time.

Or that the people at Party A have been really nice, and that it is socially useful to reward this by having nice things happen to them.

Or that you gave your word you'd go to Party A, and that's a socially useful concept to preserve.

Or that you'd be happier at Party A, and therefore would shine more and probably give 1.5 units of additional happiness to everyone there.
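To put rough numbers on the first and second approximations: the guest counts below come from the comment above, while the per-person happiness boosts and how long they last are invented for illustration.

```python
# Party choice, weighting each person's happiness boost by how long it tends to last.
# Guest counts are from the comment; the boost sizes and durations are made-up numbers.

def total_happiness(guests, boost_per_guest, days_it_lasts):
    return guests * boost_per_guest * days_it_lasts

party_a = total_happiness(guests=9,  boost_per_guest=1.0, days_it_lasts=3)    # cheerful crowd
party_b = total_happiness(guests=10, boost_per_guest=1.0, days_it_lasts=0.5)  # grumpy crowd

print("Party A:", party_a)
print("Party B:", party_b)
print("Go to:", "A" if party_a > party_b else "B")
# On these made-up durations, the second approximation reverses the headcount answer.
```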

Date: 2006-08-04 04:48 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Oh yes, indeed, but most of the time the happinesses are notably non-equal. Let me rephrase the original point.

Utilitarianism is a good way of thinking about possible choices. For instance, it tells you not to be distracted by a notion of justice, or obeying the law, except insofar as they support utility.

However, I'm unclear how far it goes in practice. AFAICT you can't, in principle, directly measure or compare happiness in someone else; you can only have useful proxies (eg. what they say, or how you'd feel in that situation).

I don't know whether utilitarians have specific measures of that sort, and if so what they are, or whether for the actual choice they rely on advice, rules of thumb, and what feels right. Does utilitarianism mean one or the other, or could it be either?

An example which I feel illustrates this: if the greatest difference in utility between my party choices is that good friend Adam will be pleased if I go to A, while good friend Bill will be cheered up if I go to B, can you, even in theory, decide between them by any means other than asking yourself which would be better and doing that?

Date: 2006-08-02 07:59 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
3. Do you take into account other people's feelings, like satisfaction at helping someone else? Or just your own? Do you consider contributions to far-future happiness, eg. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to put on that.

----

Yes, I attempt to take into account other people's feelings, to the extent they can be predicted, and weighted for that uncertainty.

Yes, I consider the long-term implications of decisions, or try to, but (again) weighted for uncertainty. Eg, when considering nuclear vs coal, you need to weigh greenhouse gases against waste disposal.

As regards exactly how much weight to put on setting examples, it depends on context. I'd argue that people are actually quite good at making instinctive estimates of this sort of thing; they have to be when raising children (ever sworn around a kid, and had them repeat the word all day?). You can use online multi-user games to study, under controlled conditions, the effect of different ratios of defect/cooperate strategies. In fact I believe there are academic papers studying the subject, linking it to the evolutionary justification for self-harming vindictiveness against rule breakers in groups of monkeys.
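As a purely illustrative sketch of that kind of study, here is a toy simulation of how the average welfare of a randomly mixed population shifts with the cooperate/defect ratio, using conventional prisoner's-dilemma payoffs. The payoff numbers are placeholders, not taken from any of the papers mentioned.

```python
# Toy simulation: average welfare of a population as its cooperate/defect ratio changes.
# Payoffs follow the standard prisoner's-dilemma shape, chosen purely for illustration.

import random

# payoff[(my_move, their_move)] for a single interaction
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def average_group_payoff(cooperator_ratio, trials=10_000):
    """Average per-person payoff when two random members of the population meet."""
    total = 0
    for _ in range(trials):
        a = "cooperate" if random.random() < cooperator_ratio else "defect"
        b = "cooperate" if random.random() < cooperator_ratio else "defect"
        total += PAYOFF[(a, b)] + PAYOFF[(b, a)]
    return total / (2 * trials)

for ratio in (0.2, 0.5, 0.8):
    print(f"{ratio:.0%} cooperators: average payoff per person ~{average_group_payoff(ratio):.2f}")
# Average welfare rises with the cooperator ratio, which is why example-setting
# (like not dropping litter) carries some real, if hard-to-estimate, weight.
```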

Date: 2006-08-02 10:24 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
1. Sorry, I was mainly thinking of the comments in your response to Simon: "But when choosing between a stranger and the known card cheat your motives do affect the outcome (it affects how you feel about yourself) and so therefore can be taken into account by Utilitarianism."

If the difference between the choices is which feels right to you, it seems circular, and the moral directive might as well be "do whatever feels right to you".

2. Hmmm. I'm not convinced it really works, but I suppose most moral systems have similar problems with indirect benefits/costs. Most require detailed riders (eg. follow the law unless you have a reason to do otherwise) about what to do when there isn't enough time to analyse a problem.

Date: 2006-08-03 10:19 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
If you want to compare moral systems, at the top level the division is:
* Consequentialism
* Deontology
* Virtue ethics

Some don't care about consequences at all, as long as your intentions were good.

Date: 2006-08-03 10:32 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
An aeroplane is about to crash into a school full of children. You are the only person in a position to stop it, and your only way to stop it is to shoot it down, in which case it will land instead on top of a prison full of criminals (including someone who nicked your bike lights last week).

The school contains 40 staff (average age 35) and 200 pupils (average age 15)

The prison contains 40 staff (average age 35) and X criminals (average age 25)

The number X is larger than 200. In fact it happens to be precisely the number at which you are not sure whether or not to shoot down the plane, considering only the happiness caused to others by the lives saved and the expected quality life years of the individuals concerned.

At this point, the deciding factor might be spite, and how you would feel about yourself afterwards for taking the decision based upon that motive. I.e. it is a final feather on the scales, not the sole factor upon which you are making the decision.
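Just to show what "precisely that number" might look like under the crude quality-life-years measure discussed above: the headcounts and average ages are from the scenario, while the flat life expectancy of 80 and equal quality weights are assumptions for illustration.

```python
# Break-even headcount X for the plane scenario, using the crude QALY-style measure above.
# Assumes (for illustration) a flat life expectancy of 80 and equal quality of life for all.

LIFE_EXPECTANCY = 80

def years_lost(count, average_age):
    return count * (LIFE_EXPECTANCY - average_age)

school_loss = years_lost(40, 35) + years_lost(200, 15)   # staff + pupils = 14800 years

# Prison loss as a function of X: 40 staff plus X criminals (average age 25).
# Solve years_lost(40, 35) + X * (80 - 25) == school_loss for X.
break_even_x = (school_loss - years_lost(40, 35)) / (LIFE_EXPECTANCY - 25)
print(f"Break-even number of criminals: about {break_even_x:.0f}")   # roughly 236
```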

Date: 2006-08-02 08:03 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
5. I don't like the ruthlessness. According to the statement, if you arrest one (innocent) person as an example, and that deters crime and makes life better for lots of people, that's good. Possibly it's bad in the long term, but you can't show that. And yet I don't agree with doing it.

----

How much don't you agree with doing it?

By and large, we get the police and justice system we pay for. If you'd like less rough justice, put in more money (appropriately spent). The question is, what would you be willing to give up in return for more highly paid and trained police?

Date: 2006-08-02 09:06 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
I was thinking more of a matter of policy, as a hypothetical example. Is forcing a large pain on one person justified by the happiness it gives to other people?

I might hope I would be selfless enough to volunteer, but I don't think I want to force someone else to. But by a utilitarian argument you should.

Date: 2006-08-03 10:44 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
There are two similar questions. One, where you knowingly (and unnecessarily) force significant pain upon a single chosen innocent individual without their consent, to create a lesser happiness but for a larger number of people. That's throwing the Christians to the lions in order to entertain the masses.

And the second, where it is probabilistic. Such as having a shoot-to-kill policy on suspected bombers, knowing that X percent of the time you will shoot dead an innocent shopper in the subway, and Y percent of the time you will save hundreds of commuters from horrific injuries.

The latter isn't nice, but one can live with it, given a sufficiently extreme reward ratio.

The former is barbaric, and when legislating, you have to weigh the effects on society. A case in point is the way newspapers crucify innocent celebrities in order to entertain millions of readers.
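For the probabilistic case, the "sufficiently extreme reward ratio" is just an expected-value comparison. All the numbers below (the X and Y percentages and the harm/benefit sizes) are placeholders, not estimates of anything.

```python
# Expected harm/benefit of the shoot-to-kill policy, per incident.
# X, Y and the harm/benefit magnitudes are placeholder numbers, not estimates.

p_shoot_innocent = 0.02      # X% of the time: an innocent person is killed
p_prevent_attack = 0.10      # Y% of the time: a bombing is actually prevented

harm_innocent_death = 40.0   # rough quality life years lost by the innocent victim
benefit_prevented   = 500.0  # rough quality life years saved across hundreds of commuters

expected_value = (p_prevent_attack * benefit_prevented
                  - p_shoot_innocent * harm_innocent_death)
print(f"Expected value per incident: {expected_value:+.1f} quality life years")
# The policy only clears the bar if this stays clearly positive under pessimistic numbers.
```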

Date: 2006-08-04 05:00 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
I agree, but may I concentrate on the part of your post which I think illustrates the point I was trying to make?

The former is barbaric, and when legislating, you have to weigh the effects on society.

Hypothetically, if there are no other effects (approximately no damage to society from people finding out, no great sense of outrage from the victim knowing, etc.), then when comparing the suffering imposed on someone through no choice of their own with the lesser pleasure given to many people, do you or do you not take into account the fact that you're imposing it?

I'll leave the various other details for the next reply.

Date: 2006-08-04 05:01 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
PS. I think I do. And I admit that my moral system, such as it is, is a big compromise mess. But it's one reason I don't like utilitarianism as stated.

Date: 2006-08-02 08:07 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
6. What about death? Or, for that matter, not being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?

----

Personally I'd be happier living in a society with a general ethic of respecting life (and human rights, dignity, etc.) than in a society where they are not generally respected. In fact I think the evidence from the end of the Cold War is that most people would be happier. Therefore, as a Utilitarian, I need to bear in mind the effect of my actions upon society.

Date: 2006-08-02 09:09 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Hmmm. True. But are you saying the only morally important factors in killing someone are the effects on other people, either those who knew him or society in general?

A minute ago you were advocating QLY as a metric, that values life specifically.

Date: 2006-08-03 11:05 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
To clarify: they are both factors.

Basically, if an action has a consequence, you take it into account.

Date: 2006-08-02 08:15 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
7. What about animals? Do they get the same consideration as humans? Any at all? I'd say somewhere in between, but how do you quantify that?

----

Personally I value self-awareness. I think your happiness matters more than a collection of computer variables deciding whether or not to move a robot arm away from a flame, because you are aware of yourself, Simon, experiencing that happiness.

You have an individual personality - an identity.

Do cats, dogs, dolphins, elephants, whales and chimps have that? Most likely, to some extent. I'm not an expert, but I'm willing to grant them the benefit of the doubt, if not equivalence with humanity.

Do viruses, trees or nematode worms have it? I think not. I'm willing to grant them some value for biodiversity, and because I'd prefer humans with the 'sanctity of life' meme over the 'torture them for fun' one.

Date: 2006-08-02 09:12 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
OK, I agree with your delineations between human, animal, plant, etc.

I can't justify it very well except to say it seems right to me, but this isn't a problem for utilitarianism specifically.

(I'm uncomfortable with any notion I can't actually test. How do you tell if something's self-aware? If an AI *seemed* intelligent, would it have feelings? I'd say the question can't easily be answered, and I'd probably decide to say it did, but I know some people would say it was just a simulation.)

Date: 2006-08-03 11:16 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
There is no guarantee in life that all functions can be integrated analytically.

Nor that you will have full access to the information you would like before making moral choices.

I'm willing to accept that, in theory, a non-biological organism could achieve self-awareness. That does raise problems with respect to the status of freezing the system or restarting multiple copies from backups. But perhaps sufficiently advanced nanotechnology might present those same quandaries for biological organisms?

That doesn't mean I have an infallible way to tell whether an organism is self-aware (as opposed to being designed to fake it, in the way that butterflies with eyes on their wings try to fake being something they are not).

An interesting question is whether 'animal rights' should be granted to advanced but less than sentient computer programs.

Date: 2006-08-02 08:25 pm (UTC)
From: [identity profile] douglas-reay.livejournal.com
8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending it *all* to Africa. But most people don't; should a moral system let them choose between other options, or just stay like a broken compass pointing south-east?

----

Ah, now that's an interesting question.

Utilitarianism only tells you which, out of several actions in a situation, are better than others (and maybe by how much). To Utilitarianism, inactivity or 'not getting involved' is just another action. It doesn't have a built-in concept of 'neutrality', of a level of 'goodness' of action that is acceptable or unacceptable. The concepts of 'sin' and 'guilt' are not intrinsic to it.

If you like, you can think of Utilitarianism as being orthogonal to a 'code of life': how you decide to spend your life.

I personally don't find this surprising. If Utilitarianism is correct (I mean totally correct and unalterable, universal; if it would apply even to blue furry creatures from Alpha Centauri), then it must be independent of human physiology, psychology and society. And since what will work for a human psychologically, what they can bear to live with on a long-term basis without going insane, is very specifically human, Utilitarianism won't tell you that directly. It will just help you choose your rules of thumb for yourself.

Indeed, it could be argued that any moral system that does try to specify something human-specific must in some way be broken.

Date: 2006-08-02 09:28 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Hmmm. That makes me think.

1. I think I take your points, but it still seems a fundamental problem. Say I'm considering buying a beer or donating to charity. I think that some of the time, if not most, buying a beer is fine, even though donating is better in some sense.

Utilitarianism contradicts that; AFAICT, according to U, the better choice is always donating. If a moral system doesn't govern my actions, what is it?

2. I'm currently disinclined to accept that there can be any universal morality, though I still like to look for it. And I know lots of people disagree, and I'm by no means certain of my tentative conclusion.

If intelligence had evolved from solitary predators, would it have any altruism towards strangers at all? Why choose the morality we use rather than the one they would use?

I agree a universal morality would be good, but I accept that whatever we come up with will inevitably be human-centred.

Date: 2006-08-03 10:03 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
Re: why be moral

This is a major question in philosophy, called the "is-ought" problem:
http://en.wikipedia.org/wiki/Is-ought_problem

Date: 2006-08-03 10:06 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
Re: beer vs charity

Ok, suppose you have the following decision before you:
A) try to live a righteous life, trying to be 100% altruistic with every action, and feel great guilt whenever you don't live up to that standard
B) try to live a 90% altruistic life, but with community support, by joining a monastic order or going to do charity work in Nigeria
C) try to live an above-average altruistic life, setting up covenants to pay a certain sum to charity monthly by direct debit, and basically trying to be a nice bloke when you can
D) live for yourself, but try not to be actively evil (ie not working for companies that wreck lives)
E) dedicate your life to being a cold-calling double glazing salesman, and use the money you earn to send spam advocating suicide and boredom.

Does Utilitarianism advocate Choice A?

I'd say not, on the grounds that if you decided to try to live your life by standard A, you'd almost certainly give it up after a few years at most. B would depend on your personality - not everyone is cut out for that lifestyle or can manage it. C, on the other hand, is achievable. :-)
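One way to see why A can lose to C on utilitarian grounds is to weight each lifestyle's altruistic output by how likely you are to sustain it and for how long. The numbers below are invented purely to illustrate the shape of the argument.

```python
# Lifestyle choice weighted by how sustainable it is (all numbers invented for illustration).

lifestyles = {
    # name: (altruism level per year, probability you sustain it, years you'd last if you do)
    "A: 100% altruism plus guilt":   (1.00, 0.05, 3),
    "B: 90% monastic/charity work":  (0.90, 0.30, 20),
    "C: direct debit and being nice":(0.40, 0.90, 40),
    "D: just not actively evil":     (0.05, 0.99, 40),
}

def expected_lifetime_altruism(level, p_sustain, years):
    return level * p_sustain * years

for name, params in lifestyles.items():
    print(f"{name}: {expected_lifetime_altruism(*params):.1f}")
# On made-up numbers like these, C comfortably beats A, which is the point being made;
# how B comes out depends on the sustain probability, i.e. on your personality.
```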

Personally I like the works of Zhuangzi as a style guide on how to live.