jack: (Default)
[personal profile] jack
Utilitarianism is a very useful way of thinking, and I think an advance over my previous conceptions. It might even be an ideal (say, to incorporate into a non-sapient robot). It may be a reasonable (but imho flawed) representation of my ultimate moral aims. But I have a number of problems:

0. As normally stated, it's a bit unclear whether intentions or results are what matter. I normally interpret it as doing whatever is most likely to improve happiness (in maths terms: the action with the greatest expected happiness).
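
(To be concrete, here is a minimal sketch of that reading, with the actions, outcomes and probabilities entirely invented for illustration: pick whichever action has the greatest expected happiness.)

    # Hypothetical sketch: choose the action with the greatest expected happiness.
    # The outcomes and probabilities below are invented for illustration only.
    actions = {
        "donate": [(0.9, 5), (0.1, -1)],   # (probability, change in happiness)
        "do_nothing": [(1.0, 0)],
    }

    def expected_happiness(outcomes):
        return sum(p * h for p, h in outcomes)

    best = max(actions, key=lambda a: expected_happiness(actions[a]))
    print(best)  # -> donate (expected happiness 4.4 vs 0)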

1. It generally doesn't help in thought examples like Simont's.

2. How do you quantify happiness? And if you can, how do you compare it between people? If two of us rate our happiness on a scale of 1-10, and our ratings have different distributions, can you put them on a common scale?
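
(One crude trick, sketched below purely for illustration, is to standardise each person's ratings against their own history, so an 8 from a habitual 9-rater counts for less than an 8 from a habitual 5-rater. But that assumes the different distributions are just reporting style, which is exactly the thing in question.)

    # Illustrative sketch only: convert each 1-10 rating to a z-score within
    # that person's own rating history, to get a rough common scale.
    from statistics import mean, stdev

    def common_scale(history, rating):
        return (rating - mean(history)) / stdev(history)

    alice = [7, 8, 9, 8, 9]   # habitually cheerful rater
    bob = [3, 4, 5, 4, 5]     # habitually gloomy rater

    print(common_scale(alice, 8))  # about -0.24: an ordinary day for Alice
    print(common_scale(bob, 5))    # about +0.96: a good day for Bob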

3. Do you take into account other people's feelings, like their satisfaction at helping someone else? Or just your own? Do you consider contributions to far-future happiness, e.g. not dropping litter sets a good example, meaning fewer people overall drop litter? Obviously you should, but I defy you to know how much weight to give that.

With arguments like these you can fit utilitarianism to any situation, but it doesn't actually help you decide, because you have to already know the answer in order to know how much weight to give the more intangible benefits.

5. I don't like the ruthlessness. According to the statement, if arresting one (innocent) person as an example deters crime and makes life better for lots of people, that's good. Possibly that's bad in the long term, but you can't show that. And yet I don't agree with doing it.

6. What about death? Or, for that matter, never being born? You don't then experience any unhappiness, so is it bad? How much so? Is it different for different people?

7. What about animals? Do they get the same consideration as humans? Any at all? I say somewhere in between, but how do you quantify that?
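
(The usual move, sketched below with weights I have made up, is to give each species a discount weight on its happiness; that is consistent, but it just relocates the problem into choosing the weights.)

    # Toy sketch: total utility with a per-species weight on happiness.
    # The weights are invented; picking them is precisely the open question.
    weights = {"human": 1.0, "chimp": 0.5, "chicken": 0.05}
    happiness = {"human": [3, -1], "chimp": [2], "chicken": [1, 1, 1]}

    total = sum(weights[s] * sum(h) for s, h in happiness.items())
    print(total)  # 1.0*2 + 0.5*2 + 0.05*3 = 3.15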

8. Anything I do in this country is probably irrelevant goodness-wise compared to getting a £40k job and sending *all* of it to Africa. But most people don't do that; should a moral system still help them choose between the other options, or just sit there like a broken compass pointing south-east?

Date: 2006-08-02 09:12 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
OK, I agree with your delineations between human, animal, plant, etc.

I can't justify it very well except to say it seems right to me, but this isn't a problem for utilitarianism specifically.

(I'm uncomfortable with any notion I can't actually test. How do you tell if something's self-aware? If an AI *seemed* intelligent, would it have feelings? I'd say the question can't easily be answered, and I'd probably decide to say it did, but I know some people would say it was just a simulation.)

Date: 2006-08-03 11:16 am (UTC)
From: [identity profile] douglas-reay.livejournal.com
There is no guarantee in life that all functions can be integrated analytically.

Nor that you will have full access to the information you would like before making moral choices.

I'm willing to accept that, in theory, a non-biological organism could achieve self-awareness. That does raise problems with respect to the status of freezing the system or restarting multiple copies from backups. But perhaps sufficiently advanced nanotechnology might present those same quandaries for biological organisms?

That doesn't mean I have an infallible way to tell whether an organism is self-aware (as opposed to being designed to fake it, in the way that butterflies with eye-spots on their wings try to fake being something they are not).

An interesting question is whether 'animal rights' should be granted to advanced but less-than-sentient computer programs.