Utilitarianism vs Virtue Ethics
Apr. 18th, 2013 12:53 pm
Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?
But it started to bother me, and I started to think maybe you needed both something like utilitarianism and something like virtue ethics.
I imagined using the same language to describe different ways of getting something right that’s somewhat less subjective and emotional than morality, say, the right way to write reliable computer programs.
Some people would say “here is a list of rules, follow them”. But this approach sucks when the technology and understanding get better, because you keep writing code that saves two bytes if it’s compiled on a 1990s 8-bit home computer, just because that was a good idea at the time.
Other people would say, “choose whichever outcome will make the code more reliable in the long run”. That’s better than “follow these outdated rules”, but doesn't really tell you what to do hour-by-hour.
In fact, the people who do best seem to be those who have general principles they stick to, like “fix the security holes first”, even if it’s impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don’t stick to them blindly, and are willing to update or bypass those principles if the opposite is obviously better in some particular situation.
My thought, which I’m not yet very sure of, is that the same applies to morality.
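To make the programming analogy a bit more concrete, here is a minimal sketch in Python of what “stick to the principle, unless deviating is obviously better” might look like. The task structure, the value estimator, and the override threshold are all invented for illustration, not a claim about how anyone actually schedules work:

```python
def choose_task(tasks, estimate_long_term_value):
    """Pick the next task to work on.

    Default principle: security fixes come first. But if some other task's
    estimated long-term value is overwhelmingly higher, override the principle.
    All names and the threshold below are hypothetical.
    """
    OVERRIDE_FACTOR = 10  # deviating must be *obviously* better, not marginally better

    security_fixes = [t for t in tasks if t.get("kind") == "security"]
    others = [t for t in tasks if t.get("kind") != "security"]

    if not security_fixes:
        return max(others, key=estimate_long_term_value, default=None)

    best_fix = max(security_fixes, key=estimate_long_term_value)
    best_other = max(others, key=estimate_long_term_value, default=None)

    if best_other is not None and \
       estimate_long_term_value(best_other) > OVERRIDE_FACTOR * estimate_long_term_value(best_fix):
        return best_other  # the rare, obvious exception
    return best_fix        # the usual case: stick to the principle
```

The point of the sketch is only the shape of the rule: a default principle that usually wins, plus an escape hatch that only fires when the case for deviating is overwhelming.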
no subject
Date: 2013-04-18 12:29 pm (UTC)
This "moral doctrine" material forms a sort of implementation layer on top of the underlying ethics, but it is not in and of itself a first-class moral entity: it's only a pragmatic means to an end. So on the one hand you have the option of ignoring doctrine as a one-off if you're confident that some particular situation is an exception to the doctrine's rule of thumb, and on the other hand you also have the option of permanently revising the doctrine if you realise at some point that some aspect of it is persistently failing to effectively satisfy the underlying ethic.
So the point of making the division between underlying ethics and doctrine is that it lets you work out whether you're currently interested in the doctrine-type stuff or not: if you're doing the sort of moral philosophy which is concerned above all with the basis of moral systems, then you probably don't care much about the doctrine, you only want to know what criteria the doctrine is assessed against. Whereas if you're actually considering questions of how to make decisions in practice, you probably don't change your mind very often about your underlying ethics and instead spend a lot more time pondering your moral doctrine.
no subject
Date: 2013-04-18 12:40 pm (UTC)
Here's the thought to make your brain go all melty.
Suppose Action A does more good than Action B. However, suppose that being committed to performing Action B does more good than being committed to performing Action A. Should people prepare themselves to do B? Can people be blamed for doing B if they've committed themselves to do so? Does it make sense to say that doing B is wrong but praiseworthy and A is right but blameworthy (evidently you didn't commit yourself to doing B, bad you!)??
Jack Smart's paper has some discussion of the semantics of "right" and "wrong" here. I cite it, mainly to disagree with it, but as a paper it has thought through things pretty well; it's a great example of bullet-biting in action.
no subject
Date: 2013-04-18 12:54 pm (UTC)
Perhaps my post could be best summarised as "if the implementation details take up 99% of the complexity, they're not details any more".
Like, in practice, utilitarians end up following doctrines for day-to-day life, but they don't have a culture of spending effort refining _good_ doctrines; they expect the abstract theory to just make it happen, and get muddled when it doesn't.
And virtue ethicists will eventually admit that, say, maybe sexual continence wasn't actually inherently a virtue, it was just the only practical course when you don't have any condoms or antibiotics. But it might take hundreds of years.
But both parties act as if what they're advocating is the underlying ethics, and the details can be waved away, when in fact what both parties say is right: one is right about the underlying ethics and one is right about the doctrine?
no subject
Date: 2013-04-18 12:58 pm (UTC)
Something like http://lesswrong.com/lw/24o/eight_short_studies_on_excuses/ ?
I'm really not sure. It seems to lead to a large amount of bluster as people try to precommit to B as strongly as possible while trying to avoid actually doing it. (Which is one reason "let's all just talk this out reasonably" isn't the whole story -- people always have an incentive to present themselves as unwilling to make compromises.)
But I'm not sure if there's any better option, other than cultivating a reputation for following through on your precommitments, making them judiciously, and hoping for the best.
no subject
Date: 2013-04-18 01:07 pm (UTC)
In the situation you describe, and assuming that Action A and Action B are incompatible with one another, I imagine a strict Utilitarian would see which of the following would do most overall good:
1) Committing to Action B but doing Action A (if this is possible)
2) Doing Action A without committing to Action B.
3) Committing to Action B without doing Action A.
If they all created an equal amount of overall good, then the Utilitarian could choose at random or according to some other criteria (for example which option increased their own personal utility the most).
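For concreteness, here is a hedged Python sketch of that decision procedure: rank the options by overall good, and break ties by personal utility or at random. The option labels and utility numbers are made up purely for illustration; nothing here is a claim about how "overall good" would actually be measured:

```python
import random

def strict_utilitarian_choice(options, overall_good, personal_utility=None):
    """Pick the option with the most overall good; break ties as described above."""
    best = max(overall_good[o] for o in options)
    top = [o for o in options if overall_good[o] == best]
    if len(top) == 1:
        return top[0]
    if personal_utility is not None:
        return max(top, key=lambda o: personal_utility[o])  # secondary criterion
    return random.choice(top)                               # or just pick at random

# Purely made-up numbers, just to show the shape of the comparison.
options = [
    "commit to B, then do A",
    "do A without committing to B",
    "commit to B and don't do A",
]
overall_good = {options[0]: 7, options[1]: 7, options[2]: 5}
personal_utility = {options[0]: 2, options[1]: 4, options[2]: 3}

print(strict_utilitarian_choice(options, overall_good, personal_utility))
```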
Another strict Utilitarian would be likely to praise them for choosing to do whichever of these does most overall good, blame them for doing one of the other options, and neither praise nor blame them if they chose one of two or three 'top-ranking' options.
no subject
Date: 2013-04-18 01:19 pm (UTC)
I understood Peter to be using "commitment" in the sense of committing yourself to it, but perhaps the apparent paradox is resolved (as so many are) if you look only at your possible actions.
In that case, if you have a way of enforcing a commitment to B (setting up a contract, or an automated system, etc) it may solve the problem.
And if not, you have to decide which is worse: doing B now, or promising to do B and breaking your word, reducing your ability to convincingly promise to do B in future similar situations. This always seems to be the difficult bit in utilitarianism -- comparing an immediate bad thing against "this will make society worse in the long term".
no subject
Date: 2013-04-18 01:45 pm (UTC)
One of the things to wonder about is whether there are intrinsically binding commitments - I mean in a causal sense rather than a moral sense. Whether there's some thought you can think, some action you can take, that makes you more likely to act in a certain way, over and above the effects of other people seeing you make a commitment and building expectations around it. Or, if not commitments, things which are commitment-like.
If it wasn't at all possible, how much would people trust each other?
Also, extrinsic stuff. Imagine I'm writing a textbook on medical ethics. Suppose there's a chapter on whether you can kill your patients to get their organs and save more people (yes, yes, vexatious old chestnut, but bear me out). Supposing I write, "yes, go on, it's for the best" - is writing that for the best, if it will cause people to be afraid to go to hospital for fear of having their organs stolen? People don't have to wait for doctors to get caught stealing organs, they only have to read my textbook. Now generalise my medical ethics textbook to moral guidance in general.
no subject
Date: 2013-04-18 01:56 pm (UTC)
I suspect this actually happens. Not so much with killing someone for organ donation, which is pretty unlikely, but things like "We TELL everyone they should just always do X because Jesus said so. And maybe we occasionally bend the rules, but we only talk about that circumspectly with other priests". Which I always thought was a bit paternalistic, but also seems to actually serve a practical function.
Or, I remember a slightly more plausible medical dilemma where you're treating a murderous dictator, and should you make a "mistake"? And I think the right answer might be "probably not, but always strenuously deny you'd even consider it"...
no subject
Date: 2013-04-18 02:19 pm (UTC)
I understood Peter to be using "commitment" in the sense of committing yourself to it ...
Still confused about whether you mean "deciding", "promising" or something else (e.g. ensuring that you will be physically compelled to do it).
I don't think that's relevant to the basic method of deciding which option is most utilitarious* though - it just means you have a different number of options to consider.
This always seems to be the difficult bit in utilitarianism -- comparing an immediate bad thing against "this will make society worse in the long term".
Yes, this is difficult! But trying to do it and doing it imperfectly is often better than not trying.
And if it's not worth the effort of trying, then not trying is the utilitarian thing to do.
* Nb this probably isn't a word.
no subject
Date: 2013-04-18 02:44 pm (UTC)
(Possible mechanism for intrinsically binding commitments - maintaining a facade of integrity is too difficult? too slow? too error-prone? too likely to become corrupt?)
These problems with integrity etc. are one thing that persuades me that Act Utilitarianism isn't the right thing; neither as a decision procedure nor as a standard of rightness. At least not Act Utilitarianism as commonly conceived; some of those odd decision theories I mentioned a while back might let you talk about something that behaves rather differently, which maybe you could still call "Act"; I'm not sure whether that's an abuse of the terminology, though.
Thought experiment: if people were learning a happiness-maximising morality by trial and error or some other gradual learning process, what sort of thing would they converge on?
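Here is a very rough sketch of that thought experiment as a toy simulation. The candidate rules, the happiness numbers, and the noise are all invented; the only point is that trial and error with a noisy happiness signal tends to converge on rules that look robustly good, which is more like a doctrine than a calculation:

```python
import random

def learn_doctrine(candidate_rules, observed_happiness, generations=1000):
    """Keep whichever rule-of-thumb *appears* to produce more happiness.

    observed_happiness(rule) is assumed to be a noisy estimate, so the process
    settles on a rule that reliably looks good, not a carefully computed optimum.
    """
    current = random.choice(candidate_rules)
    for _ in range(generations):
        trial = random.choice(candidate_rules)            # stumble across a variant
        if observed_happiness(trial) > observed_happiness(current):
            current = trial                               # adopt it; no deeper theory involved
    return current

# Toy usage with made-up numbers and a noisy signal.
rules = ["always tell the truth", "tell white lies", "lie whenever convenient"]
base = {rules[0]: 8, rules[1]: 7, rules[2]: 3}
print(learn_doctrine(rules, lambda r: base[r] + random.gauss(0, 2)))
```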
All of that said, it might be worth reading Sidgwick's The Methods of Ethics, or at least dipping into chapters. That is, if you don't mind wading through pages and pages of not very good writing.
no subject
Date: 2013-04-18 03:42 pm (UTC)
Ah yes, grep for the paragraph starting: "Again, defenders of utility often find themselves called upon to reply to such objections as this—that there is not time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness." (well, actually, at some point go and read the whole thing, it will save you from making points that had already been cleared up in 1863). Anyway, "secondary principles", there's a thing. It's like with chess, you don't sit down at a chessboard and say "how do I get checkmate, maybe if I do this and he does that and I do this and..."
The complicated point I'm making elsewhere in this thread is a more subtle thing, and I think that the author for that is probably Derek Parfit.