Utilitarianism vs Virtue Ethics
Apr. 18th, 2013 12:53 pm

Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?
But it started to bother me, and I started to think maybe you needed both something like utilitarianism and something like virtue ethics.
I imagined using the same language to describe different ways of getting something right that’s somewhat less subjective and emotional than morality, say, the right way to write reliable computer programs.
Some people would say "here is a list of rules, follow them". But this approach sucks when the technology and understanding get better, because you keep writing code that saves two bytes when compiled on a 1990s 8-bit home computer, because that was a good idea at the time.
Other people would say, "choose whichever action will make the code more reliable in the long run". That's better than "follow these outdated rules", but doesn't really tell you what to do hour-by-hour.
In fact, the people who do best seem to be those who have general principles they stick to, like "fix the security holes first", even if it's impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don't stick to them blindly, and are willing to update or bypass those principles if the opposite is obviously better in some particular situation.
My thought, which I’m not yet very sure of, is the same applies to morality.
no subject
Date: 2013-04-18 12:54 pm (UTC)

Perhaps my post could be best summarised as "if the implementation details take up 99% of the complexity, they're not details any more".
Like, in practice, utilitarians end up following doctrines for day-to-day life, but they don't have a culture of spending effort refining _good_ doctrines, they expect the abstract theory to just make it happen, and get muddled when it doesn't.
And virtue ethicists will eventually admit that, say, maybe sexual continence wasn't actually inherently a virtue, it was just the only practical course when you don't have any condoms or antibiotics. But it might take hundreds of years.
But both parties act as if what they're advocating is the underlying ethics, and the details can be waved away, when in fact both parties are right, but one is right about the underlying ethics and one is right about the doctrine..?
no subject
Date: 2013-04-18 03:42 pm (UTC)

Ah yes, grep for the paragraph starting: "Again, defenders of utility often find themselves called upon to reply to such objections as this—that there is not time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness." (Well, actually, at some point go and read the whole thing; it will save you from making points that had already been cleared up in 1863.) Anyway, "secondary principles", there's a thing. It's like with chess: you don't sit down at a chessboard and say "how do I get checkmate, maybe if I do this and he does that and I do this and..."
The complicated point I'm making elsewhere in this thread is a more subtle thing, and I think the author to read for that is probably Derek Parfit.