Utilitarianism vs Virtue Ethics
Apr. 18th, 2013 12:53 pm

Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?
But it started to bother me, and I began to think that maybe you need both something like utilitarianism and something like virtue ethics.

I imagined using the same language to describe different ways of getting something right in a domain somewhat less subjective and emotional than morality: say, the right way to write reliable computer programs.

Some people would say "here is a list of rules, follow them". But this approach sucks once technology and understanding get better: you keep writing code that saves two bytes when compiled on a 1990s 8-bit home computer, just because that was once a good idea.

Other people would say, "choose whichever action will make the code more reliable in the long run". That's better than "follow these outdated rules", but it doesn't really tell you what to do hour by hour.

In fact, the people who do best seem to be those who have general principles they stick to, like "fix the security holes first", even when it's impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don't stick to those principles blindly, and are willing to update or bypass them when breaking them is obviously better in some particular situation.

My thought, which I'm not yet very sure of, is that the same applies to morality.
no subject
Date: 2013-04-18 01:56 pm (UTC)

I suspect this actually happens. Not so much with killing someone for organ donation, which is pretty unlikely, but with things like "we TELL everyone they should just always do X because Jesus said so. And maybe we occasionally bend the rules, but we only talk about that circumspectly with other priests". Which I always thought was a bit paternalistic, but which also seems to serve a practical function.
Or, I remember a slightly more plausible medical dilemma: you're treating a murderous dictator, and should you make a "mistake"? And I think the right answer might be "probably not, but always strenuously deny you'd even consider it"...
no subject
Date: 2013-04-18 02:44 pm (UTC)

(Possible mechanism for intrinsically binding commitments: maintaining a facade of integrity is too difficult? too slow? too error-prone? too likely to become corrupt?)
These problems with integrity etc. are one thing that persuades me that Act Utilitarianism isn't the right thing, neither as a decision procedure nor as a standard of rightness. At least not Act Utilitarianism as commonly conceived; some of these odd decision theories I mentioned a while back might let you talk about something that behaves rather differently, which maybe you could still call "Act", though I'm not sure whether that's an abuse of the terminology.
Thought experiment: if people were learning a happiness-maximising morality by trial and error or some other gradual learning process, what sort of thing would they converge on?
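That thought experiment almost invites a toy simulation. Here is a minimal sketch, entirely my own construction (none of the numbers or names come from this thread), comparing an agent who follows a fixed rule against one who judges each case from noisy utility estimates:

```python
import random

# Toy model: repeated choices between two acts, A and B, each with an
# unknown true payoff. The "act utilitarian" picks whichever act looks
# better under a noisy estimate; the "rule follower" ignores estimates
# and takes a rule with a fixed average payoff. All constants here are
# arbitrary assumptions, chosen purely for illustration.

random.seed(0)

N_DECISIONS = 10_000
RULE_PAYOFF = 0.6       # assumed average payoff of the simple rule
ESTIMATE_NOISE = 1.0    # std dev of error in case-by-case judgement

def act_utilitarian_payoff() -> float:
    """Choose the act with the higher *estimated* payoff; receive its true payoff."""
    true_a, true_b = random.uniform(0, 1), random.uniform(0, 1)
    est_a = true_a + random.gauss(0, ESTIMATE_NOISE)
    est_b = true_b + random.gauss(0, ESTIMATE_NOISE)
    return true_a if est_a > est_b else true_b

act_avg = sum(act_utilitarian_payoff() for _ in range(N_DECISIONS)) / N_DECISIONS

print(f"case-by-case judgement: {act_avg:.3f}")
print(f"fixed rule:             {RULE_PAYOFF:.3f}")
```

With the noise set that high, the fixed rule wins; shrink ESTIMATE_NOISE toward zero and case-by-case judgement pulls ahead. If anything like that is right, what gradual learners converge on should depend on how reliably they can estimate consequences, which fits the post's hunch that you want principles plus a willingness to override them.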
All of that said, it might be worth reading Sidgwick's The Methods of Ethics, or at least dipping into a few chapters. That is, if you don't mind wading through pages and pages of not very good writing.