jack: (Default)
[personal profile] jack
Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?

But it started to bother me, and I started to think maybe you needed both something like utilitarianism and something like virtue ethics.

I imagined using the same language to describe different ways of getting right something somewhat less subjective and emotional than morality: say, the right way to write reliable computer programs.

Some people would say “here is a list of rules, follow them”. But this approach sucks when the technology and understanding get better: you keep writing code that saves two bytes if it’s compiled on a 1990s 8-bit home computer, because that was a good idea at the time.

Other people would say, “choose whichever outcome will make the code more reliable in the long run”. That’s better than “follow these outdated rules”, but doesn't really tell you what to do hour-by-hour.

In fact, the people who do best seem to be those who have general principles they stick to, like “fix the security holes first”, even if it’s impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don’t stick to them blindly, and are willing to update or bypass those principles if doing otherwise is obviously better in some particular situation.

My thought, which I’m not yet very sure of, is that the same applies to morality.

Date: 2013-04-18 12:29 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
I think the word "doctrine", in more or less the military sense, might be useful here. As I understand the military usage of the term, it refers to a system of rules of thumb giving pre-prepared default answers to types of problem: "in this kind of situation, it's generally a good idea to do that". It forms a sort of layer between the end goal and the detailed implementation: as you say, if everybody making decisions in every situation is required to think through from first principles "what action best advances my absolute ultimate end goal?" then most of them will find it's an intractable problem and never get anywhere. But if you carry around not just your end goal but also a set of guidelines of the form "doing it like this has generally seemed to give good results", then you always have the option of going with the guidelines in cases where it isn't obvious that some other approach works better.

This "moral doctrine" material forms a sort of implementation layer on top of the underlying ethics, but it is not in and of itself a first-class moral entity: it's only a pragmatic means to an end. So on the one hand you have the option of ignoring doctrine as a one-off if you're confident that some particular situation is an exception to the doctrine's rule of thumb, and on the other hand you also have the option of permanently revising the doctrine if you realise at some point that some aspect of it is persistently failing to effectively satisfy the underlying ethic.

So the point of making the division between underlying ethics and doctrine is that it lets you work out whether you're currently interested in the doctrine-type stuff or not: if you're doing the sort of moral philosophy which is concerned above all with the basis of moral systems, then you probably don't care much about the doctrine, you only want to know what criteria the doctrine is assessed against. Whereas if you're actually considering questions of how to make decisions in practice, you probably don't change your mind very often about your underlying ethics and instead spend a lot more time pondering your moral doctrine.

Date: 2013-04-18 03:42 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
I'm sure J S Mill has something to say here:

Ah yes, grep for the paragraph starting: "Again, defenders of utility often find themselves called upon to reply to such objections as this—that there is not time, previous to action, for calculating and weighing the effects of any line of conduct on the general happiness." (Well, actually, at some point go and read the whole thing; it will save you from making points that had already been cleared up in 1863.) Anyway, "secondary principles", there's a thing. It's like with chess: you don't sit down at a chessboard and say "how do I get checkmate, maybe if I do this and he does that and I do this and..."

The complicated point I'm making elsewhere in this thread is a more subtle thing, and I think that the author for that is probably Derek Parfit.