Dec. 31st, 2010

jack: (Default)
Co-op are selling prominently positioned ice cubes. Good luck with that :)
jack: (Default)
The other day I drew an analogy which felt very silly at the time, but which in retrospect I like a lot more. Many people have attempted to define a rational ethical system of some sort -- preferably one which stems from self-evident premises, but at the least, one which is consistent and based on reasonably plausible premises. I agree that would be ideal, but given that none of the proposed systems has worked without recommending massively unworkable or repugnant outcomes, I'm pessimistic that a perfect system is there to be found.

It was implicit in this approach that you were attempting to answer questions of the form "given a choice between A and B (or A and inaction), which one should you choose?" And yes, there are definitely further questions to ask, but telling you how to act is a lot of what an ethical system is.

Then I drew the analogy. When Google first became the pre-eminent search engine, someone commented that the problem they had solved was not finding which pages were relevant to the search terms (a problem which had already yielded considerably to prior human effort), but finding which ten of the many potentially relevant pages were actually most useful.
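To make the distinction concrete, here is a toy sketch in Python -- the pages and the scoring function are entirely invented, and it has nothing to do with how Google actually ranks anything. The point is only that "relevant or not" is a cheap filter, while "which ten matter most" means ordering a possibly enormous set, and that second problem is where the value was.

    # Toy illustration only: relevance is a cheap filter; the hard part is
    # ranking the relevant set and keeping the top ten. The scoring used
    # here (counting inbound links) is invented purely for the example.

    def relevant(pages, terms):
        """Keep any page that mentions every search term."""
        return [p for p in pages if all(t in p["text"].lower() for t in terms)]

    def top_ten(pages, score):
        """Order the (possibly huge) relevant set and keep only ten."""
        return sorted(pages, key=score, reverse=True)[:10]

    pages = [
        {"title": "A", "text": "ethics and trolley problems", "links_in": 120},
        {"title": "B", "text": "the ethics of ice cubes", "links_in": 3},
        # ...imagine a few million more...
    ]

    candidates = relevant(pages, ["ethics"])
    best = top_ten(candidates, score=lambda p: p["links_in"])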

Likewise, there are vast swathes of territory where most popular systems of ethics agree. And yes, there will be endless fiddly corner cases involving hypothetical moral dilemmas about people tied to trolley tracks where people will disagree -- and perhaps always will. The question to me is, which problem do we face more in everyday life: choosing which of a few clear alternatives is right? Or prioritising which, out of millions of potential actions that are all good, is most urgent?

The first is more dramatic: "Should I protect my family, or uphold the law?" But day to day the second is probably more common, and seeing it as an ethical dilemma may be actively unhelpful. If you do, you're inclined to view every choice solely in terms of opportunity cost, discounting the effort spent choosing. Of 3,000,000 good things you might potentially do, if you do the 2,000th best, you're WRONG, because doing the 1st best would have had greater utility, even if only by a little. And yet comparing utilities isn't always guaranteed to work. Several modified ethical systems have tried to weight things to compensate for this.

But if you think of it as a todo-list, it immediately sounds more reasonable. Of these things, which do you want to do first? You obviously don't want to ONLY do things at the bottom of the list. But if you do a few at the top, and a few further down that happen to be convenient, and so on, it's hard to argue with that. You've thrown away the internal and external expectation that you will do the perfect thing, freeing you to try to do good things as fast as you're able.
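If it helps, the difference in attitude can be put in toy code terms -- purely a sketch of the mindset, with invented tasks and invented "good" scores, not a claim that doing good reduces to a scheduling algorithm. The opportunity-cost view insists on the single best action; the todo-list view roughly sorts and then mixes a few things from the top with whatever is cheap and convenient right now.

    # Purely illustrative: invented tasks, invented "good" and "effort" scores.
    todo = [
        {"task": "volunteer at the shelter", "good": 9, "effort": 5},
        {"task": "donate to charity",        "good": 8, "effort": 2},
        {"task": "write a thank-you note",   "good": 4, "effort": 1},
        {"task": "recycle the glass",        "good": 2, "effort": 1},
    ]

    # The opportunity-cost view: anything but the single best is "WRONG".
    best = max(todo, key=lambda t: t["good"])

    # The todo-list view: roughly sort, do a couple from the top,
    # plus whatever further down is cheap and convenient right now.
    by_good = sorted(todo, key=lambda t: t["good"], reverse=True)
    do_now = by_good[:2] + [t for t in by_good[2:] if t["effort"] <= 1]

    for t in do_now:
        print("doing:", t["task"])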
