jack: (Default)
http://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

Axiology, morality and law

I'm not sure how standard this is, but Scott described a three-way breakdown between axiology, morality, and law. Axiology being "which actions are right and which are wrong". Morality being a set of rules for "which principles should you follow, such that you have correct axiology as often as possible", partly from an "I can't evaluate each situation from scratch" standpoint, and partly from a "we need rules that let us coexist with other people even when we disagree" standpoint. And law being "which principles should be codified and imposed on people".

And if there's an overwhelming axiological imperative, that can override morality (e.g. in general you shouldn't do something bad in order to promote a greater good, but if the good is REALLY REALLY REALLY good and you're REALLY REALLY REALLY sure, maybe you should make an exception and feel really bad about it later). And an overwhelming moral imperative can override the law.

But that it's definitely useful to have a law, even if it's not perfect, and to have a morality, even if there are cases where it doesn't work perfectly.

And many moral dilemmas essentially come down to "do you have a precise cut-off for when a general principle should override the immediate benefit in a particular situation?" (Spoiler: no. If it were codified, it would already be a principle.)

Philosophy

I assume this is one of the cases where everyone who's read more philosophy than me says, oh yes, that's obvious, we just didn't explain it clearly before because you didn't know to ask. And also one of those where Scott's not exactly completely right, but brings up important principles I wasn't previously thinking about.

Offsets

Confusingly, this was brought up in the middle of a post about offsets which I thought was interesting but imperfectly explained.

He's talking about when you can make up for a bad thing by doing more good things.

He disagrees with someone elseweb, who says "you can do it for small bad things but not for big bad things". I'm with him so far.

He uses the example of carbon offsets, which is where I'm confused, because to me that's not offsetting the morality, that's offsetting the *action*. If you emit some carbon and then capture it again, that cancels out before you even get to considering its moral weight at all. (Whether the carbon offset WORKS as advertised might be a trickier question.)

Then he goes on to say, you can't usually offset morality, because keeping moral rules is useful for its own sake (in cultivating the habit of doing so, in setting a good example, in a stable society), so if you break one, doing more good things is better, but doesn't really make it ok.

But he theorises that doing something forbidden by axiology but not covered by a more general rule in morality, *could* be offset by unrelated good actions. And that sounds like a reasonable guess but I'm far from sure.
http://slatestarcodex.com/2017/08/09/the-lizard-people-of-alpha-draconis-1-decided-to-build-an-ansible/

Scott wrote another short story. As is usually the case, it's intriguing but there's also much to critique :) The aliens in the story develop great technology, and build an ansible out of negative average preference utilitarianism.

I have a lot of different thoughts inspired by this story. I don't think it's the sort of story where knowing what happens is a problem for reading it, but I will cut a detailed discussion just in case.
