jack: (Default)
[personal profile] jack
My morality posts come along every so often (many archived in cartesiandaemon/tag/society and the in-abeyance series of posts "Things I believe"), but I generally find my understanding has shifted when I come back to questions of absolute morality. Thoughts I've proposed:

* The standard I would normally test things against is something like "is this the right thing for me to do in this situation?" I think most people could not define that in any terms more fundamental (no, not even with utilitarianism), but would have an intuitive idea that some things were and were not the right thing to do.

* What I think of as my morality is a series of rules of thumb that approximate what I would think is right in a particular situation. (I'm not sure of that; it's something I've thought about. It's similar to what pjc said.)

* Morality is created by intelligent life. We might be able to agree a universal standard (I doubt it), but there's no fundamental property of the universe saying "THIS IS THE PURPOSE" the way there is saying "THIS IS GRAVITY". However, it typically behooves us to decide one and create it! (Obviously, many people would disagree, and say there was an inherent truth. I'd doubt it, on the grounds that if it's unmeasurable...)

* Even if we were created by a greater being (effectively God, or God), his PoV isn't necessarily right. I realise this puts me into a philosophically tenuous position -- if I had the power to create the universe, and the aliens in it started doing things I thought were wrong, shouldn't I correct them? But surely, I can only use my moral sense to judge things by. I might be persuaded to trust someone else, but my actions are my responsibility and (to quote) "of course in my judgement! I don't have anyone else's judgement to use".

* Humans are a tribal species. I'm confident a lot of that informs what I feel is right. And I think that's ok -- I don't think things genetically programmed have a magic place in the hierarchy, but nor do I think I have an existence apart from the things that formed me. That's part of me, and I take responsibility for the whole of me.

* WRT subjective morality: obviously lots of details differ in different cultures, and it's not clear if any are better. Often doing what's appropriate is right, but I don't think that really counts as subjective morality.

* WRT subjective morality: obviously you have more luxury to choose in some situations than others. I'm vegetarian, but if I were a hunter/gatherer, I think the survival of me, my relations and my tribe is more important than that of the animals, whether or not I feel sad about it.

* WRT subjective morality: humans are a tribal species. I'm sure some other species doesn't have those imperatives, and would have a very different, or no, idea of morality if they were sapient. This makes me doubt that there can be a universal answer. (There could be, though.)

* And that does mean that often I would disagree with someone else on what's right. And if it's irrelevant, that's ok, but if it's important, one may need to impose one's morality -- e.g. we have police, partly as a self-interested measure to enforce cooperation, and partly to impose what we say is right on people who disagree (starting with murder, etc...)

* Of course, sometimes you think something is wrong but are not in a majority, and have to compromise with the other side, rather than try futilely to enforce it, whether you want to or not.

* In terms of what system of morality actually seems to make sense, I always start thinking in terms of utilitarianism. But I think the fundamental flaws are sufficient that it won't actually provide a satisfactory framework (and nor will any other system I've heard, however good a start it makes).

* It's true, people's conceptions of what morality is vary wildly over time and geography; I know next to nothing about it, and it's an obvious place to start to consider where we may go next. Modern liberal people like me instinctively think in terms of "harm", but other cultures may think more in terms of "some things are wrong".

* A few things from previous posts. Euthyphro dilemma "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?" That is, if there is some sort of constant (if not universal) morality, is God good in that he follows said morality? Or did God design that morality? It may not make any practical difference.

And it's not relevant to every brand of Christianity or religion (one might well come to trust God without having a FAQ of more theoretical questions). However, it was the first question that occurred to me when people started to proselytize me at university. There was a lot more to the question that I didn't understand. But I was very disappointed to see most people didn't even understand the question, let alone have considered it, let alone have a widely-agreed answer. That's not necessary, but it felt like a fairly fundamental plot point to me, and for ages my list of top ten "interesting questions to ask proselytizers" (I am rather contrary, and used to be rather tactless) stopped at that one, as I never got any further... :)

* Hume's Fork. This refers to two different sorts of truth: what is, and what ought to be. The details are difficult to conceive (probably someone else will explain?), but the point is that you can't in general deduce the second from the first. You can observe things, and deduce from them that if you do this, that will happen, etc. But to decide which things you should cause to happen, you need to make decisions about which are desirable.

Whereas many arguments start from a set of facts, and deduce that we "should" do something, implicitly inserting some moral axiom. Where the axiom is well agreed ("I should do this, else I'll get ill, and suffer"), that's fine. But where it's questionable ("blah is natural, therefore we should do it"), the argument is flawed.

This seemed like an obvious distinction once I'd thought of it (and later discovered Hume's essays on it, which were unsurprisingly much better thought out :)), but I think some people would disagree, and say some moral things can be deduced. Also, I don't think Hume's fork is a complete understanding, merely a good point you should have in your mind on the subject.

* Relatedly, often differences in morals come from differences in implementation, not just in what's right. If I think the government should do X, and you think Y, it's quite probable that both (a) I place a greater weight on what X is trying to achieve, (b) I assume X is of greater relevance than Y, because I have more experience of what X is dealing with, and (c) I think X is more likely to have effective results. One of those is a question of morals, one of fact, and one in-between, but they're all tied up.

That's not an exhaustive summary (nor a canonical answer), but a few of the things I've thought. (Of course, most have been argued here before; I apologise if I've re-brought-up something you thought was settled :))


Date: 2008-10-03 03:50 pm (UTC)
simont: A picture of me in 2016 (Default)
From: [personal profile] simont
What if one person thinks torturing and killing a Jew is morally right, and another thinks it is wrong. Can we say that either are correct? Or can we only say that they have different equally valid opinions?

I believe there is no objective morality, but I wouldn't say that the two had equally valid opinions. I would simply say that the two had opinions.

I think that the word "valid", applied to something like an opinion, is essentially meaningless. Applied to a proof or logical argument, it has a well defined meaning in terms of whether the argument conforms to the pre-agreed rules for such arguments; but there are no pre-agreed rules for opinions, since the only qualification for something to be called an opinion is that it should exist in someone's head and they should honestly believe it. (Indeed, the sort of internal inconsistency that would disqualify a logical argument from validity certainly does not seem to stop people from holding all sorts of opinions!)

But the word "valid", when used of an opinion, somehow seems to carry a connotation of, well, if not approval then at least admission of the opinion to some sort of club of basically "accepted" opinions. So I would argue that statements like your proposed one – taking an opinion widely considered atrocious and its opposite, and declaring that a moral relativist would consider the two "equally valid" – have the effect of insidious anti-relativist spin: they're suggesting by their choice of words that the moral relativist might consider both opinions acceptable in some sense, perhaps even equally so, without ever quite saying it in terms that can't be plausibly denied when the statement is challenged. But a relativist need not consider your Godwin-tempting pair of example opinions to be equally acceptable in any actually meaningful sense, beyond acknowledging that both of them are actual opinions actually held by people: I would certainly not accept the first, in the sense that if I heard it proposed I would argue against it in an attempt to persuade its holder out of it, or failing that to at least discourage them from acting on it and/or dissuade anyone else from being swayed by their claim.

If I were put in a philosophy classroom (or an equivalent LJ debate) and asked to comment on the two viewpoints, I would not be able to say that either was correct in any sense which I could defend with an unimpeachable argument from first principles. I would have to say that they are simply opinions: not "valid" or "invalid", just ideas in people's heads which affect their behaviour, and which may or may not agree with ideas on related subjects in other people's heads.

Anywhere else in the world, I most certainly can say that the second person was correct, which in this context is widely understood to mean the same thing as "it agrees with my own opinion on the matter". I can also apply social pressure to encourage my friends to hold similar opinions (or at least to keep them quiet if they don't); I can vote for people who support criminal justice measures which help to deter people with the opposite opinion from putting it into practice; if I had children I could bring them up to share my opinions. All of these things are more useful in the cause of actually stopping people from doing it than a proof from first principles in a philosophy classroom would be even if one existed, so it doesn't seem vitally important to me that one doesn't.

Date: 2008-10-03 04:03 pm (UTC)
From: [identity profile] cartesiandaemon.livejournal.com
Ooh, very well put, thank you.