From palmer1984
Oct. 3rd, 2008 02:47 pm
My morality posts come along every so often (many archived in cartesiandaemon/tag/society and the abeyed series of posts Things I believe), but I generally find my understanding has shifted when I come back to questions of absolute morality. Thoughts I've proposed:
* The standard I would normally test things against is something like "is this the right thing for me to do in this situation"? I think most people could not define that in any terms more fundamental (no, not even with utilitarianism), but would have an intuitive idea that some things were and were not the right thing to do
* What I think of as my morality is a series of rules of thumb that approximate what I would think is right in a particular situation. (I'm not sure of that; it's something I've thought about. It's similar to what pjc said.)
* Morality is created by intelligent life. We might be able to agree a universal standard (I doubt it), but there's no fundamental property of the universe saying "THIS IS THE PURPOSE" the way there is saying "THIS IS GRAVITY". However, it typically behooves us to decide one and create it! (Obviously, many people would disagree, and say there was an inherent truth. I'd doubt it, on the grounds that if it's unmeasurable...)
* Even if we were created by a greater being (effectively God, or God), his PoV isn't necessarily right. I realise this puts me into a philosophically tenuous position -- if I had the power to create the universe, and the aliens in it started doing things I thought were wrong, shouldn't I correct them? But surely, I can only use my moral sense to judge things by. I might be persuaded to trust someone else, but my actions are my responsibility and (to quote) "of course in my judgement! I don't have anyone else's judgement to use".
* Humans are a tribal species. I'm confident a lot of that informs what I feel is right. And I think that's ok -- I don't think things genetically programmed have a magic place in the hierarchy, but nor do I think I have an existence apart from the things that formed me. That's part of me, and I take responsibility for the whole of me.
* WRT subjective morality: obviously lots of details differ in different cultures, and it's not clear if any are better. Often doing what's appropriate is right, but I don't think that really counts as subjective morality.
* WRT subjective morality: obviously you have more luxury to choose in some situations than others. I'm vegetarian, but if I were a hunter/gatherer, I think the survival of me, my relations and my tribe is more important than that of the animals, whether or not I feel sad about it.
* WRT subjective morality: humans are a tribal species. I'm sure some other species doesn't have those imperatives, and would have a very different, or no, idea of morality if they were sapient. This makes me doubt that there can be a universal answer. (There could be.)
* And that does mean, often I would disagree with someone else on what's right. And if it's irrelevant, that's ok, but if it's important, one may need to impose one's morality -- eg. we have police, partly as a self-interested measure to enforce cooperation, and partly to impose what we say is right on people who disagree (starting with murder, etc...)
* Of course, sometimes you think something is wrong but are not in a majority, and have to compromise with the other side, rather than try futilely to enforce it, whether you want to or not.
* In terms of what system of morality actually seems to make sense, I always start thinking in terms of utilitarianism. But I think the fundamental flaws are sufficient that it won't actually provide a satisfactory framework (and nor will any other system I've heard of, however good a start it makes).
* It's true, people's conceptions of what morality is vary wildly over time and geography; I know next to nothing about it, and it's an obvious place to start to consider where we may go next. Modern liberal people like me instinctively think in terms of "harm", but other cultures may think more in terms of "some things are wrong".
* A few things from previous posts. Euthyphro dilemma "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?" That is, if there is some sort of constant (if not universal) morality, is God good in that he follows said morality? Or did God design that morality? It may not make any practical difference.
And it's not relevant to every brand of Christianity or religion (one might well come to trust God without having a FAQ of more theoretical questions). However, it was the first question that occurred to me when people started to proselytize me at university. There was a lot more to the question that I didn't understand. But I was very disappointed to see most people didn't even understand the question, let alone have considered it, let alone have a widely-agreed answer. That's not necessary, but it felt like a fairly fundamental plot point to me, and for ages my list of top ten "interesting questions to ask proselytizers" (I am rather contrary, and used to be rather tactless) stopped at that one, as I never got any further... :)
* Hume's Fork. This refers to two different sorts of truth: what is, and what ought to be. The details are difficult to conceive (probably someone else will explain?), but the point is that you can't in general deduce the second from the first. You can see things, and deduce from them that if you do this, that will happen, etc. But to decide which things you should cause to happen, you need to make decisions about which are desirable.
Whereas many arguments start from a set of facts, and deduce we "should" do something, implicitly inserting some moral axiom. Where the axiom is well agreed ("I should do this, else I'll get ill, and suffer"), that's fine. But where it's questionable ("blah is natural, therefore we should do it"), the argument is flawed.
This seemed like an obvious distinction once I'd thought of it (and later discovered Hume's essays on it, which were unsurprisingly much better thought out :)), but I think some people would disagree, and say some moral things can be deduced. Also, I don't think Hume's fork is a complete understanding, merely a good point you should have in your mind on the subject.
* Relatedly, often differences in morals come from differences in implementation, not just in what's right. If I think the government should do X, and you think Y, it's quite probable that (a) I place a greater weight on what X is trying to achieve, (b) I assume X is of greater relevance than Y, because I have more experience of what X is dealing with, and (c) I think X is more likely to have effective results. One of those is a question of morals, one of fact, and one in-between, but they're all tied[1] up.
That's no exhaustive summary (nor canonical answer), but a few of the things I've thought. (Of course, most have been argued here before, I apologise if I've re-brought-up something you thought was settled :))
[1] Typo: tided up.
no subject
Date: 2008-10-03 03:14 pm (UTC)
Thank you :) I appreciate it.
I was meaning to make a post about objective versus subjective morality myself, but everyone else has beaten me to it :-)
I'm sure there is more to say :)
...is morally right, and another thinks it is wrong. Can we say that either is correct? Or can we only say that they have different, equally valid opinions?
Can I pick a different example? But anyway, what I'm getting at there isn't how different people might see it -- that's the subject of the whole post, and I don't pretend my answers are definitive -- but rather that right-and-wrong are things that govern choices-of-actions, heading off any more metaphysical definitions that don't affect anything we can perceive. (I don't know if that would have come up, but it seemed a good place to start from.)
God is one possible explanation (I'd argue the best) for the existence of objective morality.
Agreed :) (Obviously, I'm currently believing in neither, but certainly God would have a lot to do with objective morality, and vice versa, and the two share some of the same arguments.)
Possibly one route to go is to argue via inference to the best explanation.
How do you mean? (I know I'm wading into murky waters with abandoning an objective morality; cf. a universe where everyone thinks something different than what I think is right. Though I still feel it's the right way to start...)
What does 'better' here mean?
I should have used more examples. What I mean is that in England it might be rude to say X, but in Australia it might be perfectly normal. And by (most) standards of morality, it's better to avoid being rude without justification, even though this involves saying different things in different countries. Some examples may have a moral difference and some don't (in any/most moralities), and this is consistent with both subjective and objective morality.
Again, I wonder how you determine if something is 'satisfactory'.
Well, I think most moral frameworks I've seen proposed encapsulate something important (eg. utilitarianism is all about minimising harm/maximising good experiences, which I think is an important insight, whether or not it's ultimately right). But all of them, on closer inspection, (a) contain inconsistencies and (b) advocate actions I disagree with.
Maybe that's too strong, but that's the impression I get.