The excluded middle of Hume's Fork
Aug. 18th, 2010 04:42 pm
Preamble
At some point I came to the realisation that there is a distinction between statements about how the world IS (facts which can be decided by evidence and reason) and statements about how the world should be, about aims and ethics (which can be refined and studied with evidence and reason, but ultimately have to stem from somewhere else).[1]
This is an important distinction because if someone (or you yourself) says "X is clearly wrong", and attempts to justify it solely in terms of observations about facts, then you can be fairly sure that they either committed a fallacy or introduced a hidden moral assumption somewhere -- often a FALSE one. It sounds obvious put like that, but people have spent thousands of years making that sort of statement.
Digression
There is a grey area concerning statistical facts and rules of thumb, e.g. deciding if economic strategy X will be a good thing for the people of a country. That is, in principle, a factual question, but it may not actually be decidable on the evidence available (either because the evidence is too hard to gather, or because analysing it in full isn't worth the time), which means people have to take a provisional opinion about the best strategy, which is likely to be a mix of generalisations from the evidence that IS available, and of different priorities in terms of the desirability of the likely results.
However, I don't think that's conceptually a separate category. It's that factual questions are themselves very murky, and we spend most of our time operating with reasonable certainty, not absolute certainty, whether morality has anything to do with it or not.
Realisation
However, it now seems to me there IS an overlap: a set of statements which are somewhat factual, yet also somewhat faith/morality based.
For instance, statements that might fall into this category, and I generally agree with:
- Advancing human knowledge is of benefit both for me and for society in general
- Cooperation with other humans is of benefit both for me and for society in general
- Honesty is of benefit both for me and for society in general
And statements that I may not agree with:
- Royalty and nobility have an obligation to rule well
- God exists, and is the source of morality
In all cases I think people would say that, in principle, the statement is subject to factual testing. In principle, we might discover that, say, actually increasing human knowledge makes our lives worse in most ways. However, we have a large emotional commitment to holding to the belief as long as it's at all possible to do so.
This is interesting, because for purely factual questions, I would say it's incorrect to hold to them in the face of convincing evidence (it's often correct to hold to them provisionally while considering the evidence, but not to fight the evidence). People often do this, and I generally consider it the sign of a weak argument (or at least, an argument that needs a lot more work).
To put it in other terms, holding to a belief in the face of evidence fails the objectivity test of asking "what evidence WOULD it take to convince you?", as the response is often "I can't imagine anything that would" -- an indication that even if someone SAYS it's a factual belief, they're actually holding it on faith.
However, with the examples I described, I have to admit I would change my mind faced with OVERWHELMING evidence, but I would also cling to them as long as possible. And, even though it goes against my instincts for "how to be objective", I think that's the right thing to do.
Postamble
Realising I do have beliefs I cling to against evidence makes me much more sympathetic to other people who do so: with my previous belief structure I would basically have dismissed the idea.
On the other hand, there is still the question of how to choose such beliefs. Pure morality beliefs, like "murder is bad" and "harm is bad", I think most people agree on, even if they assign different weights to them. But with both, there is the question of how to decide on them.
To some extent they are plucked straight from our underlying drives. Why is murder bad? Because we instinctively think it is (because evolution or God made us that way, or because society trained us to). But which of these do we accept, and which do we dismiss as spurious? Most people accept harming other humans is bad. Most people accept harming stuffed toys with big eyes produces a negative emotional reaction, but isn't inherently bad. People disagree about humans from other tribes, about animals, about aliens. Is there a way of deciding, or do we have to accept that we were given no objective guide, and must adopt one on faith?
Footnotes
[1] And, um, some fundamental assumptions like "logic works" and "Occam's razor" which are arguably factual, yet also faith-based. I'm not sure how that fits into the system.