jack: (Default)
Do we have free will?

Forget your preconceptions. If you ask, "what does it look like to have free will" or "what does it look like NOT to have free will", it rapidly becomes a lot more obvious what you actually mean by that question.

What does it mean to NOT have free will? Well, think of examples:

* Someone asks you to pick a number between 1 and 10. You felt free to choose any number! But, unbeknownst to you, it turned out that you were more likely to pick 7 than any other number
* You are trying to concentrate on something. But there's some delicious food / a person you're fond of / a persistent worry in your brain which keeps stopping you
* You're drunk, tired, too young, or otherwise impaired, and can't fully understand the options and have to choose between them based on a very limited understanding

All of those are cases where you're TRYING to do something, but something stops you. Often something INSIDE YOUR BRAIN. And I mean, that bit is still you. "Being bad at choosing random numbers" is just as much a part of you as anything else.

I used to stop the argument there. That's who you ARE. It's still "you" doing the choosing. If you don't like that you're not a pure abstract reasoning machine, that's not a "don't have free will" problem.

I used to jokingly phrase it as a syllogism:

Premise: Your actions are predicted by deterministic rules.
Premise: You control your actions.
Conclusion: You ARE deterministic rules.

But that's not actually the whole story. Those "something in your brain stops you" moments happen more at some times than at others. Choosing to do something else instead of eating when you're hungry is HARD, but you usually can if you try hard enough. Some things are harder to suppress. And you can't choose to "not sleep" or "not breathe" by strength of will, however much you try.

Sometimes more free will, sometimes less.

What about all those OTHER times? Everyone knows that sometimes something overrides your attempt to decide something. But the rest of the time, when it feels like you have a free choice, do you really?

Well, there are some other exceptions. But... we know that a lot of the time, your brain is running on heuristics that produce weird answers like "the first choice offered" or "the choice most like what you remember from childhood". Even when we don't know that it's half-assing the analytical reasoning, it probably is some of the time. That's just who we are.

What about when people say "no free will"? Like, an implacable force compelling you towards certain outcomes? Well, if you count the laws of physics, then yes, we already covered that. If you mean something else, then: "there's no way to tell, but probably not".

What about practical consequences? Well, yes, act like you have free will. And what about other people? Well, sometimes they do and sometimes they don't -- interact with them in ways that work, not ones that meet some theoretical standard of "fair".
jack: (Default)
I saw someone on tumblr say "Be a virtue ethicist toward yourself, a deontologist towards others, and a utilitarian towards policy". I can't find the link now, and I don't think I have the words exactly right.

But the more I think about it, the more I think, "isn't that the perfect description?"

Types of ethics

I tend to think of myself as utilitarian, even though I know it isn't perfect. In fact, I tend to think of *everyone* as utilitarian, in that I think most people find that the good thing about an ethical system is that it makes things better for themselves and others -- even if there are people who genuinely think "I will do the right thing even if I'm damned for it".

However, I think a big dollop of the other ethical systems is helpful in practice.

Self

The thing is, most of the time you're not facing a stark choice, "A or B". You're facing an endless series of choices, some small, some big, and you will never get them all right, thanks to a mix of "I don't have the energy to decide every case perfectly" and "I'm not that much of a saint (even if I should be)".

So cultivating the habit of making the virtuous choice is, most of the time, more useful than agonising over each individual choice. A lot of good happens because of people who try to always be compassionate, and so are compassionate when it matters. A lot of harm happens when people think "oh, it doesn't matter that much, don't I deserve something for myself?" and get caught out when it DOES matter.

Others

When it comes to how you treat others, you want to follow your virtue ethics, but you need to default to some deontological rules too, because consistency is beneficial: e.g. usually not imposing on people who don't want you to, even if you think it would help.

And when it comes to your opinion of other people's morals, you can judge their intentions, and please do, help them if you can, but in practice, you often need to judge their actions: if they act harmfully, you may need to protect yourself, them, or others, regardless of WHY they act harmfully. If they act virtuously, it's not productive to second guess them.

Utilitarian

And when you're considering policy, you often don't have the luxury of doing what seems right, if something else is proved to be more helpful in practice, directly or indirectly.

Hm, now I'm not sure it made as much sense as when I first saw it, but I still keep thinking about it.
jack: (Default)
A recent conversation about Defence Against the Dark Arts teachers made me realise I use "evil" in two different ways. Sometimes I mean "doing something bad on purpose". Sometimes I mean "doing harm to other people". The greatest harm is often done by people who are indifferent to it. But people who maliciously hurt others are awful in a special way.

A couple of the professors were very indifferent-evil. They didn't set out to hurt people, but they just didn't see the awful things they were doing to them. Others were malicious-evil: they were killing people all over the place.

And of course, it's more complicated than that. Most people who cause harm by inattention SHOULD notice, and exist somewhere on a scale from "I'm 8 and I haven't broken away from the worldview I'm immersed in" to "I'm really, really, really wilfully ignorant, and I must be actively avoiding thinking about this".

But insofar as it's helpful to be able to think about bad things, it's useful to realise that the two often overlap, but that when I say "very evil" I might mean either of two different things.
jack: (Default)
Scott Alexander made the point that even if two concepts don't have a clearly defined boundary, they can still have a clearly defined difference. Eg. there's no official number of pebbles that makes a heap, or height that makes a building "tall", but most people would agree that two pebbles are not a heap and 50 pebbles are, and that a bungalow isn't tall and a skyscraper is.

Some concepts do come with a clearly defined boundary, and those are often useful concepts. But many concepts are useful with a clear difference even without a clear boundary.

It occurred to me the same might be said about truth. We may not have an absolute notion of what makes a statement true, but we can still say that some statements are closer to it (eg. mathematical proofs), some are clearly in that direction (eg. well tested scientific theories, things you've lots of experience of), and some are less close, but still better than nothing.
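
As a throwaway sketch of the idea (the thresholds are just the pebble numbers above; the "unclear" band in the middle is my own invention), in Python:

    def heap_verdict(pebbles):
        # There's no official cutoff, so don't pretend to have one:
        # answer confidently only when far from the fuzzy middle.
        if pebbles <= 2:
            return "not a heap"
        if pebbles >= 50:
            return "heap"
        return "unclear -- no defined boundary here"

    print(heap_verdict(2))   # not a heap
    print(heap_verdict(50))  # heap
    print(heap_verdict(20))  # unclear

The function never commits to a boundary, but the difference between its two confident answers is perfectly clear.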
jack: (Default)
Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?

But it started to bother me, and I started to think maybe you needed both something like utilitarianism and something like virtue ethics.

I imagined using the same language to describe different ways of getting something right that’s somewhat less subjective and emotional than morality, say, the right way to write reliable computer programs.

Some people would say “here is a list of rules, follow them”. But this approach sucks as technology and understanding get better, because you keep writing code that saves two bytes when compiled on a 1990s 8-bit home computer, just because that was a good idea at the time.

Other people would say, “choose whichever outcome will make the code more reliable in the long run”. That’s better than “follow these outdated rules”, but doesn't really tell you what to do hour-by-hour.

In fact, the people who do best seem to be those who have general principles they stick to, like “fix the security holes first”, even if it’s impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don’t stick to them blindly, and are willing to update or bypass those principles if the opposite is obviously better in some particular situation.
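
As a minimal sketch of what that might look like in code -- all the issue data, scores, and the override are invented for illustration, not a real triage policy:

    # Hypothetical bug-tracker triage: "fix the security holes first" as the
    # standing principle, plus an explicit escape hatch for the rare case
    # where something else is obviously more important.
    issues = [
        {"id": 101, "kind": "feature",  "value": 9},
        {"id": 102, "kind": "security", "value": 3},
        {"id": 103, "kind": "bugfix",   "value": 6},
    ]
    overrides = {101}  # decided consciously, case by case -- not by the rule

    def priority(issue):
        if issue["id"] in overrides:
            return (0, -issue["value"])  # deliberately bumped past the principle
        if issue["kind"] == "security":
            return (1, -issue["value"])  # the standing principle
        return (2, -issue["value"])      # everything else, by estimated value

    for issue in sorted(issues, key=priority):
        print(issue["id"], issue["kind"])

The principle does the routine work; the override set is small, visible, and has to be argued for explicitly.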

My thought, which I’m not yet very sure of, is the same applies to morality.
jack: (Default)
Often you get an argument something like this:

Moderator: Should so-and-so be prosecuted for such-and-such act of violence that was sort of self defence but maybe not?
A: Of course! It was murder!
B: No it wasn't!
A: Yes it was!

This is similar to Scott's Worst Argument in the World.

The real question is, "is this something that should be punished under the law, or not?" Usually you can say "the law should punish murder" and "we all know what murder is", and get a helpful shortcut to the answer.

However, you don't have a god-given right to have your vocabulary do your thinking for you. People define murder in slightly different ways depending on context: any killing, unpleasant killing, illegal killing, immoral killing, aggressive killing, etc. And people have slightly different ideas of what's ok in self defence. But there's no particular reason to expect that those will correlate on difficult edge cases: in fact, if they're determined basically by chance, they probably won't.
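
A toy simulation of that last claim -- the coin-flip model of edge-case verdicts is my own assumption, purely illustrative: if two people's definitions each give effectively random verdicts on the genuinely hard cases, they'll agree only about half the time.

    import random

    random.seed(0)
    n = 10_000
    a_says_murder = [random.random() < 0.5 for _ in range(n)]  # A's edge-case verdicts
    b_says_murder = [random.random() < 0.5 for _ in range(n)]  # B's, independently
    agreement = sum(x == y for x, y in zip(a_says_murder, b_says_murder)) / n
    print(f"agreement on edge cases: {agreement:.1%}")  # ~50%, i.e. no better than chance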

The real question with real consequences is "what should the law do" (or "what should we do")?

Answering the question "should we call this murder" makes the case easier to talk about, but doesn't actually tell you anything about the best outcome. However, it's seductively easy for both A and B to assume that by deciding "is this murder" they've answered the only question that matters -- hence you may have A and B disagreeing violently about whether it's murder, when in fact they've no idea whether or not they agree about what should be done about it.

I call "is it murder" the Nought Thousand Dollar Question, because it sounds really emotive and important, but actually has nearly zero consequence to the actual debate.
jack: (Default)
Diax's Rake

Diax's Rake says "Don't believe something simply because you want it to be true". It's from Anathem -- I'm not sure if there's a real-world name?

It sounds obvious, but in fact I keep coming across it in contexts where I hadn't realised "believing in something because I wanted to" was what people were doing.

For instance, the most common argument that there's an absolute standard of morality seems to be "But if we didn't, it would be really terrible. Blah blah blah Hitler." But that seems to be an argument for why a world without an absolute standard would be undesirable to live in; it offers no reason other than sheer optimism to think that we don't live in such a world.

But another case seems to be free will. Why do people think we have free will? The most common argument seems to be "But if we didn't, it would be terrible! Our lives would be pointless, and we wouldn't be able to philosophically justify prison sentences." But again, that's "we WANT to have free will", not "here's a reason to think it's LIKELY we have free will".

Free Will

However, that's somewhat misleading. I feel like at some point society started toying with the idea that "having free will" or "not having free will" made no seriously falsifiable assertions, even in principle.

At which point, some people said "Look, our future actions are basically predetermined by the physics of our minds. 'free will' is basically a meaningless concept."

And others said, "No, wait. Look at what we associate with 'free will': rights, responsibilities, choices, law, etc, etc. We do have all that, and we don't care if it's predetermined or not. I think 'not having free will' is basically a meaningless concept."

And the thing is, THEY'RE BOTH RIGHT. "Free will" being meaningless and "not having free will" being meaningless are exactly the same statement; they just SOUND like they're opposed. At most they're somewhat opposed: they agree about how the world works, but disagree about whether "free will" is an appropriate description of it.

And arguing about "should we use this word or not" is almost always pointless, with people regressing to assuming that they're still arguing for the concept they used to associate with the word, without recognising that the other people don't disagree -- they're just doing the same thing.

Many people who know more about philosophy than me seem to self-define as compatibilists (the idea that free will and determinism aren't contradictory?). If someone says they're a compatibilist, I generally find I completely agree with how they say the universe works. But I don't understand the assertion that free will exists. Is there a basis for that? Is it not just pandering to people who have a really intense intuition that free will is a well-defined concept that exists, at the expense of alienating people who at some point became convinced it doesn't?
jack: (Default)
A standard warning is to avoid basing your actions on something being true simply because you want it to be true, not because you've any reason to think it is. Eg. Everyone agrees that "The ship can't be sinking, that would be terrible" is understandable if there's nothing you can do about it, but catastrophic if you're in charge of planning for lifeboats. But "There must be meaning in the universe, because otherwise it would be terrible" sounds very seductive, but is not conclusive for exactly the same reason!

However, a related but subtly different case is something you want to believe because the effects of believing it are good. If believing X makes you or other people act in good ways, you are massively incentivised to want to believe X, to hope X is true, and even to avoid any doubts about X, even if they're reasonable.

I think this is understandable and possibly wise, even though it leads to a sort of double-think of maintaining the illusion that X, while simultaneously evaluating the truth of X and trying to investigate alternatives to X, without ever admitting that's what you're doing.

However, I had a very startled moment when I realised that many supposed rationalists (including me) tended to believe, like an article of faith, that it's better to make decisions based on true information. Yes, I think that's in general true. But if, hypothetically, I provided a celebrated atheist rationalist with evidence convincing in his or her eyes that believing a popular religion WAS actually much more beneficial to humanity, would they go ahead and change their mind? In almost all cases, probably not.

That sounded understandable, but I realised it was exactly the same process many people use to justify a belief in heaven: true or not, it's desirable that it's true, and believing it is beneficial, so let's not rock the boat. Putting those two next to each other made me suddenly very uncertain.
jack: (Default)
Something I forgot to add in the previous post is that a very similar question applies to time travel.

Most stories[1] dealing with time travel implicitly take a stance on history either being deterministic (a single world-track, or a steady loop in a multiverse of world-tracks) or being changeable.

But the narrative typically only follows one character through one world-track. Precisely because, if you start asking questions like "if you change history, what happens to all the people who were already there?" and "can a narrative jump about between world-tracks?", your brain will melt with indecision. It's almost the same question as having the ability to make multiple virtual copies of yourself: if there are lots, which one is "the" one the story follows?

[1] Stories being the best proxies we have for "how we would deal with blah in real life"
jack: (Default)
A question oft posed in science fiction and amateur philosophy is what constitutes continuity of my existence. That is, when I say "I want to do blah to achieve blah", what counts as "I" for this purpose?

A typical science-fictional spectrum of options is something like:

1. Myself, 5 seconds from now
2. Myself, after I sleep
3. Myself, 20 years from now
4. If you destructively and exhaustively scanned my body at the atomic level and then reassembled it from new atoms so it functioned the same.
5. If you destructively and exhaustively scanned my body at the atomic (or maybe only neuron) level and then simulated it in a sufficiently accurate simulation in a really really accurate computer.
6. If you found the series of simple encodings of the successive simulation states of #5 embedded in the binary digits of pi[1].

Greg Egan and Schlock Mercenary provide decent examples of several.
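
For a feel of what option 6 is banking on: if pi's digits are "normal" (widely believed, still unproven), every finite digit string occurs somewhere, but the expected depth grows exponentially with the string's length. A minimal sketch using the mpmath library (decimal digits rather than binary, for convenience; the six nines are just a famous short pattern, nothing like a real encoded brain state):

    import mpmath

    mpmath.mp.dps = 100_000  # work with 100,000 decimal places of pi
    digits = mpmath.nstr(+mpmath.pi, 100_000).replace(".", "")
    print(digits.find("999999"))  # 762 -- the "Feynman point", absurdly early
    # A k-digit pattern is expected at depth around 10^k. An encoded simulation
    # state would be billions of digits long, i.e. at depth ~10^(10^9).

So short patterns really are lurking near the surface; the encodings in option 6 are merely a little deeper.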

Read more... )
jack: (Default)
The other day I drew an analogy which at the time felt very silly, but which in retrospect I like a lot more. Many people have attempted to define a rational ethical system of some sort -- preferably one which stems from self-evident premises, but at the least one which is consistent and based on reasonably plausible premises. Which I agree would be ideal; but given that none of the proposed systems has worked without prescribing massively unworkable or repugnant outcomes, I'm pessimistic that there's a perfect system to be found.

It was implicit in this approach that you were attempting to decide an answer to questions of the form "given a choice between A and B (or A and inaction), which one should you choose?" And yes, there are definitely further questions to ask, but telling you how to act is a lot of what an ethical system is for.

Then I drew the analogy. When Google first became the pre-eminent search engine, someone commented that the problem it had solved was not finding which pages were relevant to the search terms (a problem which had already yielded considerably to prior human effort), but finding which ten, out of all the potentially relevant pages, were actually the most useful.

Likewise, there are vast swathes of territory where most popular systems of ethics agree. And yes, there will be endless fiddly corner cases involving hypothetical moral dilemmas about people tied to trolley-car tracks where people will disagree -- and perhaps always will. The question to me is: which problem do we face more in everyday life? Choosing which of a few clear alternatives is right? Or prioritising, out of millions of potential actions which are all good, which is most urgent?

The first is more dramatic. "Should I protect my family, or uphold the law?" But day-to-day the second is probably more common, and seeing it as an ethical dilemma may be actively unhelpful. If you do, you're inclined to view any choice solely in terms of its opportunity-cost, discounting the effort spent choosing. Of 3,000,000 good things you might potentially do, if you do the 2000th best, you're WRONG, because doing the 1st best would have a greater utility, even if only by a little. And yet comparing utilities is not always guaranteed to work (?). Several modified ethical systems have tried to weight things to compensate for this.

But if you think of it as a todo-list, it immediately sounds more reasonable. Of these things, which do you want to do first? You obviously don't want to ONLY do things at the bottom of the list. But if you do a few at the top, and a few further down that are convenient to you, and so on, it's hard to argue with that. You've thrown away the internal and external expectation that you will do the perfect thing, freeing you to do good things as fast as you're able.
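
A toy sketch of the todo-list version -- the scores, efforts, and the "top few plus the easy ones" rule are all my invention:

    # Each entry: (action, estimated good it does, effort it costs you)
    actions = [
        ("organise a fundraiser", 10, 9),
        ("volunteer all weekend",  9, 8),
        ("donate to a charity",    8, 2),
        ("email a lonely friend",  3, 1),
        ("recycle this can",       1, 1),
    ]

    by_good = sorted(actions, key=lambda a: -a[1])
    # A few from the top of the list, plus whatever else is cheap and convenient.
    chosen = by_good[:2] + [a for a in by_good[2:] if a[2] <= 2]
    for name, good, effort in chosen:
        print(name)

Nothing here is "the one right answer"; the point is that the sort replaces the agonising.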
jack: (Default)
The story

Hypothetical Hallie was hypothetically a biologist. (She was an "oh, look, nature" biologist rather than an "ooh, atoms" biologist). She observed the natural world, and lo, she observed that (more or less) animals and plants fell into various species. And that, lo, parts of plants could also be categorised: many plants had similar structures that "came from the same specific part of the flower" and "promulgated seeds" and fulfilled a couple of other criteria. She dubbed these "fruits".

This was extremely useful: if Hallie wanted to make generalisations about the properties of fruits, she didn't necessarily need to deal with each plant individually. Having once checked that each fruit obeyed each criterion, she could thereafter observe just one criterion, infer that something was a fruit, and conclude that it no doubt fulfilled the others as well.

However, this rosy simplistic picture did not last very long at all. Quickly, people came to Hallie and pointed out problems. Bacteria came in different sorts, but they didn't really fulfil the criteria to be called "species". "No problem," said Hallie. "We'll divide them roughly into species, but remember there are fuzzy edge cases where a bacterium doesn't really fit." People pointed to the duck-billed platypus. "It's not exactly a mammal, but what else can we call it?" "No problem," said Hallie, "we'll call it a mammal, but remember that there are a few exceptions to the criteria."

However, Naive Nelly was nowhere near as intelligent as Hallie, and rather idolised her. When she saw what she was doing, she also adopted all the clever new words Hallie used. However, she completely failed to notice that they were an (incredibly useful) thinking aid, not an inherent part of the structure of the universe. In fact, she was so stubborn she would often react violently if people tried to persuade her otherwise!

She did the same thing Hallie did, but with the concepts she was familiar with. For instance, she started adding things to the definition of "fruit" such as "succulent flesh good for eating" and "not a vegetable" and so on. And she was right, this was very useful. But she was also wrong, because this usage differed from Hallie's, and eventually the two came into conflict.

Hallie recognised that you could normally reason by saying "This has property X. Therefore it's a fruit. Therefore it's not a vegetable." But that sometimes, something had most but not all of the properties of a fruit, and then what could you do?

Nelly tried stubbornly to ensure that such things were, or were not, in the definition. Hallie tried repeatedly to reason with her. "Look, Nelly," she'd say. "It's just a shortcut. It's nice, but it DOESN'T ALWAYS WORK. When it doesn't work you have to actually turn your brain on and decide FOR YOURSELF whether something is a vegetable. Thinking about fruit DOESN'T HELP YOU."

But poor Nelly had got out of the habit of using her brain, and couldn't. She kept insisting that Hallie pick one, and eventually, poor Hallie, fed up of the argument, said "OK, it's much MORE like a fruit than not. In fact, the only criterion of being a fruit it fails was 'not being a vegetable' which was never my criterion in the first place."

But with other concepts, it was much harder to give Nelly a pat answer however much she insisted, and sometimes Hallie would find that if something fit many of the criteria but not some important ones, she would first answer one way, and then forget, and answer the other way.

And poor Nelly would see this as an inconsistency in Hallie, rather than an inconsistency in herself in insisting that things ALWAYS had to fit into categories!

The moral

Categorising things into "vegetables" and "not vegetables" is a useful cognitive short-cut, not a god-given right. Sometimes it doesn't work and you have to think for your fucking self, so sorry about that.

Some people have the bizarre cognitive flaw of thinking that "being disjoint from each other" is the most important feature of "being fruit" and "being vegetables". I do not know why people think that.

I think it's a misuse of generalisation. "The first 350 vegetables I examined were clearly not fruits, therefore I expect all the others won't be" is good reasoning. "Therefore all the others definitely aren't, and I will refuse to change my mind in the face of the evidence, and ridicule and insult people who say otherwise" is TERRIBLE reasoning.

It's an edge case. It doesn't matter HOW you fix it, so long as you recognise that it's questionable, and that it REALLY DOESN'T MATTER. People who go around saying "oh, actually, technically, it's not a vegetable" have made the correct leap that knowledge is good, that many people are wrong about this, and that they ought to be educated. But having achieved -- in their small opinion -- intellectual mastery by designating a tomato a fruit and not a vegetable, they completely fail to consider that maybe there is something more to say.

The important point is not "whether a tomato is a fruit or a vegetable" it's "is a tomato exactly one of a fruit and a vegetable?" or even "does a tomato fit neatly into either of these categories without fudging?" (Hint: it fits "botanical fruit" almost perfectly. It fits "culinary fruit" badly. It fits "vegetable" quite well but not completely.)
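
Putting that hint into numbers, as a toy sketch (the scores are made up, chosen only to match the hint above):

    # How well a tomato fits each category, on an invented 0-to-1 scale.
    fit = {
        "botanical fruit": 0.95,  # grows from the flower's ovary, carries the seeds
        "culinary fruit":  0.20,  # not sweet, not served as dessert
        "vegetable":       0.80,  # savoury, cooked in mains... but see above
    }
    # Note nothing forces exactly one score to 1.0 and the rest to 0.0;
    # demanding that is the cognitive flaw, not a fact about tomatoes.
    for category, score in sorted(fit.items(), key=lambda kv: -kv[1]):
        print(f"{category}: {score:.2f}")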

This sounds irrelevant. Sure, even if people ARE wrong about tomatoes it doesn't matter. But it happens to other words too. Eg. "Is copyright violation theft?" It shares SOME of the characteristics of theft BUT NOT MOST. Thus, it's MORE like "not theft" than "theft" but still doesn't fit perfectly. Hammering it until it fits one pigeonhole or the other makes good soundbites, but doesn't help really answer the question. You unfortunately can't decide if it's wrong solely by how much like theft it is: you have to turn your brain on and decide for yourself if it's wrong.

Even this is not THAT big a deal. But the same problem occurs with questions like "is this murder". Some things are clearly murder. Some things are clearly NOT murder. Some things are SOMEWHAT like murder, and insisting they are or are not will tend to polarise people into "MURDER!" and "NOT MURDER!" even if they may mostly agree on everything else about it.
jack: (Default)
There are many moral dilemmas of the form "given this unpalatable choice, how would you rate the options?" Although I'm blessed with rarely facing such hard choices in real life (facing them does tend to give you a more pragmatic and less idealistic view), I find it interesting to track how my answers change over time.

A typical example would be: "A group of people from your village are hiding from insurgents who will slaughter you all if you're found. You're caring for a patient who has just become delirious; you can't shut them up, and their cries are going to bring the soldiers. Do you keep them as quiet as possible and hope the soldiers somehow overlook the group? Or smother them and hope they somehow survive?"

There are many variations on this theme, for instance:

* Are you in a position of authority over the patient?
* Are you in a position of authority over the group?
* Is the dilemma horrendously contrived? (eg. "do you push person X onto the train track to stop a train full of people going over a cliff")
* Are the outcomes presented as certainties or high probabilities?
* Is the patient a volunteer or a soldier? An adult? A child? A baby?
* Does the patient stand a high chance of survival if you're caught? Any chance?

My responses now would generally be:

1. Death is bad.

Read more... )

Free Will

Dec. 9th, 2010 01:46 pm
jack: (Default)
In a recent discussion about free will, I asked the rhetorical question, "What would a universe WITHOUT free will be like? How would it be different?" I expected that to be unanswerable, but Liv inadvertently did give an answer.

She reminded me of the Benjamin Libet experiments, which seemed to show that some actions we supposed to be of conscious volition were actually instigated unconsciously, prior to any conscious recognition of the decision. Now, there are any number of quibbles over whether the experiments actually show anything of the sort, but that doesn't much matter to the point, because it seems exceptionally plausible that some decisions are made instinctively and have a rational justification tacked on only once the action is under way.

However, it also seems likely that that's not always the case. There are pages of discussion on places like lesswrong.com about when it's best to decide instinctively, and when it's best to make explicit the reasoning process in order that every aspect of the decision is palatable to the conscious mind (even if some of them are subjective).

Liv also made the point that if my subconscious is part of me, I can't complain it's not me making the decision, although I argued that I might still object it wasn't free will if I was deciding for instinctual reasons and not the reasons I thought I was deciding for.

Having imagined a scenario where we don't have free will (in at least some ways) actually made me much more optimistic, and much more willing to use "free will" to describe the situation we're actually in (whereas previously I would have doubted "free will" having any meaning AT ALL, as I couldn't imagine any meaning of "me" other than "what makes my decisions", making the concept rather redundant).

On the other hand, having imagined a scenario where we don't have free will (in at least some ways), I am even less inclined to argue that the inability to delegate the decision to a mythical, spiritual, intangible, unfalsifiable "actual" us, rather than the actual physical us, is a drawback to free will.
jack: (Default)
If someone proposes an abstract argument with apparently impeccable premises and yet a difficult-to-accept conclusion (eg. Pascal's wager), it may be interesting for two reasons.

(1) There may be a serious chance it's true, and you need to decide whether it's valid or not, and whether you need to start accepting the conclusion.

(2) You may be pretty sure the conclusion is false, because you have watertight reasons you trust more than this argument however superficially plausible it is, but you can't understand WHY the argument is wrong, and want to, in order to rebut similar arguments made by yourself and others in the future.

It's normally clear which category something falls into, although we often don't explicitly say so. (1) is probably more important, since it involves ACTUALLY changing your mind about something, but saying "that's false" is not a reason to stop thinking about an argument if you can't explain WHY it's false.

For instance, in a recent discussion on free will, I realised that I was reasonably sure that (a) what choices I make in the future are inherent in the current state of the universe and my brain and (b) in general we should definitely ACT as if we have free will and it will Just Work. That pretty much settles the practical questions in my mind. But the questions of how to deal with what everyone believes about it, and what everyone feels about it, are still very much there.
jack: (Default)
I was recently musing about free will and other philosophical issues, and it occurred to me that I rarely post or comment about stuff I learned or figured out a long time ago and feel fairly settled about: I much more often post or comment about stuff I'm mulling over right now.

Which is natural, because that's what's taking up brainspace, and that's what I'm excited to know, and that's what I need helpful feedback about. But it manifests as rushing into related conversations to explain whatever it was I was just figuring out.

Whereas, when I'm fairly sure of something, I don't feel I have the time to bring it up everywhere it's relevant, even if someone is doing it wrong: I expect someone else to do that. Except when it's something I've made a personal crusade to educate people about.

This, while very natural, and while promoting animated discourse between people with a similar level of understanding, I think contributes to the tendency for people who know nothing, and people who pretty much grok the topic completely, to stay out of discussions -- and for discussions, especially controversial ones, to degenerate into massive bun-fights between people who know just a bit about them.

If I manage to post whatever I'm musing about free will, I imagine I will get few people who've never thought about it to take an interest, and few people who are well read on the subject to educate me, but comparatively many people to chime in with "that's really interesting, I was thinking something similar, but X".
jack: (Default)
Preamble

At some point I came to the realisation that there is a distinction between statements about how the world IS (facts which can be decided by evidence and reason) and statements about how the world should be, about aims and ethics (which can be refined and studied with evidence and reason, but ultimately have to stem from somewhere else).[1]

This is an important distinction because if someone (or you yourself) says "X is clearly wrong", and attempts to justify it solely in terms of observation about facts, then you can be fairly sure that they either made a fallacy or introduced a hidden moral assumption somewhere -- often a FALSE one. It sounds obvious put like that, but people have spent thousands of years making that sort of statement.

Digression

There is a grey area concerning statistical facts and rules of thumb, eg. deciding if economic strategy X will be a good thing for the people of a country. That is, in principle, a factual question, but it may not actually be decidable on the evidence available (either because the evidence is too hard to gather, or it's so unlikely that it's not worth spending the time to analyse it in full), which means people have to take a provisional opinion about the best strategy, which is likely to be a mix of generalisations from the evidence that IS available, and of different priorities in terms of the desirability of the likely results.

However, I don't think that's conceptually a separate category. It's that factual questions are themselves very murky, and we spend most of our time operating with reasonable certainty, not absolute certainty, whether morality has anything to do with it or not.

Realisation

However, it now seems to me there IS an overlap: a set of statements which are partly factual, yet also partly faith/morality based.

For instance, statements that might fall into this category, and I generally agree with:

- Advancing human knowledge is of benefit both for me and for society in general
- Cooperation with other humans is of benefit both for me and for society in general
- Honesty is of benefit both for me and for society in general

And statements that I may not agree with:

- Royalty and nobility have an obligation to rule well
- God exists, and is the source of morality

In all cases I think people would say that, in principle, the statement is subject to factual testing. In principle, we might discover that, say, actually increasing human knowledge makes our lives worse in most ways. However, we have a large emotional commitment to holding to the belief as long as it's at all possible to do so.

This is interesting, because for purely factual questions, I would say it's incorrect to hold to them in the face of convincing evidence (it's often correct to hold to them provisionally while considering the evidence, but not to fight the evidence). People often do this, and I generally consider it the sign of a weak argument (or at least, an argument that needs a lot more work).

To put it in other terms, holding to the belief in the face of evidence fails the objectivity test of asking "what evidence WOULD it take to convince you?" -- as the response is often "I can't imagine anything that would", which is an indication that even if someone SAYS it's a factual belief, they're actually holding it on faith.

However, with the examples I described, I have to admit that I would change my mind faced with OVERWHELMING evidence, but also that I would cling to them as long as possible. And even though it goes against my instincts for "how to be objective", I think that's the right thing to do.

Postamble

Realising I do have beliefs I cling to against evidence makes me much more sympathetic to other people who do so: with my previous belief structure I would basically have dismissed the idea.

On the other hand, there is still the question of how to choose such beliefs. Pure morality beliefs, like "murder is bad" and "harm is bad" I think most people agree on, even if they assign different weights to them. But with both, there is the question of how to decide on them.

To some extent they are plucked straight from our underlying drives. Why is murder bad? Because we instinctively think it is (because evolution or God made us that way, or because society trained us to). But which of these do we accept, and which do we dismiss as spurious? Most people accept harming other humans is bad. Most people accept harming stuffed toys with big eyes produces a negative emotional reaction, but isn't inherently bad. People disagree about humans from other tribes, about animals, about aliens. Is there a way of deciding, or do we have to accept that we were given no objective guide, and must adopt one on faith?

Footnotes

[1] And, um, some fundamental assumptions like "logic works" and "Occam's razor" which are arguably factual, yet also faith-based. I'm not sure how that fits into the system.
jack: (Default)
The quiz at http://www.philosophersnet.com/games/god.php, which several people have linked to (a while ago, and again recently), attempts to measure how consistent your beliefs are about the existence or non-existence of God and some other philosophical questions. Which is a very interesting idea, although obviously most people find the quiz making incorrect assumptions about them at some point during it.

People pointed out its contrast between questions:
If, despite years of trying, no strong evidence or argument has been presented to show that there is a Loch Ness monster, it is rational to believe that such a monster does not exist.

and
As long as there are no compelling arguments or evidence that show that God does not exist, atheism is a matter of faith, not rationality.


I think the intention is to trip up people who think that, in the absence of overt evidence, atheism is a bad assumption but a-loch-ness-monster-ism is a reasonable one, despite their similarities. Or to trip up people who find themselves unable to believe there are (or aren't) compelling arguments against (or for) God (or Nessie), even when the question instructs them to suppose so. Although the quiz undermines this somewhat by describing the absence of evidence in different ways, and by not making it clear whether "no evidence after much trying" is supposed to be a hypothetical assumption or the actual truth -- which invites people to keep some hidden evidence they forgot to discount (depending on whether they're supposed to disagree with the assumption, or imagine it).

However, it occurs to me that a question they COULD have asked after the Loch Ness one, with similar wording, is: do you think it's rational to believe a Loch LOMOND monster doesn't exist? People would probably give the same answer, but I think they would be more certain about the Loch Lomond monster.

That is, even if you're instructed to discount the evidence for the Loch Ness monster, you instinctively put some weight on the argument that "lots of people believe it might be true", even if you know most of them do so for spurious reasons.
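
One toy way to cash that instinct out as a Bayes update -- every number here is invented for illustration: widespread belief is weak evidence, because legends also spring up around nothing, but it isn't zero evidence. And Loch Lomond lacks even that.

    prior = 1e-6                # P(monster) before considering the legend at all
    p_legend_if_monster = 0.9   # a real monster would very likely spawn stories
    p_legend_if_none = 0.05     # ...but stories spring up around nothing, too
    # Bayes' rule, conditioning on "a widespread legend exists":
    posterior = (p_legend_if_monster * prior) / (
        p_legend_if_monster * prior + p_legend_if_none * (1 - prior))
    print(f"{posterior:.1e}")   # ~1.8e-05: eighteen times the prior, still tiny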

Dualism

Apr. 8th, 2010 10:29 am
jack: (Default)
Arguing duality with Rachel's brother.
jack: (Default)
ETA: This could do with some more examples and some more boiling down, but I need to post it and sleep.

"This is wrong" and "this is a bad idea"

Because they sound similar, "this is wrong" and "this is a bad idea" are often confused. They are both things not to do, but I think there is a fundamental difference. "Harming other people is wrong" is a moral judgement. Whereas "too much ice-cream is a bad idea" is a heuristic: given that being unhealthy will be unpleasant or unfair on other people, and given that too much ice-cream is unhealthy, you can conclude that too much ice-cream will be ultimately unpleasant, even if nice in the short term.

The concept of utilitarianism began with (?) the insight that there WAS a difference: that some things are inherently harmful, but others only _usually_ are, and hence make good societal rules of thumb (aka "ethics") but stand to be re-evaluated in individual cases or if society changes[1].

Read more... )
