jack: (Default)
A recent conversation about Defence Against the Dark Arts teachers made me realise I use "evil" in two different ways. Sometimes I mean "doing something bad on purpose". Sometimes I mean "doing harm to other people". The greatest harm is often done by people who are indifferent to it, but people who maliciously hurt others are awful in a special way.

A couple of the professors were very indifferent-evil. They didn't set out to hurt people, but they never noticed any of the awful things they did to them. Others were malicious-evil: they were killing people all over the place.

And of course, it's more complicated than that. Most people who cause harm by inattention SHOULD notice, and exist somewhere on a scale from "I'm 8 and I haven't broken away from the worldview I'm immersed in" to "I'm really really really wilfully ignorant, and I must be actively avoiding thinking about this."

But insofar as it's helpful to be able to think about bad things, it's useful to realise that the two often overlap, and that when I say "very evil" I might mean either of two different things.
jack: (Default)
Scott Alexander made the point that even if two concepts don't have a clearly defined boundary, they can still have a clearly defined difference. Eg. there's no official number of pebbles that make a heap, or height that makes a building "tall", but most people would agree that two pebbles are not a heap, and 50 pebbles are, and that a bungalow isn't tall and that a skyscraper is.

Some concepts do come with a clearly defined boundary, and those are often useful concepts. But many concepts are useful with a clear difference even without a clear boundary.

It occurred to me the same might be said about truth. We may not have an absolute notion of what makes a statement true, but we can still say that some statements are closer to it (eg. mathematical proofs), some are clearly in that direction (eg. well tested scientific theories, things you've had lots of experience of), and some are less close, but still better than nothing.
jack: (Default)
Until recently, I tended to follow an automatically utilitarian approach. After all, people would say, if one action does more good than another, how can it possibly be true that the second is the "right" one?

But it started to bother me, and I started to think maybe you needed both something like utilitarianism and something like virtue ethics.

I imagined using the same language to describe different ways of getting something right that’s somewhat less subjective and emotional than morality, say, the right way to write reliable computer programs.

Some people would say "here is a list of rules, follow them". But this approach sucks when the technology and understanding get better: you keep writing code that saves two bytes when compiled on a 1990s 8-bit home computer, because that was a good idea at the time.
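To make the kind of rule I mean concrete, here's a toy sketch (the function names and the example rule are my own invention, not from any particular style guide) of a once-sensible micro-optimisation, "replace multiplication with shifts", which survives long after the hardware that justified it:

```python
# "Avoid multiplication, use shifts instead" -- vital advice on some
# 1980s/90s 8-bit CPUs, pure obfuscation under a modern compiler.
def scaled_fast(width):
    return (width << 3) + (width << 1)  # width * 10, the "clever" way

def scaled_clear(width):
    return width * 10  # same result, and the intent is obvious

assert scaled_fast(7) == scaled_clear(7) == 70
```

The rule wasn't wrong when it was written; it just stopped paying its way, which is the trouble with any fixed list of rules.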

Other people would say, “choose whichever outcome will make the code more reliable in the long run”. That’s better than “follow these outdated rules”, but doesn't really tell you what to do hour-by-hour.

In fact, the people who do best seem to be those who have general principles they stick to, like “fix the security holes first”, even if it’s impossible to truly estimate the relative pluses and minuses of avoiding an eventual security breach versus adding a necessary new feature now. But they don’t stick to them blindly, and are willing to update or bypass those principles if the opposite is obviously better in some particular situation.

My thought, which I’m not yet very sure of, is that the same applies to morality.
jack: (Default)
Often you get an argument something like this:

Moderator: Should so-and-so be prosecuted for such-and-such act of violence that was sort of self defence but maybe not?
A: Of course! It was murder!
B: No it wasn't!
A: Yes it was!

This is similar to Scott's Worst Argument in the World.

The real question is, "is this something that should be punished under the law, or not". Usually you can say "the law should punish murder" and "we all know what murder is", and get a helpful shortcut to the answer.

However, you don't have a god-given right to have your vocabulary do your thinking for you. People define murder in slightly different ways depending on context: any killing, unpleasant killing, illegal killing, immoral killing, aggressive killing, etc. And people have slightly different ideas of what's ok in self defence. But there's no particular reason to expect that those will correlate on difficult edge cases: in fact, if they're determined basically by chance, they probably won't.

The real question with real consequences is "what should the law do" (or "what should we do")?

If you answer the question of "should we call this murder" it makes it easier to talk about, but doesn't actually tell you anything about the best outcome. However, it's seductively easy for both A and B to assume that if they decide "is this murder" they've answered the only question that matters, hence you may have A and B disagreeing violently about whether it's murder, when in fact they've no idea whether they agree or not about what should be done about it.

I call "is it murder" the Nought Thousand Dollar Question, because it sounds really emotive and important, but actually has nearly zero consequence to the actual debate.
jack: (Default)
Diax's Rake

Diax's Rake says "Don't believe something simply because you want it to be true". It's from Anathem -- I'm not sure if there's a real-world name?

It sounds obvious, but in fact I keep coming across it in contexts where I hadn't realised "believing in something because I wanted to" was what people were doing.

For instance, the most common argument that there's an absolute standard of morality seems to be "But if there weren't, it would be really terrible. Blah blah blah Hitler." But that's an argument for why it would be undesirable to live in a world without one; it offers no reason other than sheer optimism to think that we don't.

But another case seems to be free will. Why do people think we have free will? It seems like the most common argument is "But if we didn't, it would be terrible! Our lives would be pointless, and we wouldn't be able to philosophically justify prison sentences." But again, that seems to be "we WANT to have free will", not "here's a reason to think it's LIKELY we have free will".

Free Will

However, that's somewhat misleading. I feel like at some point society started toying with the idea that "have free will" or "not have free will" made no seriously falsifiable assertions, even in principle.

At which point, some people said "Look, our future actions are basically predetermined by the physics of our minds. 'free will' is basically a meaningless concept."

And others said, "No, wait. Look at what we associate with 'free will': rights, responsibilities, choices, law, etc, etc. We do have all that, and we don't care if it's predetermined or not. I think 'not having free will' is basically a meaningless concept."

And the thing is, THEY'RE BOTH RIGHT. "Free will" being meaningless and "not having free will" being meaningless are exactly the same statement; they just SOUND like they're opposed. They're only somewhat opposed: they agree on how the world works, but disagree on whether "free will" is an appropriate description of it.

And arguing about "should we use this word or not" is almost always pointless, with people regressing to assuming that they're still arguing for the concept they used to associate with the word, without recognising that the other people don't disagree, they're just doing the same thing.

Many people who know more about philosophy than me seem to self-define as compatibilists (the idea that free will and determinism aren't contradictory?). If someone says they're compatibilist, I generally find I completely agree with how they say the universe works. But I don't understand the assertion that free will exists. Is there a basis for that? Or is it just pandering to people who have a really intense intuition that free will is a well-defined concept that exists, at the expense of alienating people who at some point became convinced it doesn't?
jack: (Default)
A standard warning is to avoid basing your actions on something being true simply because you want it to be true, not because you've any reason to think it is. Eg. Everyone agrees that "The ship can't be sinking, that would be terrible" is understandable if there's nothing you can do about it, but catastrophic if you're in charge of planning for lifeboats. But "There must be meaning in the universe, because otherwise it would be terrible" sounds very seductive, but is not conclusive for exactly the same reason!

However, a related but subtly different case is something you want to believe because of the effects of believing it. If believing X makes you or other people act in good ways, you are massively incentivised to want to believe X, and to hope X is true, and even to avoid any doubts about X, even if they're reasonable.

I think this is understandable and possibly wise, even though it leads to a sort of double-think of maintaining the illusion that X, while simultaneously evaluating the truth of X, and trying to investigate alternatives to X, without ever admitting that's what you're doing.

However, I had a very startled moment when I realised that many supposed rationalists (including me) tended to believe, like an article of faith, that it's better to make decisions based on true information. Yes, I think that's in general true. But if hypothetically I provided a celebrated atheist rationalist with evidence convincing in his/her eyes that actually, believing a popular religion WAS much more beneficial to humanity, would they go ahead and change their mind? In almost all cases, probably not.

That sounded understandable, but I realised that it was exactly the same process that many people use to justify a belief in heaven: that true or not, it's desirable that it's true, and then it's beneficial to believe it, so let's not rock the boat. Putting those two next to each other made me suddenly very uncertain.
jack: (Default)
Something I forgot to add in the previous post is that a very similar question applies to time travel.

Most stories[1] dealing with time travel implicitly take a stance on history being deterministic (either a single world-track, or a steady loop in a multiverse of world-tracks), or being changeable.

But the narrative typically only follows one character through one world-track. Precisely because, if you start asking questions like "if you change history, what happens to all the people who were already there" and "can a narrative jump about between world-tracks", your brain will melt with indecision. It's almost the same question as having the ability to make multiple virtual copies: if there are lots, then which one is "the" one the story follows?

[1] Stories being the best proxies we have for "how we would deal with blah in real life"
jack: (Default)
A question oft posed in science fiction and amateur philosophy is what constitutes continuity of my existence. That is, when I'm saying "I want to do blah to achieve blah" what counts as "I" for this purpose?

A typical science-fictional spectrum of options is something like:

1. Myself, 5 seconds from now
2. Myself, after I sleep
3. Myself, 20 years from now
4. If you destructively and exhaustively scanned my body at the atomic level and then reassembled it from new atoms so it functioned the same.
5. If you destructively and exhaustively scanned my body at the atomic (or maybe only neuron) level and then simulated it in a sufficiently accurate simulation in a really really accurate computer.
6. If you found the series of simple encodings of the successive simulation states of #5 embedded in the binary digits of pi[1].

Greg Egan and Schlock Mercenary provide decent examples of several.

jack: (Default)
The other day I drew an analogy which at the time felt very silly, but in retrospect I like a lot more. Many people have attempted to define a rational ethical system of some sort -- preferably one which stems from self-evident premises, but at the least, one which is consistent and based on reasonably plausible premises. I would agree that would be ideal, but given that none of the proposed systems has worked without endorsing massively unworkable or repugnant outcomes, I'm pessimistic there is a perfect system to be found.

It was implicit in this approach that you were attempting to decide an answer to questions of the form "given a choice between A and B (or A and inaction), which one should you choose?" And yes, there are definitely further questions to ask, but telling you how to act is a lot of what an ethical system is.

Then I drew the analogy. When Google first became the pre-eminent search engine, someone commented that the problem they had solved was not finding which pages were relevant to the search terms (which had already yielded considerably to prior human effort), but finding which ten of the potentially relevant pages were actually most useful.

Likewise, there are vast swathes of territory where most popular systems of ethics would agree. And yes, there will be endless fiddly corner cases involving hypothetical moral dilemmas about people tied to trolley-car tracks where people will disagree -- and perhaps always disagree. The question to me is, which problem do we face more in everyday life: choosing, between a few clear alternatives, which solution is right? Or prioritising, out of millions of potential actions which are all good, which is most urgent?

The first is more dramatic. "Should I protect my family, or uphold the law?" But day-to-day the second is probably more common. But seeing it as an ethical dilemma may be actively unhelpful. If so, you're inclined to view any choice solely in terms of the opportunity-cost, discounting the effort spent choosing. Of 3,000,000 good things you might potentially do, if you do the 2000th best, you're WRONG because doing the 1st best would have a greater utility, even if only by a little. And yet, comparing utilities is not always guaranteed to work (?) Several modified ethical systems have tried to weight things to compensate for this.

But if you think of it as a todo-list, it immediately sounds more reasonable. Of these things, which do you want to do first? You obviously don't want to ONLY do things at the bottom of the list. But if you do a few at the top, and a few at the bottom that are convenient to you, and so on, it's hard to argue with that. You've thrown away the internal and external expectation that you will do the perfect thing, freeing you to try to do good things as fast as you're able.
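A toy sketch of the contrast (the actions, utilities and effort scores are all numbers I've invented purely for illustration): the opportunity-cost framing condemns everything except the single best action, while the todo-list framing just takes a couple from the top plus whatever is cheap and convenient further down.

```python
# Hypothetical good actions: name -> (utility, effort). All invented.
actions = {"volunteer": (9, 5), "donate": (8, 2), "recycle": (3, 1),
           "write_to_mp": (6, 1), "plant_tree": (4, 4)}

# Opportunity-cost framing: anything other than the single best is "WRONG".
best = max(actions, key=lambda a: actions[a][0])

# Todo-list framing: a couple from the top, plus low-effort ones further down.
ranked = sorted(actions, key=lambda a: actions[a][0], reverse=True)
plan = ranked[:2] + [a for a in ranked[2:] if actions[a][1] <= 1]
```

Under the first framing, only `best` counts and everything else is a failure; under the second, `plan` is a perfectly respectable day's work.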
jack: (Default)
The story

Hypothetical Hallie was hypothetically a biologist. (She was an "oh, look, nature" biologist rather than an "ooh, atoms" biologist.) She observed the natural world, and lo, she observed that (more or less) animals and plants fell into various species. And that, lo, parts of plants could also be categorised: many plants had similar structures that "came from the same specific part of the flower" and "promulgated seeds" and fulfilled a couple of other criteria. She dubbed these "fruits".

This was extremely useful: if Hallie wanted to make generalisations about the properties of fruits, she didn't necessarily need to deal with each one individually. She could check once that each fruit obeyed each criterion, and thereafter draw large inferences from observing only one criterion, inferring that something was a fruit and no doubt fulfilled the others as well.

However, this rosy simplistic picture did not last very long at all. Quickly, people came to Hallie and pointed out problems. Bacteria came in different sorts, but they didn't really fulfil the criteria to be called "species". "No problem," said Hallie. "We'll divide them roughly into species, but remember there are fuzzy edge cases where a bacterium doesn't really fit." People pointed to the duck-billed platypus. "It's not exactly a mammal, but what else can we call it?" "No problem," said Hallie, "we'll call it a mammal, but remember that there are a few exceptions to the criteria."

However, Naive Nelly was nowhere near as intelligent as Hallie, and rather idolised her. When she saw what she was doing, she also adopted all the clever new words Hallie used. However, she completely failed to notice that they were an (incredibly useful) thinking aid, not an inherent part of the structure of the universe. In fact, she was so stubborn she would often react violently if people tried to persuade her otherwise!

She did the same thing Hallie did, but with the concepts she was familiar with. For instance, she started adding things to the definition of "fruit" such as "succulent flesh good for eating" and "not a vegetable" and so on. And she was right: this was very useful. But she was also wrong, because this usage differed from Hallie's, and eventually the two came into conflict.

Hallie recognised that you could normally reason by saying "This has property X. Therefore it's a fruit. Therefore it's not a vegetable." But that sometimes, something had most but not all of the properties of a fruit, and then what could you do?

Nelly tried stubbornly to ensure that such things were, or were not, in the definition. Hallie tried repeatedly to reason with her. "Look, Nelly," she'd say. "It's just a shortcut. It's nice, but it DOESN'T ALWAYS WORK. When it doesn't work you have to actually turn your brain on and decide FOR YOURSELF whether something is a vegetable. Thinking about fruit DOESN'T HELP YOU."

But poor Nelly had got out of the habit of using her brain, and couldn't. She kept insisting that Hallie pick one, and eventually poor Hallie, fed up with the argument, said "OK, it's much MORE like a fruit than not. In fact, the only criterion for being a fruit it fails is 'not being a vegetable', which was never my criterion in the first place."

But with other concepts, it was much harder to give Nelly a pat answer however much she insisted, and sometimes Hallie would find that if something fit many of the criteria but not some important ones, she would first answer one way, and then forget, and answer the other way.

And poor Nelly would see this as an inconsistency in Hallie, rather than an inconsistency in herself in insisting that things ALWAYS had to fit into categories!

The moral

Categorising things into "vegetables" and "not vegetables" is a useful cognitive short-cut, not a god-given right. Sometimes it doesn't work and you have to think for your fucking self, so sorry about that.

Some people have the bizarre cognitive flaw of believing that "being disjoint from each other" is the most important feature of "being fruit" and "being vegetables". I do not know why people think that.

I think it's a misuse of generalisation. "The first 350 vegetables I examined were clearly not fruits, therefore I expect all the others won't be" is good reasoning. "Therefore none of the others are, and I will refuse to change my mind in the face of the evidence, and ridicule and insult people who say otherwise" is TERRIBLE reasoning.

It's an edge case. It doesn't matter HOW you fix it, so long as you recognise that it's questionable, and that it REALLY DOESN'T MATTER. People who go around saying "oh, actually, technically, it's not a vegetable" have made the correct leap that knowledge is good, and many people are wrong about this, and they ought to be educated. But having achieved -- in their small opinion -- intellectual mastery by designating a tomato to be a fruit, and not a vegetable, they completely fail to think that maybe there was something more to say.

The important point is not "whether a tomato is a fruit or a vegetable" it's "is a tomato exactly one of a fruit and a vegetable?" or even "does a tomato fit neatly into either of these categories without fudging?" (Hint: it fits "botanical fruit" almost perfectly. It fits "culinary fruit" badly. It fits "vegetable" quite well but not completely.)

This sounds irrelevant. Sure, even if people ARE wrong about tomatoes it doesn't matter. But it happens to other words too. Eg. "Is copyright violation theft?" It shares SOME of the characteristics of theft BUT NOT MOST. Thus, it's MORE like "not theft" than "theft" but still doesn't fit perfectly. Hammering it until it fits one pigeonhole or the other makes good soundbites, but doesn't help really answer the question. You unfortunately can't decide if it's wrong solely by how much like theft it is: you have to turn your brain on and decide for yourself if it's wrong.

Even this is not THAT big a deal. But the same problem occurs with questions like "is this murder". Some things are clearly murder. Some things are clearly NOT murder. Some things are SOMEWHAT like murder, and insisting they are or are not will tend to polarise people into "MURDER!" and "NOT MURDER!" even if they may mostly agree on everything else about it.
jack: (Default)
There are many moral dilemmas of the form "given this unpalatable choice, how would you rate the options?" Although I'm blessed with rarely facing such hard choices in real life (facing them does tend to give you a more pragmatic and less idealistic view), I find it interesting to track how my answers change over time.

A typical example would be: "A group of people from your village are hiding from insurgents who will slaughter you all if you're found. You're caring for a patient who has just become delirious; you can't shut him up, and his cries are going to bring the soldiers. Do you keep him as quiet as possible and hope the soldiers somehow overlook the group? Or smother him and hope he somehow survives?"

There are many variations on this theme, for instance:

* Are you in a position of authority over the patient?
* Are you in a position of authority over the group?
* Is the dilemma horrendously contrived? (eg. "do you push person X onto the train track to stop a train full of people going over a cliff")
* Are the outcomes presented as certainties or high probabilities?
* Is the patient a volunteer or a soldier? An adult? A child? A baby?
* Does the patient stand a high chance of survival if you're caught? Any chance?

My responses now would generally be:

1. Death is bad.


Free Will

Dec. 9th, 2010 01:46 pm
jack: (Default)
In a recent discussion about free will, I asked the rhetorical question, "What would a universe WITHOUT free will be like? How would it be different?" I expected that to be unanswerable, but Liv inadvertently did give an answer.

She reminded me of the Benjamin Libet experiments, which seemed to show that some actions which we suppose to be of conscious volition are actually instigated unconsciously, prior to any conscious recognition of the decision. Now, there are any number of quibbles about whether the experiments actually show anything relevant, but that doesn't much matter to the point, because it seems exceptionally plausible that some decisions are made instinctively and have a rational justification tacked on only once they're under way.

However, it also seems likely that that's not always the case. There are pages of discussion on places like lesswrong.com about when it's best to decide instinctively, and when it's best to make explicit the reasoning process in order that every aspect of the decision is palatable to the conscious mind (even if some of them are subjective).

Liv also made the point that if my subconscious is part of me, I can't complain it's not me making the decision, although I argued that I might still object it wasn't free will if I was deciding for instinctual reasons and not the reasons I thought I was deciding for.

Having imagined a scenario where we don't have free will (in at least some ways) actually made me much more optimistic, and much more willing to use "free will" to describe the situation we're actually in (whereas previously I would have doubted "free will" having any meaning AT ALL, as I couldn't imagine any meaning of "me" other than "what makes my decisions", making the concept rather redundant).

On the other hand, having imagined a scenario where we DON'T have free will (in at least some ways), I am even less inclined to argue that the inability to delegate the decision to a mythical, spiritual, intangible, unfalsifiable "actual" us, rather than the actual physical us, is a drawback to free will.
jack: (Default)
If someone proposes an abstract argument with apparently impeccable premises and yet a difficult-to-accept conclusion (eg. Pascal's wager), it may be interesting for two reasons.

(1) There may be a serious chance it's true and you need to decide if it's valid or not, and whether or not you need to start accepting the conclusion.

(2) You may be pretty sure the conclusion is false, because you have watertight reasons you trust more than this argument however superficially plausible, but you can't understand WHY this argument is wrong, and want to do so, in order to rebut similar arguments made by yourself and others in the future.

It's normally clear which category something falls into, although we often don't explicitly say that. (1) is probably more important, if it involves ACTUALLY changing your mind about something, but saying "that's false" is not a reason to stop thinking about it if you can't explain WHY it's false.

For instance, in a recent discussion on free will, I realised that I was reasonably sure that (a) what choices I make in the future are inherent in the current state of the universe and my brain and (b) in general we should definitely ACT as if we have free will and it will Just Work. That pretty much settles the practical questions in my mind. But the questions of how to deal with what everyone believes about it, and what everyone feels about it, are still very much there.
jack: (Default)
I was recently musing about free will and other philosophical issues, and it occurred to me that I rarely post or comment about stuff I learned or figured out a long time ago and feel fairly settled about: I much more often post or comment about stuff I'm mulling over right now.

Which is natural, because that's what's taking up brainspace, and that's what I'm excited to know, and that's what I need helpful feedback about. But it manifests as rushing into related conversations to explain whatever it was I was just figuring out.

Whereas, when I'm fairly sure of something, I don't feel I have the time to bring it up everywhere it's relevant, even if someone is doing it wrong: I expect someone else to do that. Except when it's something I've made a personal crusade to educate people about.

This, while very natural and promoting animated discourse between people with a similar level of understanding, I think contributes to the tendency for people who know nothing, and people who pretty much grok the topic completely, to stay out of discussions, leaving discussions, especially controversial ones, to degenerate into massive bun-fights between people who know just a bit about them.

If I manage to post whatever I'm musing about free will, I imagine I will get few people who've never thought about it to take an interest, and few people who are well read on the subject to educate me, but comparatively many people to chime in with "that's really interesting, I was thinking something similar, but X".
jack: (Default)

At some point I came to the realisation that there is a distinction between statements about how the world IS (facts which can be decided by evidence and reason) and statements about how the world should be, about aims and ethics, (which can be refined and studied with evidence and reason, but ultimately have to stem from somewhere else).[1]

This is an important distinction because if someone (or you yourself) says "X is clearly wrong", and attempts to justify it solely in terms of observation about facts, then you can be fairly sure that they either made a fallacy or introduced a hidden moral assumption somewhere -- often a FALSE one. It sounds obvious put like that, but people have spent thousands of years making that sort of statement.


There is a grey area concerning statistical facts and rules of thumb, eg. deciding if economic strategy X will be a good thing for the people of a country. That is, in principle, a factual question, but it may not actually be decidable on the evidence available (either because the evidence is too hard to gather, or it's so unlikely that it's not worth spending the time to analyse it in full), which means people have to take a provisional opinion about the best strategy, which is likely to be a mix of generalisations from the evidence that IS available, and of different priorities in terms of the desirability of the likely results.

However, I don't think that's conceptually a separate category. It's that factual questions are themselves very murky, and we spend most of our time operating with reasonable certainty, not absolute certainty, whether morality has anything to do with it or not.


However, it now seems to me there IS an overlap: a set of statements which are partly factual, yet also partly faith/morality based.

For instance, statements that might fall into this category, and I generally agree with:

- Advancing human knowledge is of benefit both for me and for society in general
- Cooperation with other humans is of benefit both for me and for society in general
- Honesty is of benefit both for me and for society in general

And statements that I may not agree with:

- Royalty and nobility have an obligation to rule well
- God exists, and is the source of morality

In all cases I think people would say that, in principle, the statement is subject to factual testing. In principle, we might discover that, say, actually increasing human knowledge makes our lives worse in most ways. However, we have a large emotional commitment to holding to the belief as long as it's at all possible to do so.

This is interesting, because for purely factual questions, I would say it's incorrect to hold to them in the face of convincing evidence (it's often correct to hold to them provisionally while considering the evidence, but not to fight the evidence). People often do this, and I generally consider it the sign of a weak argument (or at least, an argument that needs a lot more work).

To put it in other terms, holding to a belief in the face of evidence fails the objectivity test of asking "what evidence WOULD it take to convince you?", as the response is often "I can't imagine anything that would", which is an indication that even if someone SAYS it's a factual belief, they're actually holding it on faith.

However, with the examples I described, I do have to admit I would change my mind faced with OVERWHELMING evidence, but also that I would cling to them as long as possible. And, even though it goes against my instincts for "how to be objective", I think that's the right thing to do.


Realising I do have beliefs I cling to against evidence makes me much more sympathetic to other people who do so: with my previous belief structure I would basically have dismissed the idea.

On the other hand, there is still the question of how to choose such beliefs. Pure morality beliefs, like "murder is bad" and "harm is bad" I think most people agree on, even if they assign different weights to them. But with both, there is the question of how to decide on them.

To some extent they are plucked straight from our underlying drives. Why is murder bad? Because we instinctively think it is (because evolution or God made us that way, or because society trained us to). But which of these do we accept, and which do we dismiss as spurious? Most people accept harming other humans is bad. Most people accept harming stuffed toys with big eyes produces a negative emotional reaction, but isn't inherently bad. People disagree about humans from other tribes, about animals, about aliens. Is there a way of deciding, or do we have to accept that we were given no objective guide, and must adopt one on faith?


[1] And, um, some fundamental assumptions like "logic works" and "Occam's razor" which are arguably factual, yet also faith-based. I'm not sure how that fits into the system.
jack: (Default)
The quiz http://www.philosophersnet.com/games/god.php, which several people have linked to both a while ago and recently, attempts to measure how consistent your beliefs about the existence or non-existence of God (and some other philosophical questions) are. Which is a very interesting idea, although obviously most people find the quiz making incorrect assumptions about them at some point during it.

People pointed out its contrast between questions:
If, despite years of trying, no strong evidence or argument has been presented to show that there is a Loch Ness monster, it is rational to believe that such a monster does not exist.

As long as there are no compelling arguments or evidence that show that God does not exist, atheism is a matter of faith, not rationality.

I think the intention is to trip up people who think that in the absence of overt evidence, atheism is a bad assumption but a-loch-ness-monster-ism is a reasonable one, despite their similarities. Or to trip up people who find themselves unable to believe there are (or aren't) compelling arguments against (or for) God (or Nessie), even when the question instructs them to assume so. Although it undermines this somewhat by describing the absence of evidence in different ways, and by not making it clear whether "no evidence after much trying" is supposed to be a hypothetical assumption or the truth, which invites people to keep some hidden evidence they forgot to discount (depending on whether they're supposed to disagree with the assumption, or imagine it).

However, it occurs to me that a question they COULD have asked after the Loch Ness one, with similar wording, is: do you think it's rational to believe a loch LOMOND monster doesn't exist? People would probably give the same answer to both, but I think they would be more certain about the Loch Lomond monster.

That is, even if you're instructed to discount the evidence for the Loch Ness monster, you instinctively put some weight on the argument that "lots of people believe it might be true", even if you know most of them do so for spurious reasons.
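To make that instinct concrete, here's a toy Bayesian sketch of why "lots of people believe it" should move the needle a little for Loch Ness but not for Loch Lomond. All the numbers (the prior, the likelihood ratios) are made up purely for illustration:

```python
def posterior(prior, likelihood_ratio):
    """Update a probability given evidence with the stated likelihood ratio."""
    # Convert probability to odds, apply the likelihood ratio,
    # then convert back to a probability.
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

prior = 1e-6  # tiny made-up prior for a monster in any given loch

# "Many people report believing in a monster here" is somewhat more
# likely if a monster exists than if it doesn't, but only somewhat,
# since folklore generates believers either way.
ness = posterior(prior, likelihood_ratio=10)   # widespread belief
lomond = posterior(prior, likelihood_ratio=1)  # no such folklore

print(ness, lomond)  # Ness comes out slightly higher; both stay tiny
```

The point of the sketch is just that widespread belief is weak evidence, not zero evidence, which matches the intuition that you'd be a bit more certain about Loch Lomond.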


Apr. 8th, 2010 10:29 am
jack: (Default)
Arguing duality with Rachel's brother.
jack: (Default)
ETA: This could do with some more examples and some more boiling down, but I need to post it and sleep.

"This is wrong" and "this is a bad idea"

Because they sound similar, "this is wrong" and "this is a bad idea" are often confused. Both are things not to do, but I think there is a fundamental difference. "Harming other people is wrong" is a moral judgement. Whereas "too much ice-cream is a bad idea" is a heuristic: given that being unhealthy will ultimately be unpleasant (or unfair on other people), and that too much ice-cream is unhealthy, you can conclude that too much ice-cream will be ultimately unpleasant, even if nice in the short term.

The concept of utilitarianism began with (?) the insight that there WAS a difference: that some things are inherently harmful, while others _usually_ are, and hence make good societal rules of thumb (aka "ethics") but stand to be re-evaluated in individual cases or if society changes[1].

Read more... )
jack: (fic)
I was just in the process of writing a philosophy-heavy post, and realised one of the problems I had: given the number of people in the world at all, and the number of comparatively educated ones on the internet, anything you say is extremely likely to have been said before, most probably somewhere a simple google search could turn it up; and things relating to popular or long-studied topics, ten times more so.

With some topics, like "wow, scene X in that film was awesome" or "wow, love really DOES feel wonderful/hurt" we accept that many people probably have similar ideas, but that's ok.

Read more... )

So, when an interesting idea occurs to me, I don't know whether to muse about it publicly (which is interesting to some people and tedious to others) or commit either to studying philosophy seriously, or to not bothering.

ETA: Do you feel like that?
jack: (Default)

Newcomb's paradox on Wikipedia
Newcomb's paradox on Overcoming Bias blog
Newcomb's paradox on Scott Aaronson's blog and lectures

I first came across it via Overcoming Bias, and discussed it with a few people, but then recently saw it again in one of the transcriptions of Scott Aaronson's philosophy/quantum/computing lectures.

Newcomb's paradox

In very short, Newcomb's paradox goes like this: suppose you're a professor, and a grad student (or, in some versions, a superintelligent alien) comes to you and demonstrates this experiment. She chooses a volunteer, examines them, then takes two boxes, puts £1000 in box A, and puts either £1000000 or nothing in box B (see below for how she decides). She brings the boxes into the room, explains the set-up to the volunteer, and says that they're allowed either to take the mystery box B alone (in which case they get either lots or nothing) or to take both boxes (in which case they get at least £1000).

She even lets them see the £1000000 beforehand so they know it exists, and lets them peek into box A to show it does have the money in it, though box B's contents remain a secret until afterwards.

Choice     | In box A | In box B | Total obtained
Both boxes | £1000    | £0       | £1000
Both boxes | £1000    | £1000000 | £1001000
B only     | £1000    | £0       | £0
B only     | £1000    | £1000000 | £1000000

"What's the catch?" the volunteer asks. "Ah," begins the experimenter. "I have previously examined you, and worked out which choice you're going to make. If you were going to choose both boxes, I put nothing in box B. Only if you were going to take box B only did I choose to put £1000000 in it."

"Hm," says the volunteer. "What do I do?"
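One way to make the tension concrete is a quick expected-value sketch. Suppose (hypothetically, this parameter isn't part of the original problem statement) the experimenter's prediction is right with probability p. Then the naive expected payoffs are:

```python
# Expected payoffs in Newcomb's problem, assuming a predictor who is
# right with probability p. Whether this expected-value calculation is
# even the right way to decide is exactly what the paradox disputes.

def expected_one_box(p):
    # If you take box B only, the predictor foresaw it with
    # probability p, so box B holds the £1000000 that often.
    return p * 1_000_000

def expected_two_box(p):
    # You always get the £1000 in box A; box B holds the £1000000
    # only when the predictor was wrong (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.999):
    print(p, expected_one_box(p), expected_two_box(p))
```

Even a modestly accurate predictor (p just above about 0.5) makes one-boxing come out ahead on this calculation, while the dominance argument ("whatever is in the boxes is already fixed, so taking both can't hurt") pulls the other way.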

Read more... )