[personal profile] jack
Diax's Rake

Diax's Rake says "Don't believe something simply because you want it to be true". It's from Anathem -- I'm not sure if there's a real-world name?

It sounds obvious, but in fact I keep coming across it in contexts where I hadn't realised "believing in something because I wanted to" was what people were doing.

For instance, the most common argument that there's an absolute standard of morality seems to be "But if we didn't have one, it would be really terrible. Blah blah blah Hitler." But that's an argument for why it would be undesirable to live in a world without an absolute standard; it offers no reason other than sheer optimism to think we actually live in a world that has one.

But another case is free will. Why do people think we have free will? The most common argument seems to be "But if we didn't, it would be terrible! Our lives would be pointless, and we wouldn't be able to philosophically justify prison sentences." But again, that amounts to "we WANT to have free will", not "here's a reason to think it's LIKELY we have free will".

Free Will

However, that's somewhat misleading. I feel like at some point society started toying with the idea that neither "we have free will" nor "we don't have free will" makes any seriously falsifiable assertion, even in principle.

At which point, some people said "Look, our future actions are basically predetermined by the physics of our minds. 'free will' is basically a meaningless concept."

And others said, "No, wait. Look at what we associate with 'free will': rights, responsibilities, choices, law, etc, etc. We do have all of that, and we don't care whether it's predetermined or not. I think 'not having free will' is basically a meaningless concept."

And the thing is, THEY'RE BOTH RIGHT. "Free will" being meaningless and "not having free will" being meaningless are essentially the same statement; they just SOUND like they're opposed. They are only opposed in one narrow sense: both sides agree on how the world works, but disagree about whether "free will" is an appropriate description of it.

And arguing about "should we use this word or not" is almost always pointless, with people slipping back into assuming that they're still arguing for the concept they used to associate with the word, without recognising that the other people don't actually disagree; they're just doing the same thing.

Many people who know more about philosophy than me seem to self-define as compatibilists (the idea that free will and determinism aren't contradictory?). If someone says they're a compatibilist, I generally find I completely agree with how they say the universe works. But I don't understand the assertion that free will exists. Is there a basis for that? Or is it just pandering to people who have a really intense intuition that free will is a well-defined concept that exists, at the expense of alienating people who at some point became convinced it doesn't?

Date: 2012-06-19 08:14 am (UTC)
From: [personal profile] ptc24
I think there's a useful analogy (maybe it's more literal than that) with NP-completeness here. It may only be a coincidence that the N stands for "nondeterministic".

I think there are some good diagonalisation-style arguments for us having limited abilities to predict ourselves and our peers[1]; to imagine such prediction while preserving determinism, you'd have to imagine some implausible coercive force that prevented us from acting on our predictions; like people in a consistent-history time-travel story who have seen the future and can't quite bring themselves to change it for some unfathomable reason. We can't transcend those limits on prediction by adding more computational power, because the extra computational power would make us harder to predict.

[1] If you could predict your peers, you could predict them predicting you.
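
A minimal sketch of that diagonalisation point, in Python (my own illustration, not from the comment): suppose some predictor claims to forecast what an agent will do. An agent that consults the predictor about itself and then does the opposite defeats the prediction by construction, and giving the predictor more computational power doesn't help, because a more powerful predictor is just another thing the agent can consult and contradict. The names `make_contrarian` and `naive_predict` are invented for the example.

```python
# Toy diagonalisation: no predictor can be right about an agent
# that consults the predictor and then deliberately defies it.

def make_contrarian(predict):
    """Return an agent that asks the predictor about itself, then does the opposite."""
    def agent():
        forecast = predict(agent)                    # "what will I do?"
        return "stay" if forecast == "go" else "go"  # ...then do the other thing
    return agent

def naive_predict(agent):
    # Any candidate predictor fails the same way, however clever;
    # here it just makes a fixed guess.
    return "go"

contrarian = make_contrarian(naive_predict)
print(naive_predict(contrarian))  # "go"
print(contrarian())               # "stay" -- the prediction is self-defeating
```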

Date: 2012-06-19 10:46 am (UTC)
From: [personal profile] ptc24
It seems that interaction is required, too, to remove free will, if you try defining it this way. If they can predict you, they can manipulate you; if they're rational beings (according to some concepts of rationality) it's very hard for them not to manipulate you.

There might conceivably be free-will-preserving ethical codes for agents with strong knowledge, letting them interact with you without destroying your free will.

I once came up with this idea of playing Go vs God, and playing vs the Devil (apparently in the Go scene it's the done thing to think about divine opponents, although possibly with a less Judeo-Christian concept of the divine). In the analogy, God will treat you with courtesy, playing as if you were a perfect being who could implement the minimax algorithm to the end of the game. The Devil, OTOH, would predict what you would actually do, and play trick moves in order to take advantage of your weaknesses. You'd need a stronger handicap against the Devil than against God.
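
A toy sketch of that God/Devil distinction, using a made-up two-move game rather than Go (the game tree, the payoff numbers, and the blunder-prone reply policy are all invented for illustration; only the minimax idea comes from the comment): "God" picks the minimax move, assuming you will reply perfectly, while "the Devil" picks whatever scores best against a model of how you actually reply.

```python
# Opponent's moves -> your possible replies -> payoff to the opponent.
GAME = {
    "solid_move": {"best_reply": 1, "blunder": 2},
    "trick_move": {"best_reply": 0, "blunder": 5},  # refuted by perfect play
}

def your_perfect_reply(replies):
    # A perfect player minimises the opponent's payoff.
    return min(replies, key=replies.get)

def your_actual_reply(replies):
    # A flawed (modelled) player falls for the blunder when one is available.
    return "blunder" if "blunder" in replies else your_perfect_reply(replies)

def god_move():
    # Minimax: assume the reply that is worst for the opponent.
    return max(GAME, key=lambda m: GAME[m][your_perfect_reply(GAME[m])])

def devil_move():
    # Exploitative: assume the reply the modelled opponent would actually make.
    return max(GAME, key=lambda m: GAME[m][your_actual_reply(GAME[m])])

print(god_move())    # "solid_move": the trick move is refuted by perfect play
print(devil_move())  # "trick_move": it pays off against the modelled blunder
```

The trick move is worthless against perfect play but scores highest against the modelled blunder, which is why it only shows up in the Devil's repertoire.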