jack: (Default)
[personal profile] jack
Akrasia

Is there a difference between "what I do" and "what I want to do"?

In fact, it looks a bit like a paradox. There's a very real sense in which what someone acts on is a better meaning for "what they want" than what they SAY they want. But also, we're all familiar with wanting to break a habit, and yet apparently being unable to do so.

There is a Greek word, "Akrasia" (http://en.wikipedia.org/wiki/Akrasia), meaning roughly "acting against your own best interests", where "best interests" is a bit subjective but we get the general idea. The concept has been adopted by many rationalist devotees/self-improvers (http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/).

The idea is, there IS a difference between what we want immediately and what we want longer term. It may be unfair to call long-term wants what we "really" want, and there's still a difference between "what we want" and "what would be most likely to make us happy if we got it", but long-term wants can be just as valid as immediate ones.

For instance, someone who really wants a cigarette, but really wants to give up smoking, may be in the position of choosing between immediate and longer-term wants.

When we talk about someone having will-power, or someone being logical, what we really mean is someone who can weigh their immediate and long-term wants objectively, without automatically following emotions/instincts. (When we talk about someone who is OVER-logical, we often mean someone who discounts their immediate pleasure too much.)

Is that an apt description of the difference?

Morality

Is there a difference between "a moral action" and "an action I want someone to do", without an objective standard of morality? I know people are prone to see a difference even when it isn't there, which makes me suspicious of anything I might suggest, but it's sensible to think about any proposals and not dismiss them out of hand. It may not be something other than what I want, but might it be a different type of what I want?

If we have a distinction between "wants for now-me" and "wants for future-me" I wonder if we could draw a similar distinction between "wants for me" and "wants for everyone else".

That is, is there a recognisable difference between "what I would enjoy" and "what I would like because it would make someone I like happy" and "what I feel I should do because someone would do it for me" and "what I should do for someone else because it's the right thing, even if no-one else thinks so", even if you can only infer what's going on in someone else's head?

I think there is, that people recognise a difference between "what they should do" and "what they'd like to do", and what they DO do is governed at a particular moment by where they currently fall on a scale between thinking "of course I'll do what I should do" and thinking "I'm overdue for something just for me". However, I'm not sure if I can actually test that or if it's just speculation.

With little indiscretions, I think people do see a difference between "I know it's against the law, but I think it's ok" and "I know I shouldn't do this if I had infinite amounts of time and money to fix every world problem however small, but in the real world, there's no realistic way to avoid doing X". And I'm inclined to think that even people who do bigger bad things are probably thinking in the same way: "well, yeah, ideally I wouldn't've killed him/her, but you know, what can you do?" Morality for a person is then something like "those things they think they would do in a magically perfect world where they could", somehow combined with what they prioritise when they put it into practice. But I don't know if that point of view is actually valid for other people or not.

Date: 2012-08-09 11:00 am (UTC)
ptc24: (Default)
From: [personal profile] ptc24
miswanting, over-rationality

There's an interesting experiment where people are asked to choose posters to have; one group just chooses, the other has to give reasons. The people who have to give reasons tend to choose different posters. Furthermore, the choices made by the reason-givers aren't as good; they tend to be less satisfied with their choices later on, are less likely to still have the posters up on the wall, etc.

It's not all to do with time discounting. Also, compare the difference between exponential and hyperbolic discounting (look them up if you're unsure) with the difference between low and high discounting rates.

(Apparently the choice was between a Monet, a Van Gogh, and three different cute/comic photos of cats. Reason-givers tended to choose, then regret, the cats. Non-reason-givers tended to choose the Monet/Van Gogh and be pretty pleased with their choice later.)
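The exponential/hyperbolic distinction mentioned above can be sketched numerically. The point is that exponential discounting is time-consistent (your ranking of two future rewards never flips as they get closer), while hyperbolic discounting produces preference reversals (far off, you prefer the larger-later reward; up close, you grab the smaller-sooner one). The parameter values and reward sizes below are illustrative assumptions, not from the comment:

```python
# Illustrative sketch: exponential vs hyperbolic time discounting.
# delta and k are arbitrary assumed values chosen to make the reversal visible.

def exponential(value, delay, delta=0.9):
    """Time-consistent discounting: value * delta**delay."""
    return value * delta ** delay

def hyperbolic(value, delay, k=0.5):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

# Choice: 10 units sooner vs 15 units three days later,
# viewed from far away (delays 10 vs 13) and then up close (delays 0 vs 3).
for discount in (exponential, hyperbolic):
    far_prefers_later = discount(15, 13) > discount(10, 10)
    near_prefers_later = discount(15, 3) > discount(10, 0)
    print(f"{discount.__name__}: far prefers larger-later={far_prefers_later}, "
          f"near prefers larger-later={near_prefers_later}")
```

With these numbers the exponential discounter prefers the larger-later reward from both vantage points, while the hyperbolic discounter switches to the smaller-sooner reward once it is immediate, which is one standard way of modelling the cigarette-now vs quit-smoking conflict.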

Is there a difference between "a moral action" and "an action I want someone to do", without an objective standard of morality?

Two issues here:

a) What do you mean by "objective" here (or, perhaps more pertinently, what do you mean by "not objective")? Suppose I have a concept of moral conduct; actions in keeping with that conduct could be described as ptc24-moral, actions not in keeping could be described as ptc24-immoral. This classification could hold even for actions that I'm not aware of, or actions that I am aware of, but am not morally evaluating, or not morally evaluating properly. For example, I could do something that was ptc24-immoral without realising, and only realise I was being a hypocrite when it was pointed out to me.

b) The slippery semantics of "I" and "want". If you imagine a person has more than one faculty for goal-directed thought and behaviour, then you could draw the boundaries of "I" and/or "want" to include one/some of those and exclude (an)other(s). Suppose part of me wants to eat a bag of crisps; another part wants to stick to weight-loss plans. I could identify with my hunger, or with my long-term goals, or I could identify "wanting" only with the short-term visceral hunger and not with the long-term "rational" diet desire. And then in some other situation I could re-construe "I" and/or "want" differently, according to the needs of the situation.

Date: 2012-08-09 02:32 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
An interesting exercise is to compare and contrast "moral" and "red". You see, counting as "red" depends on a lot of things to do with the human visual system, there are cultural whassnames with where you draw the boundaries with orange, pink and brown (or if you even have those categories), there are distinctions between red paper, red light, and the sensation of seeing red, there are things which can appear red in some contexts and not in others... It all points towards "red" not being a fundamental part of the universe, and certainly nothing you can get from first principles.

Did dinosaurs have red blood? Are there any red books on my bookshelf (I don't think anyone's at home) right now? Do these questions have answers with proper truth values?

Pedantry-with-added-crazy-hypothetical: suppose there was a unique, universal, derivable morality, suppose that deciding whether something is moral or not involves solving some big intractable NP-complete problem (possibly also knowing the exact position and momentum of every particle in the universe). Then there could be quite a lot of mystery about what counts as "moral".

Date: 2012-08-09 04:04 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
light of frequency in [this range] or [that range]

Oh, it's far more complicated than that, there's intensity, and which other frequencies are also present. Consider grey. Imagine trying to display a grey rectangle, and nothing else, using a data projector pointed at a white surface. Now consider that grey rectangle as part of a standard computer desktop. This is why I mentioned pink and brown.

only singled out as a relevant concept worth paying attention to because we happen to perceive it directly

This is a nice turn of phrase, but I'm a bit concerned about "perceive it directly". Does it matter that the fact that I have red, green and blue cones isn't something that is obvious to me, but that I have to learn it by science? That without science, I don't know that light is the sort of thing to have a wavelength? That various different spectra could produce the same physiological response? That what's available to consciousness, language etc. has been heavily pre-processed? If I stand too close to an exploding atom bomb, and get obliterated just as the signals from the first light of the blast are halfway along my optic nerve, did I perceive the light of the blast or not (well, see, at any rate)? Does it matter that young children don't need to know all this stuff in order to use "red" correctly, that they can learn "red" earlier than the language needed to discuss this stuff?

Still, apart from that, good thought.

Which may be what you meant :)

Well, it wasn't what was originally intended, but I did think between writing and posting, "now I mention it, some things are a bit like that".

Date: 2012-08-10 12:48 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
(a bit of a recap from the pub... maybe it's better to let this drop, maybe it's interesting).

As well as light there are surfaces, and other things. Consider the light being given off by a bit of red card in a dimly-lit room, consider the same bit of card in direct sunlight on a bright day. Other questions - is the blood that's in my heart right now red or black or colourless? Where on your computer screen is it red right now?

You've seen this optical illusion, where two greys that are the same shade light-intensity-wise look different - our visual system seems to be running some heuristic to recover the properties of the surface, not just of the light, and what we experience is in some way a reconstruction of that surface. Of course, what we're really looking at here is a computer screen, which is why I say "heuristic". It's remarkable how people can talk about optical illusions and share their experiences. Also, optical illusions don't disappear when you know about them, and neither can they be willed out of existence; at any rate, not trivially.

Aaanyway... the quibbling over "objective" was to make a point: I think... the category "red" exists because of the nature of subjective experience - and attempting to fully define that category is consequently a very hard job (pointing to it, OTOH, is easy) - it's not something you can just stipulate - but many of the things that fall into that category do so because of mind-independent objective properties in the external world; the way we talk about things, we don't have to experience things as red for them to count as red, I can be wrong about whether something is red or not.

As I said in the pub, I think morals are more complicated than that.

Date: 2012-08-09 02:38 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
choose to define: I think this is less about stipulating definitions, and more about reverse-engineering neural networks. When I say to myself "I don't want to go to work today", I'm not thinking about definitions; that only happens when I'm trying to analyse that thought.

Date: 2012-08-09 11:28 am (UTC)
From: [identity profile] eudoxiafriday.wordpress.com
This reminds me of Paul writing in Romans 7:15 ("I do not understand what I do. For what I want to do I do not do, but what I hate I do"). It also reminds me of diet books, specifically "French Women don't get fat", which posits that one has an inner person who wants tasty things NOW and an inner person who wants long-term slimness and the key is to make friends of these two people (e.g. by realising that novelty also makes the short-term you happy, and that there are lots of tasty things and you can have a variety of them and some of them are chocolate and some of them are blueberries).

Developing this theme, I think that there certainly *can* be a difference between wants for now-me and wants for future-me, but that with deliberate intent these two can converge more and more, and furthermore that this convergence is desirable because it minimises the waste of energy spent being conflicted.

For example, suppose I have a work email I don't want to send, not so urgent that I couldn't put it off for a couple of days, but really something I know I ought to get on with.

I don't want to write/send it, but simultaneously, I (both now-me and future-me) want to be the sort of person who gets on with things and bloody well does it and gets it over with. I (now-me) will then do a quick check of anything else that's relevant (e.g. am I in a reasonable frame of mind to send it? if I have just had a massive row with someone, I should probably avoid it in case that leaks through) but assuming that passes I will get on with it.

This makes now-me simultaneously unhappy (because it's unpleasant) and happy (because I am showing that I am being the sort of person I want to be), and makes future-me doubly happy (because it's over and done, and I have been the sort of person I want to be). This is much better than putting it off which makes now-me a little unhappy (because I'm not doing it, which is neutral as it's not actively pleasant, and I'm not being the sort of person I want to be which makes me unhappy) and leaves future-me in the position of still having to deal with it later.

So I guess my argument is that although in some sense there's a difference between one's immediate wants/instincts/emotions and one's future wants, these can be brought to alignment through beliefs about identity and self? Or something along those lines ... after all, future-me must bear some resemblance to now-me, or they wouldn't both be me.