Akrasia and Morality
Aug. 9th, 2012 10:19 am

Akrasia
Is there a difference between "what I do" and "what I want to do"?
In fact, it looks a bit like a paradox. There's a very real sense in which what someone acts on is a better meaning for "what they want" than what they SAY they want. But we're also all familiar with wanting to break a habit, and yet apparently being unable to do so.
There is a Greek word, "akrasia" (http://en.wikipedia.org/wiki/Akrasia), meaning roughly "acting against your own best interests", where "best interests" is a bit subjective but we get the general idea. The concept has been adopted by many rationalist devotees and self-improvers (http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/).
The idea is, there IS a difference between what we want immediately and what we want longer term. It may be unfair to call long-term wants what we "really" want, and there's still a difference between "what we want" and "what would be most likely to make us happy if we got it", but long-term wants can be just as valid as immediate ones.
For instance, someone who really wants a cigarette, but really wants to give up smoking, may be in the position of choosing between immediate and longer-term wants.
When we talk about someone having will-power, or someone being logical, what we really mean is someone who can weigh their immediate and long-term wants objectively, without automatically following emotions/instincts. (When we talk about someone who is OVER logical, we often mean someone who discounts their immediate pleasure too much.)
Is that an apt description of the difference?
Morality
Is there a difference between "a moral action" and "an action I want someone to do", without an objective standard of morality? I know people are prone to see a difference even when it isn't there, which makes me suspicious of anything I might suggest, but it's sensible to think about any proposals and not dismiss them out of hand. It may not be something other than what I want, but might it be a different type of what I want?
If we have a distinction between "wants for now-me" and "wants for future-me" I wonder if we could draw a similar distinction between "wants for me" and "wants for everyone else".
That is, is there a recognisable difference between "what I would enjoy" and "what I would like because it would make someone I like happy" and "what I feel I should do because someone would do it for me" and "what I should do for someone else because it's the right thing, even if no-one else thinks so", even if you can only infer what's going on in someone else's head?
I think there is, that people recognise a difference between "what they should do" and "what they'd like to do", and what they DO do is governed at a particular moment by where they currently fall on a scale between thinking "of course I'll do what I should do" and thinking "I'm overdue for something just for me". However, I'm not sure if I can actually test that or if it's just speculation.
With little indiscretions, I think people do see a difference between "I know it's against the law, but I think it's ok" and "I know I shouldn't do this if I had infinite amounts of time and money to fix every world problem however small, but in the real world, there's no realistic way to avoid doing X". And I'm inclined to think that even people who do bigger bad things are probably thinking in the same way: "well, yeah, ideally I wouldn't've killed him/her, but you know, what can you do?" So morality for a person is something like "those things they think they would do in a magically perfect world where they could", somehow combined with what they prioritise when they put it into practice. But I don't know if that point of view is actually valid for other people or not.
no subject
Date: 2012-08-09 11:00 am (UTC)
There's an interesting experiment where people are asked to choose posters to take home; one group just chooses, the other has to give reasons. The people who have to give reasons tend to choose different posters. Furthermore, the choices made by the reason-givers aren't as good; they tend to be less satisfied with their choices later on, are less likely to still have the posters up on the wall, etc.
It's not all to do with time discounting. Also: compare the difference between exponential and hyperbolic discounting (look them up if you're unsure) with the difference between low and high discounting rates.
(Apparently the choice was between a Monet, a Van Gogh, and three different cute/comic photos of cats. Reason-givers tended to choose, then regret, the cats. Non-reason-givers tended to choose the Monet/Van Gogh and be pretty pleased with their choice later)
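The contrast between exponential and hyperbolic discounting mentioned above can be made concrete with a small numerical sketch. The amounts, delays, and discount parameters below are purely illustrative assumptions, not taken from any experiment:

```python
def exponential(amount, delay, delta=0.9):
    # Exponential discounting: value shrinks by a constant factor per unit of delay.
    return amount * delta ** delay

def hyperbolic(amount, delay, k=1.0):
    # Hyperbolic discounting: value drops steeply for short delays, then flattens out.
    return amount / (1 + k * delay)

# Choice: 50 units after delay d, or 100 units after delay d + 10.
for d in (1, 100):
    prefers_sooner_exp = exponential(50, d) > exponential(100, d + 10)
    prefers_sooner_hyp = hyperbolic(50, d) > hyperbolic(100, d + 10)
    print(d, prefers_sooner_exp, prefers_sooner_hyp)
```

Under exponential discounting the ratio between the two options never changes, so the choice is the same whether both rewards are near or far. Under hyperbolic discounting the smaller-sooner reward wins when it is close but the larger-later one wins from a distance, which is one way of modelling wanting the cigarette now while sincerely preferring to quit.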
Is there a difference between "a moral action" and "an action I want someone to do", without an objective standard of morality?
Two issues here:
a) What do you mean by "objective" here (or, perhaps more pertinently, what do you mean by "not objective")? Suppose I have a concept of moral conduct; actions in keeping with that conduct could be described as ptc24-moral, actions not in keeping could be described as ptc24-immoral. This classification could hold even for actions that I'm not aware of, or actions that I am aware of, but am not morally evaluating, or not morally evaluating properly. For example, I could do something that was ptc24-immoral without realising, and only realise I was being a hypocrite when it was pointed out to me.
b) The slippery semantics of "I" and "want". If you imagine a person has more than one faculty for goal-directed thought and behaviour, then you could draw the boundaries of "I" and/or "want" to include some of those faculties and exclude others. Suppose part of me wants to eat a bag of crisps, while another part wants to stick to my weight-loss plans. I could identify with my hunger, or with my long-term goals, or I could identify "wanting" only with the short-term visceral hunger and not with the long-term "rational" diet desire. And then in some other situation I could re-construe "I" and/or "want" differently, according to the needs of the situation.
no subject
Date: 2012-08-09 01:35 pm (UTC)
Possibly "objective" is the wrong word. I mean, if you think there's a unique, universal morality which everyone can deduce from first principles if they try hard enough, then there's no mystery about what counts as "moral".
I am attempting to address myself and the majority of my friends who DON'T currently believe that.
There are other meanings of "objective" (eg. can I codify what I think is moral so someone else could judge it, even if they didn't agree) that are relevant, but not what I meant there.
no subject
Date: 2012-08-09 02:32 pm (UTC)
Did dinosaurs have red blood? Are there any red books on my bookshelf (I don't think anyone's at home) right now? Do these questions have answers with proper truth values?
Pedantry-with-added-crazy-hypothetical: suppose there were a unique, universal, derivable morality, and suppose that deciding whether something is moral or not involved solving some big intractable NP-complete problem (possibly also requiring the exact position and momentum of every particle in the universe). Then there could still be quite a lot of mystery about what counts as "moral".
no subject
Date: 2012-08-09 03:23 pm (UTC)
I agree there are concepts like "light of frequency in [this range] or [that range]" which can be determined independently of human perception, but are only singled out as a relevant concept worth paying attention to because we happen to perceive it directly.
suppose that deciding whether something is moral or not involves solving some big intractable NP-complete problem
To some extent that is what we DO have, assuming people agree more-or-less on principles like "people having jobs, income, food, shelter, etc. is good" but disagree about which economic policies will bring that about. Which may be what you meant :)
no subject
Date: 2012-08-09 04:04 pm (UTC)
Oh, it's far more complicated than that: there's intensity, and which other frequencies are also present. Consider grey. Imagine trying to display a grey rectangle, and nothing else, using a data projector pointed at a white surface. Now consider that grey rectangle as part of a standard computer desktop. This is why I mentioned pink and brown.
only singled out as a relevant concept worth paying attention to because we happen to perceive it directly
This is a nice turn of phrase, but I'm a bit concerned about "perceive it directly". Does it matter that the fact that I have red, green and blue cones isn't something that is obvious to me, but something I have to learn from science? That without science, I don't know that light is the sort of thing to have a wavelength? That various different spectra could produce the same physiological response? That what's available to consciousness, language etc. has been heavily pre-processed? If I stand too close to an exploding atom bomb, and get obliterated just as the signals from the first light of the blast are halfway along my optic nerve, did I perceive the light of the blast or not (well, see it, at any rate)? Does it matter that young children don't need to know all this stuff in order to use "red" correctly, and that they can learn "red" earlier than the language needed to discuss this stuff?
Still, apart from that, good thought.
Which may be what you meant :)
Well, it wasn't what was originally intended, but I did think between writing and posting, "now I mention it, some things are a bit like that".
no subject
Date: 2012-08-09 04:38 pm (UTC)
OK, I'm sorry, I assumed that we both shared an understanding of what sort of complexities would go into a definition of "red" and that I could give the general idea in half a sentence without spending fifty paragraphs explaining the intricacies with footnotes. But apparently that totally backfired.
It is my contention that which exact parts of colourspace count as "red" is a fuzzy language issue, like what exactly counts as "table", but that the mapping from the physical state of the photons to the colourspace, while complicated, and arbitrary to anyone who didn't evolve the same way we did, is in principle deterministic.
I apologise for trivialising the mapping: I agree it's complicated and you need to know more optics than me to specify it exactly. But do you agree that it is in principle deterministic even if arbitrary?
This is a nice turn of phrase
Thank you.
but I'm a bit concerned about "perceive it directly".
Good question. I think I made the mistake of conflating the sensation of "red" with the type of photons that are described as "red".
It seems like there are three parts (so far) to the question of what counts as "red":
1. The inherent ambiguity in language, the same as "does this count as a table".
2. The unusual complexity in what sorts of collections of photons humans mostly count as red (more complicated than "which waves in air count as sound").
3. Variation in what causes humans to have a sensation similar to the one they have when they perceive photons as in #2, e.g. people with synaesthesia may see "red" at other times.
It is common to use "red" to mean either "the sensation, as in #3" or "collections of photons which most humans agree produce a consistent sort of sensation, as in #2" (even if they can't know whether they all experience the SAME sensation).
The sensation sense (#3) is only really meaningful to one person; we don't yet have any way [1] of comparing sensations between people, let alone saying "if a human were looking at a red dinosaur, would they experience the same sensation as when looking at a red something else?" (Probable, but untestable.)
However, in the second sense, it seems equally reasonable to say "the dinosaur is red" as "the dinosaur made a sound", so long as the photons or air molecules are of the sort that humans have labelled, even if there are no humans around to do the labelling.
The examples you gave led me to think you were talking about the second sort, where colour is an especially prominent example.
I apologise for "perceive it directly"; perhaps "perceive" is a better description. I agree that the conversation is meaningless if you drag in the third sort of distinction. I only intended to say, different humans tend to share a response of "these are the same colour" and "these are not" that causes them to identify the concept "red", while not identifying other combinations of bands of colour.
[1] With possible exceptions such as telling a blind person "the blue wall looks a bit like a mountain stream feels, the red wall looks a bit like direct sunshine feels", etc. I think those are probably cultural, but there may be some universals in, maybe...
no subject
Date: 2012-08-10 12:48 pm (UTC)
As well as light there are surfaces, and other things. Consider the light being given off by a bit of red card in a dimly-lit room; consider the same bit of card in direct sunlight on a bright day. Other questions: is the blood that's in my heart right now red or black or colourless? Where on your computer screen is it red right now?
You've seen this optical illusion, where two greys that are the same shade light-intensity-wise look different: our visual system seems to be running some heuristic to recover the properties of the surface, not just of the light, and what we experience is in some way a reconstruction of that surface. Of course, what we're really looking at here is a computer screen, which is why I say "heuristic". It's remarkable how people can talk about optical illusions and share their experiences. Also, optical illusions don't disappear when you know about them, and neither can they be willed out of existence; at any rate, not trivially.
Aaanyway... the quibbling over "objective" was to make a point. I think the category "red" exists because of the nature of subjective experience, and attempting to fully define that category is consequently a very hard job (pointing to it, OTOH, is easy); it's not something you can just stipulate. But many of the things that fall into that category do so because of mind-independent objective properties in the external world; the way we talk about things, we don't have to experience things as red for them to count as red, and I can be wrong about whether something is red or not.
As I said in the pub, I think morals are more complicated than that.
no subject
Date: 2012-08-09 01:43 pm (UTC)
Excellent point. Yes, I think I was being inclusive by identifying all the relevant desires as "I want", but you can very much choose to define "I" to include or exclude some of them.
no subject
Date: 2012-08-09 02:38 pm (UTC)
no subject
Date: 2012-08-09 11:28 am (UTC)
Developing this theme, I think that there certainly *can* be a difference between wants for now-me and wants for future-me, but that with deliberate intent these two can converge more and more, and furthermore that this convergence is desirable because it minimises the energy wasted on being conflicted.
For example, suppose I have a work email I don't want to send, not so urgent that I couldn't put it off for a couple of days, but really something I know I ought to get on with.
I don't want to write/send it, but simultaneously, I (both now-me and future-me) want to be the sort of person who gets on with things and bloody well does it and gets it over with. I (now-me) will then do a quick check of anything else that's relevant (e.g. am I in a reasonable frame of mind to send it? If I have just had a massive row with someone, I should probably avoid it in case that leaks through), but assuming that passes I will get on with it.
This makes now-me simultaneously unhappy (because it's unpleasant) and happy (because I am showing that I am being the sort of person I want to be), and makes future-me doubly happy (because it's over and done, and I have been the sort of person I want to be). This is much better than putting it off which makes now-me a little unhappy (because I'm not doing it, which is neutral as it's not actively pleasant, and I'm not being the sort of person I want to be which makes me unhappy) and leaves future-me in the position of still having to deal with it later.
So I guess my argument is that although in some sense there's a difference between one's immediate wants/instincts/emotions and one's future wants, these can be brought to alignment through beliefs about identity and self? Or something along those lines ... after all, future-me must bear some resemblance to now-me, or they wouldn't both be me.
no subject
Date: 2012-08-09 03:41 pm (UTC)
I have another post brewing about how much "what we have an urge to do" corresponds to "what will make us happy if we do it". For, say, bacteria, it's obvious that that's a rough approximation. But our brains are so good at it that we start to take it for granted and assume it MUST be the case, when actually it's only a constant (but effective) balancing act.
And that can go wrong with something like addiction, where something carries a nearly 100% urge to do it but has no benefit, or the only benefit is a temporary postponing of the addiction's craving.
I think harmonising "what we have an urge to do" and "what will make us short term happy" and "what will make us long term happy" is not automatic, but is something we aspire towards.