Akrasia and Morality
Aug. 9th, 2012 10:19 am

Akrasia
Is there a difference between "what I do" and "what I want to do"?
In fact, it looks a bit like a paradox. There's a very real sense in which what someone acts on is a better meaning for "what they want" than what they SAY they want. But we're also all familiar with wanting to break a habit, and yet apparently being unable to do so.
There is a Greek word, "Akrasia" (http://en.wikipedia.org/wiki/Akrasia), meaning "to act against your own best interests", where "best interests" is a bit subjective but we get the general idea. The concept has been adopted by many rationalist devotees/self-improvers (http://lesswrong.com/lw/h7/selfdeception_hypocrisy_or_akrasia/).
The idea is, there IS a difference between what we want immediately and what we want longer term. It may be unfair to call long-term wants what we "really" want, and there's still a difference between "what we want" and "what would be most likely to make us happy if we got it", but long-term wants can be just as valid as immediate wants.
For instance, someone who really wants a cigarette, but really wants to give up smoking, may be in the position of choosing between immediate and longer-term wants.
When we talk about someone having will-power, or someone being logical, what we really mean is someone who can weigh their immediate and long-term wants objectively, without automatically following emotions/instincts. (When we talk about someone who is OVER-logical, we often mean someone who discounts their immediate pleasure too much.)
Is that an apt description of the difference?
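For what it's worth, here is one toy way of making that weighing concrete; a minimal sketch with made-up numbers and a made-up discount factor, not a claim about how anyone actually decides:

```python
# A toy sketch (illustrative numbers only): treat "weighing immediate
# against long-term wants" as comparing an immediate payoff with a
# delayed payoff shrunk by a discount factor.

def weigh(immediate_value, long_term_value, discount=0.5):
    """Return which want 'wins' once the long-term want is discounted.

    discount=1.0 would mean treating future wants as fully as real as
    present ones; smaller values stand in for impulsiveness.
    """
    discounted_long_term = long_term_value * discount
    if discounted_long_term >= immediate_value:
        return "long-term want wins"
    return "immediate want wins"

# The smoker example: the cigarette now vs. giving up smoking.
print(weigh(immediate_value=3, long_term_value=10, discount=0.5))  # long-term want wins
print(weigh(immediate_value=3, long_term_value=10, discount=0.2))  # immediate want wins
```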
Morality
Is there a difference between "a moral action" and "an action I want someone to do", without an objective standard of morality? I know people are prone to see a difference even when it isn't there, which makes me suspicious of anything I might suggest, but it's sensible to think about any proposals and not dismiss them out of hand. It may not be something other than what I want, but might it be a different type of what I want?
If we have a distinction between "wants for now-me" and "wants for future-me", I wonder if we could draw a similar distinction between "wants for me" and "wants for everyone else".
That is, is there a recognisable difference between "what I would enjoy" and "what I would like because it would make someone I like happy" and "what I feel I should do because someone would do it for me" and "what I should do for someone else because it's the right thing, even if no-one else thinks so", even if you can only infer what's going on in someone else's head?
I think there is: people recognise a difference between "what they should do" and "what they'd like to do", and what they DO do is governed at a particular moment by where they currently fall on a scale between thinking "of course I'll do what I should do" and thinking "I'm overdue for something just for me". However, I'm not sure if I can actually test that or if it's just speculation.
With little indiscretions, I think people do see a difference between "I know it's against the law, but I think it's ok" and "I know I shouldn't do this if I had infinite amounts of time and money to fix every world problem however small, but in the real world, there's no realistic way to avoid doing X". And I'm inclined to think that even people who do bigger bad things are probably thinking in the same way: "well, yeah, ideally I wouldn't've killed him/her, but you know, what can you do?" And morality for a person is something like "those things they think they would do in a magically perfect world where they could", somehow combined with what they prioritise when they put it into practice. But I don't know if that point of view is actually valid for other people or not.
no subject
Date: 2012-08-09 03:23 pm (UTC)

I agree there are concepts like "light of frequency in [this range] or [that range]" which can be determined independently of human perception, but is only singled out as a relevant concept worth paying attention to because we happen to perceive it directly.
suppose that deciding whether something is moral or not involves solving some big intractable NP-complete problem
To some extent that is what we DO have, assuming people agree more-or-less on principles like "people having jobs, income, food, shelter, etc. is good" but disagree about which economic policies will bring that about. Which may be what you meant :)
no subject
Date: 2012-08-09 04:04 pm (UTC)

Oh, it's far more complicated than that: there's intensity, and which other frequencies are also present. Consider grey. Imagine trying to display a grey rectangle, and nothing else, using a data projector pointed at a white surface. Now consider that grey rectangle as part of a standard computer desktop. This is why I mentioned pink and brown.
only singled out as a relevant concept worth paying attention to because we happen to perceive it directly
This is a nice turn of phrase, but I'm a bit concerned about "perceive it directly". Does it matter that the fact that I have red, green and blue cones isn't something that is obvious to me, but that I have to learn it by science? That without science, I don't know that light is the sort of thing to have a wavelength? That various different spectra could produce the same physiological response? That what's available to consciousness, language etc. has been heavily pre-processed? If I stand too close to an exploding atom bomb, and get obliterated just as the signals from the first light of the blast are halfway along my optic nerve, did I perceive the light of the blast or not (well, see, at any rate)? Does it matter that young children don't need to know all this stuff in order to use "red" correctly, that they can learn "red" earlier than the language needed to discuss this stuff?
Still, apart from that, good thought.
Which may be what you meant :)
Well, it wasn't what was originally intended, but I did think between writing and posting, "now I mention it, some things are a bit like that".
no subject
Date: 2012-08-09 04:38 pm (UTC)

OK, I'm sorry, I assumed that we both shared an understanding of what sort of complexities would go into a definition of "red" and I could give the general idea in half a sentence without spending fifty paragraphs explaining the intricacies with footnotes. But apparently that totally backfired.
It is my contention that which exact parts of colourspace count as "red" is a fuzzy language issue, like what exactly counts as "table", but that the mapping from the physical state of the photons to the colourspace is, while complicated, and arbitrary to anyone who didn't evolve the same way we did, in principle deterministic.
I apologise for trivialising the mapping: I agree it's complicated and you need to know more optics than me to specify it exactly. But do you agree that it is in principle deterministic even if arbitrary?
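To make the "deterministic even if arbitrary" point concrete, here is a minimal sketch; the curve shapes, numbers and names are invented for illustration, not real cone-response data:

```python
import math

# Toy "cone" sensitivity curves: (peak wavelength in nm, width).
# Invented numbers for illustration only, not real L/M/S cone data.
TOY_CONES = {
    "long": (565.0, 60.0),
    "medium": (540.0, 55.0),
    "short": (445.0, 40.0),
}

def sensitivity(wavelength_nm, peak, width):
    """Toy bell-curve response of one cone type to one wavelength."""
    return math.exp(-((wavelength_nm - peak) / width) ** 2)

def cone_response(spectrum):
    """Map a spectrum {wavelength_nm: intensity} to a response triple.

    The mapping is arbitrary in the sense that it depends entirely on
    the curves fixed above, but it is deterministic: the same spectrum
    always gives the same triple. Two physically different spectra can
    still give the same triple (metamers).
    """
    return {
        name: sum(intensity * sensitivity(wl, peak, width)
                  for wl, intensity in spectrum.items())
        for name, (peak, width) in TOY_CONES.items()
    }

# A narrow-band reddish light and a two-band mixture, both run through
# the same fixed rule.
print(cone_response({650.0: 1.0}))
print(cone_response({620.0: 0.6, 700.0: 0.5}))
```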
This is a nice turn of phrase
Thank you.
but I'm a bit concerned about "perceive it directly".
Good question. I think I made the mistake of conflating the sensation of "red" with the type of photons that are described as "red".
It seems like there's three parts (so far) to the question of what counts as "red":
1. The inherent ambiguity in language, the same as "does this count as a table".
2. The unusual complexity in what sorts of collections of photons humans mostly count as red (more complicated than "which waves in air count as sound").
3. Variation in what causes humans to have a sensation similar to the one they have when they perceive photons as in #2, e.g. people with synaesthesia may see "red" at other times.
It is common to use "red" to mean either "the sensation as described in #3" or "collections of photons which most humans agree produce a consistent sort of sensation, as in #2 (even if they can't know whether they all experience the SAME sensation)".
The third sense is only really meaningful to one person; we don't yet have any way [1] of comparing them between people, let alone saying "if a human were looking at a red dinosaur, would they experience the same sensation as looking at a red something else?" (Probable, but untestable.)
However, in the second sense, it seems equally reasonable to say "the dinosaur is red" as "the dinosaur made a sound", so long as the photons or air molecules are of the sort that humans have labelled, even if there are no humans around to do the labelling.
The examples you gave led me to think you were talking about the second sort, where colour is an especially prominent example.
I apologise for "perceive it directly"; perhaps "perceive" is a better description. I agree that the conversation is meaningless if you drag in the third sort of distinction. I only intended to say, different humans tend to share a response of "these are the same colour" and "these are not" that causes them to identify the concept "red", while not identifying other combinations of bands of colour.
[1] With possible exceptions such as telling a blind person "the blue wall looks a bit like a mountain stream feels, the red wall looks a bit like direct sunshine feels", etc. I think those are probably cultural, but there may be some universals in, maybe...
no subject
Date: 2012-08-10 12:48 pm (UTC)

As well as light there are surfaces, and other things. Consider the light being given off by a bit of red card in a dimly-lit room, and consider the same bit of card in direct sunlight on a bright day. Other questions: is the blood that's in my heart right now red or black or colourless? Where on your computer screen is it red right now?
You've seen this optical illusion, where two greys that are the same shade light-intensity-wise look different - our visual system seems to be running some heuristic to recover the properties of the surface, not just of the light, and what we experience is in some way a reconstruction of that surface. Of course, what we're really looking at here is a computer screen, which is why I say "heuristic". It's remarkable how people can talk about optical illusions and share their experiences. Also, optical illusions don't disappear when you know about them, and neither can they be willed out of existence; at any rate, not trivially.
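(As an aside, that setup is easy to reproduce; a minimal sketch, assuming Pillow is installed, with sizes and filename chosen arbitrarily:)

```python
# Two patches with exactly the same RGB grey, one on a dark background
# and one on a light background: the classic simultaneous-contrast setup.
from PIL import Image, ImageDraw

img = Image.new("RGB", (400, 200))
draw = ImageDraw.Draw(img)

# Left half dark, right half light.
draw.rectangle([0, 0, 199, 199], fill=(40, 40, 40))
draw.rectangle([200, 0, 399, 199], fill=(220, 220, 220))

# Identical mid-grey squares on each half.
grey = (128, 128, 128)
draw.rectangle([60, 60, 140, 140], fill=grey)
draw.rectangle([260, 60, 340, 140], fill=grey)

img.save("simultaneous_contrast.png")
# Both inner squares are (128, 128, 128), yet most viewers report that
# they look like different shades: the visual system is estimating
# surface properties, not just reading off pixel intensity.
```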
Aaanyway... the quibbling over "objective" was to make a point: I think... the category "red" exists because of the nature of subjective experience - and attempting to fully define that category is consequently a very hard job (pointing to it, OTOH, is easy) - it's not something you can just stipulate - but many of the things that fall into that category do so because of mind-independent objective properties in the external world. Given the way we talk about things, we don't have to experience things as red for them to count as red, and I can be wrong about whether something is red or not.
As I said in the pub, I think morals are more complicated than that.