jack: (Default)
[personal profile] jack
https://en.wikipedia.org/wiki/Ludic_fallacy

I don't entirely like the way the original of this is slanted, but I found the examples really useful. Imagine Doctor John is naturally inclined to intellectual, abstract thinking, and Fat Tony is a streetwise hustler.

If you ask, "I have a fair coin, and I toss it twenty times and get heads every time, what is the chance that it comes up tails the next time?"

Doctor John says, "It doesn't affect the odds, a fair coin is still 50/50."

Fat Tony says, "I don't care what you said, the coin is clearly biased."

The point I want to make is that BOTH of those forms of thinking are really useful. Doctor John's way of thinking is really useful whenever you want to consider something unlikely that might be true. We KNOW general relativity and quantum mechanics are true, but our common sense screams at us that they can't be. You need to consider a specific, clear but bizarre premise to be able to work out what it would imply. And I think the same applies to some everyday situations as well.

Whereas, Fat Tony's way of thinking is really useful whenever your premises are not 100% certain (eg. all the time in real life). If the premises are certain enough it may be worth taking them at face value, but it's always worth considering at what point "this is sufficiently outside the normal range that we need to re-evaluate whether the premises still hold".

Examples

It occurred to me, one instance this often shows up is in tests supposed to measure intelligence or abstract reasoning, which involve accepting arbitrary and unrealistic premises and reasoning based on those. That's a skill, a sometimes useful skill, but if you just drop it on someone cold, you're in large part measuring "do you come from a culture/subculture where that skill is respected, taught and expected".

For that matter, the same applies in reverse: if you're looking for someone who can use common sense and avoid getting caught in bad premises, someone may not be primed to do that by default, but may be perfectly capable of picking it up if they practice a bit.

The example originally came from a book about financial crashes, where people say "but our model of the economy says this is impossible". And I agree, that's a fault that happens when Doctor Johns get self-obsessed and refuse to re-evaluate their premises ever. But that's not all Doctor Johns. And Fat Tonys make mistakes too -- eg. flat earthers. Ideally you're good at _both_.

Bayesian reasoning

In some ways, both people are wedded to their premises, it's just that Doctor John is wedded to the premise that "I should follow the rules given to me" and Fat Tony is wedded to the premise "everything will mostly fit my prior experience".

Actual reasoning might explicitly, numerically weigh up how likely those premises are to be true. Which is more likely, "the evidence the coin was fair was flawed" or "a fair coin came up heads 20 times in a row"? And depending how sure that evidence is, it could go either way. If you tested the coin yourself, verified it still has two sides, and you're in a room with it by yourself, the chance you're wrong about it being fair is really low, less than 1/1,000,000. If not, it's probably more likely someone's cheating for some reason, even if you don't know why they would think that's a good idea.
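
To make that weighing-up concrete, here's a minimal sketch in code. The 1-in-10,000 prior on someone having rigged the coin is a number I've pulled out of the air purely for illustration:

```python
# Compare two explanations for seeing 20 heads in a row.
prior_fair = 0.9999            # made-up prior: 99.99% sure the coin is fair
prior_rigged = 1 - prior_fair  # 1-in-10,000 chance someone rigged it

# Likelihood of 20 heads under each hypothesis. A rigged (two-headed or
# heavily biased) coin is assumed here to come up heads essentially always.
p_heads20_given_fair = 0.5 ** 20   # about 1 in 1,000,000
p_heads20_given_rigged = 1.0

# Bayes' rule: posterior is proportional to prior times likelihood.
post_fair = prior_fair * p_heads20_given_fair
post_rigged = prior_rigged * p_heads20_given_rigged
total = post_fair + post_rigged

print("P(fair | 20 heads)   =", post_fair / total)    # roughly 1%
print("P(rigged | 20 heads) =", post_rigged / total)  # roughly 99%
```

So even a quite strong belief that the coin is fair gets mostly overturned by twenty heads; only if the chance of the evidence being flawed is well below one in a million does the fair-coin explanation win, which is the "it could go either way" above.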

However, you often don't even need to compute the probability. Simply asking, "which is more likely, the premises are flawed, or the outcome worked out like this" often makes the answer intuitively obvious, at least to the level of "probably A", "probably B", or "could be either".

This is how I interpret Sherlock Holmes' "When you've eliminated the impossible, whatever remains, however improbable, must be the truth" and Douglas Adams' "The impossible often has a kind of integrity to it which the merely improbable lacks". I interpret the first quote as referring to things you've specifically checked and decided are really, really impossible: then one of your other assumptions, even if unlikely, must have been broken. Whereas the second quote I interpret as: you _assumed_ things were impossible, but what remains is SO improbable that it's more likely you were wrong about what you thought was impossible (eg. something really unlikely but possible undermined your assumptions: a cosmic ray hit a computer, or someone surprisingly has an identical twin sibling you didn't know about, or you had a really realistic dream...).

My life

I notice this in debugging. I have a bug, apparently in component A, B or C. I spend a certain amount of time looking for it, and at some point decide maybe my evidence that it was there was flawed. Maybe it was human error, so check (fairly soon) that it still happens. Maybe it was a compiler error (but only after I'm REALLY REALLY CERTAIN it's not in my code; it's more likely I or the docs were wrong about what the compiler was supposed to do).

And just in general, in everyday life, I'm a lot more relaxed about thinking that when something seems odd, very often what's wrong is my assumption that it couldn't happen, and there's a perfectly natural explanation if I look for it. Only if I'm REALLY REALLY SURE it's impossible will I get curious.

Date: 2016-03-01 04:24 pm (UTC)
gerald_duck: (Duck of Doom)
From: [personal profile] gerald_duck
This is reminding me of some reasoning I proffered a few years ago which seems closely related, though I complicated matters by going slightly meta and involving morality.

Was it right to switch on the LHC, given the risk of it causing a runaway fubar that annihilated the entire planet?

The physics predicted that this wouldn't happen, that the chance of it happening was vanishingly small.

Ah, but what are the odds that the physics is wrong? Is it fair to say, for example, that at least one scientific theory in a billion turns out to be disastrously wrong?

If so, that's a chance of at least one in a billion of wiping out humanity. Which means on average switching on the LHC kills more than seven people… right?
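
Spelling the sum out (a sketch, taking those rough figures at face value -- both are of course wild guesses):

```python
# Back-of-the-envelope expected deaths from switching on the LHC.
p_physics_disastrously_wrong = 1e-9   # "at least one theory in a billion"
world_population = 7e9                # roughly seven billion people at the time

expected_deaths = p_physics_disastrously_wrong * world_population
print(expected_deaths)  # 7.0 -- hence "on average ... kills more than seven people"
```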


I guess the way that maps back to your example is through the question "if somebody claims a fair coin has tossed twenty consecutive heads, what is the probability that they are wrong about the coin being fair?" and that's not actually an easy question to answer without more information.

Date: 2016-03-01 05:16 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
Then my inner "Doctor John Bayesian" jumps up and down in hopping mad fury at "what is the probability ... not easy to answer without more information". It sayeth: "Well, if you're going to be like that, then the probability is either 1 or 0, but that's no help. The Bayesian view is that probabilities are always relative to the information available; if you get more information you may get a better probability, but neither the old nor the new probability can be called the probability."

My inner Fat Tony then complains that that's not what people mean by probability, and a blazing row ensues.

One approach to take is to point out the difference between hypothetical situations and the real world; I think that hypothetical situations can be deeply and fundamentally ill-specified in a way that negates the possibility of some probabilities existing, in a way that the real world can't be.

Date: 2016-03-01 10:49 pm (UTC)
gerald_duck: (mallard)
From: [personal profile] gerald_duck
There comes a point where Bayes breaks down because we know so little, though. And talking about whether someone is correct that a coin is fair is beyond that point.

What is the probability, for example, that the thing currently in my left hand is red? How would you characterise the available information?

Date: 2016-03-01 11:20 pm (UTC)
ptc24: (Default)
From: [personal profile] ptc24
Channeling my inner Doctor John again:

The problem isn't that I know so little. The problem is that I know quite a lot, but have difficulty formalising it. If I have literally no information, then I can use the Principle of Insufficient Reason to say 0.5, or something like that - in various situations there are various ways of constructing uninformative priors that could be used directly as probabilities if no further information is available.

There's Laplace's classic study of the probability of the sun coming up tomorrow based on previous observations of the sun coming up, and there's a nice neat result which underlies all sorts of thought including a good deal of machine learning. However, Laplace himself points out that the result seems absurdly underconfident; knowing all that we know, we'd assign a much higher probability. But how much higher?
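
For concreteness, the neat result in question is Laplace's rule of succession: after seeing s successes in n trials, estimate the probability of success on the next trial as (s+1)/(n+2). A minimal sketch, with the day count made up for illustration:

```python
# Laplace's rule of succession.
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Laplace's example: the sun has risen every day of recorded history,
# here taken as roughly 5000 years (the exact count is illustrative).
days = 5000 * 365
p = rule_of_succession(days, days)
print(p)  # about 0.9999995, i.e. odds of roughly 1,825,001 to 1 that the sun
          # rises tomorrow -- which, as noted above, seems absurdly underconfident.
```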

Likewise with your possibly-red item. You've had free choice in formulating your example - what sort of person are you? The sort to go for a question with a positive answer, or with a negative answer? I could recite the whole Iocaine Powder skit with minor modifications and that would just scratch the surface. I have scads and scads and scads of data - and things too ill-formulated to be called data - about your personality, and no good way to consciously formulate it. And anything could have a bearing on anything else; there are examples of huge sample-size surveys in psychology, and with such large samples all correlations turn out to be statistically significant (although some are tiny, and what's a Bayesian doing worrying about "statistical significance" in those terms anyway?)

ETA Inner Fat Tony: so let me get this straight. You're starting off with this "nice" model of probability with all sorts of neat properties, and then you're extrapolating so wildly about formalising so many different things that you're basically talking about building a super AI. In which case I have to say, "I'll believe it when I see it".

Inner Doctor John: well, erm, can I enter a plea bargain? If you throw in some simplifying assumptions you can get some quite nifty machine learning systems which are doing amazing things even today.

Inner Fat Tony: Yeah, yeah. I'll still believe it when I see it. It's one thing to build a machine learning system that's amazing - like a dog walking on its hind legs - and another thing to build one that's actually useful.

Inner Doctor John: well, there's stuff people will pay good money for.

Inner Fat Tony: Hey! That's my line! I've got trademarks on it and everything!

at which point a fight ensues.
Edited Date: 2016-03-01 11:35 pm (UTC)

Date: 2016-03-03 11:06 pm (UTC)
flippac: Extreme closeup of my hair (Default)
From: [personal profile] flippac
So it turns out there's a history of paraconsistent logic in Buddhist traditions long predating its Western discovery, which would probably have Fat Tony complaining about how long it takes to reach the conclusion that he's probably right - but it doesn't rely on anything related to probability or fuzziness.

There's at least one good article floating around the web - the logic it covered clearly had "we don't know" and "we know this is paradoxical" as truth values.

Me? I'm just a type system nerd, I only work on things isomorphic to formal logics...
