Date: 2016-03-01 11:20 pm (UTC)

The problem isn't that I know so little. The problem is that I know quite a lot, but have difficulty formalising it. If I have literally no information, then I can use the Principle of Insufficient Reason to say 0.5, or something like that - in various situations there are various ways of constructing uninformative priors that could be used directly as probabilities if no further information is available.
There's Laplace's classic study of the probability of the sun coming up tomorrow, given previous observations of it coming up, and the resulting rule of succession is a nice, neat result which underlies all sorts of thinking, including a good deal of machine learning. However, Laplace himself points out that the result seems absurdly underconfident: knowing all that we know, we'd assign a much higher probability. But how much higher?
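To make the underconfidence concrete, here's a minimal sketch of Laplace's calculation in Python (the sunrise count is Laplace's own figure of 5000 years of recorded history, roughly 1,826,213 days; the function name is mine):

```python
def rule_of_succession(successes, trials):
    """Laplace's rule of succession: the posterior mean of a Bernoulli
    parameter under a uniform prior, after `successes` out of `trials`."""
    return (successes + 1) / (trials + 2)

# With no data at all, this reduces to the Principle of Insufficient Reason:
rule_of_succession(0, 0)  # 0.5

# Laplace's sunrise example: 5000 years of uninterrupted sunrises.
p_sunrise = rule_of_succession(1_826_213, 1_826_213)
# Odds of about 1,826,214 to 1 on the sun rising tomorrow - which, as
# Laplace notes, is far too low for anyone who knows any astronomy.
```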
Likewise with your possibly-red item. You've had free choice in formulating your example, so what sort of person are you - the sort to go for a question with a positive answer, or with a negative one? I could recite the whole Iocaine Powder skit with minor modifications and that would just scratch the surface. I have scads and scads and scads of data - and things too ill-formulated to be called data - about your personality, and no good way to consciously formulate it. And anything could have a bearing on anything else: there are huge sample-size surveys in psychology, and with samples that large virtually all correlations turn out to be statistically significant (although some are tiny - and what's a Bayesian doing worrying about "statistical significance" in those terms anyway?).
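The point about huge samples can be sketched with a standard significance test for a correlation coefficient (this is my illustration, not from any particular survey; the numbers are made up to show the effect):

```python
import math

def corr_z(r, n):
    """Test statistic for H0: true correlation is zero, using the
    Fisher transformation of a sample correlation r from n pairs."""
    z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher transform of r
    return z * math.sqrt(n - 3)

def two_sided_p(z):
    """Two-sided tail probability under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

# A negligible correlation, r = 0.005, in a sample of a million people:
z = corr_z(0.005, 1_000_000)   # about 5 standard errors from zero
p = two_sided_p(z)             # far below any conventional threshold
```

So with a million respondents, a correlation that explains 0.0025% of the variance still comes out wildly "significant" - significance measures how sure you are the effect isn't exactly zero, not whether it matters.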
ETA Inner Fat Tony: so let me get this straight. You're starting off with this "nice" model of probability with all sorts of neat properties, and then you're extrapolating so wildly about formalising so many different things that you're basically talking about building a super AI. In which case I have to say, "I'll believe it when I see it".
Inner Doctor John: well, erm, can I enter a plea bargain? If you throw in some simplifying assumptions you can get some quite nifty machine learning systems which are doing amazing things even today.
Inner Fat Tony: Yeah, yeah. I'll still believe it when I see it. It's one thing to build a machine learning system that's amazing - like a dog walking on its hind legs - and another thing to build one that's actually useful.
Inner Doctor John: well, there's stuff people will pay good money for.
Inner Fat Tony: Hey! That's my line! I've got trademarks on it and everything!
at which point a fight ensues.