Bernard Muller wrote:Please don't get frustrated. I am still digesting your math.
Okay. Here's the Khan Academy course on 'dependent probability':
https://www.khanacademy.org/math/probab ... robability
I also linked some videos previously in this thread, that may help.
Bernard Muller wrote:Peter Kirby wrote:Says who?
I said it.
Two things, though:
(a) It's not true. We can make sense of "the probability that a word was interpolated" just as well as (no more or less than) "the probability that a sentence was interpolated." The event that the sentence was interpolated is literally just the intersection of the events that each of its words was interpolated. This might be clearer with some familiarity with the set-theoretic foundations of probability, but I also think it shouldn't be hard to grasp, which leads to...
(b) You're saying this as if it "refutes" the counter-example. But that's pointless. The idea that the special form of the multiplication rule holds in general was wrong; the counter-example was only meant to illustrate that. Someone with infinite patience and time could produce infinitely many examples. In fact, there have been at least a half dozen throughout this thread so far, using various concepts. So there's no need to expend effort trying to save the idea. It's just not right.
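To make (a) concrete, here's a toy simulation (with made-up numbers, not anything from the actual texts): suppose an interpolation inserts a whole five-word phrase at once, so the per-word events are perfectly dependent. Then the probability that all five words were interpolated equals the phrase probability itself, not the product of five per-word probabilities.

```python
import random

random.seed(0)

# Hypothetical model: an interpolation inserts a whole 5-word phrase at once,
# so the events "word i was interpolated" are perfectly dependent.
trials = 100_000
p_phrase = 0.10  # assumed probability that the whole phrase was interpolated

all_words = 0
for _ in range(trials):
    interpolated = random.random() < p_phrase
    words = [interpolated] * 5  # every word shares the phrase's fate
    if all(words):
        all_words += 1

p_joint = all_words / trials  # estimated P(all 5 words interpolated)
p_naive = p_phrase ** 5       # what "multiply the marginals" would predict

print(round(p_joint, 2))   # close to 0.1, the true joint probability
print(round(p_naive, 6))   # 1e-05: wildly wrong under dependence
```

The point is not the particular numbers; it's that the joint probability depends on how the events relate, which the marginals alone don't tell you.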
Bernard Muller wrote:Well 0.95 % is certainly acceptable, according to the input data.
Ah, but it isn't. See, the "input data" are insufficient, mathematically, to reach a conclusion: they do not state the conditional probabilities, and conditional probabilities are required in order to use the multiplication rule for the intersection of events.[1] So... nope!
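Here's what the multiplication rule looks like with a small worked example (the joint distribution is invented purely for illustration): the general rule is P(A and B) = P(A) · P(B | A), and P(A) · P(B) only gives the same answer when the events happen to be independent.

```python
# Toy joint distribution over two events A and B (numbers are made up).
p = {  # P(A=a, B=b)
    (True, True): 0.30,
    (True, False): 0.10,
    (False, True): 0.05,
    (False, False): 0.55,
}

p_a = p[(True, True)] + p[(True, False)]   # marginal P(A) = 0.40
p_b = p[(True, True)] + p[(False, True)]   # marginal P(B) = 0.35
p_b_given_a = p[(True, True)] / p_a        # conditional P(B | A) = 0.75

print(round(p_a * p_b_given_a, 2))  # 0.3: the general multiplication rule, matches P(A and B)
print(round(p_a * p_b, 2))          # 0.14: what wrongly assuming independence gives
```

Without knowing P(B | A), knowing the two marginals 0.40 and 0.35 tells you nothing definite about P(A and B).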
And it contradicts any informed common sense. The idea that 1% is close to zero is a serious misunderstanding of probability. I could accept that one in a billion, or one in a trillion, is close to zero in this context. But 1% would mean that we should expect something like this in roughly 1 out of 100 similar cases, and the idea that a random jumble of nonsense like Meier's extra words just appeared here is far more exceptional than 1 in 100. I'm disappointed that this was not your intuition as well, because it would be easier if you at least understood straightforward cases like this one.
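To put numbers on why 1% isn't "close to zero": across 100 comparable independent cases, an event with probability 1% is expected to occur about once, and the chance of seeing it at least once is around 63%.

```python
# An event with probability 1% is far from impossible over many cases.
p = 0.01  # per-case probability
n = 100   # number of comparable independent cases

expected = n * p                    # expected number of occurrences
at_least_one = 1 - (1 - p) ** n     # P(at least one occurrence in n cases)

print(expected)                # 1.0
print(round(at_least_one, 3))  # 0.634
```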
I believe that the subject of "interpolations" is super-contentious and especially sensitive in this thread. It's a really bad way to learn the math concepts. I know you asked for "relatable" things, but it really isn't, because there is no shared understanding. Things that only you understand a particular way, which others understand differently, do not facilitate communication or understanding. We're better off with the card game example (say), which received no comment.
Bernard Muller wrote:I have one question for you: do you have something like what you call the "Obfuscator" in your examples with 10 & 8 events?
I think you do: by calling "evidence of interpolation": ~N (which is apparent), and "interpolation": X (which is not apparent). But I want to make sure.
The "obfuscator" is supposed to represent our uncertainty about the truth or falsity of things. In the example with the 10 and the 8 events, it was never certain whether any particular result was true or false; neither "evidence" nor "no evidence" provided full certainty. (The original game with Dr. Q. was slightly different.) However, you were right to point out that a single category does not capture the full range of our beliefs about the strength of evidence: even though those examples didn't treat 'evidence' as proof, they left no room for degrees of 'evidence'. If it really helps, I could possibly do it again with different levels of 'evidence'. But then we'd still be talking about super-contentious examples, and I don't think we're in agreement on the general concepts.
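If I were to redo it with graded evidence, the natural way is Bayes' rule: each level of evidence E gets its own likelihood values P(E | X) and P(E | ~X), and stronger evidence just means a bigger gap between them. A sketch, with entirely made-up numbers:

```python
def posterior(prior, p_e_given_x, p_e_given_not_x):
    """P(X | E) via Bayes' rule, given a prior P(X) and the two likelihoods."""
    numerator = p_e_given_x * prior
    return numerator / (numerator + p_e_given_not_x * (1 - prior))

prior = 0.5  # assumed prior for X (e.g. "interpolation"); purely illustrative

# Strong evidence: much more likely under X than under ~X.
print(round(posterior(prior, 0.9, 0.3), 3))  # 0.75

# Weak evidence: only slightly more likely under X than under ~X.
print(round(posterior(prior, 0.6, 0.5), 3))  # 0.545
```

Notice that even "strong" evidence here doesn't give certainty; it just moves the probability further from the prior than weak evidence does.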
I could do more work to try to explain things, but I'm discouraged by the criticism/arguing/etc., so I don't think it will be any better received than the previous attempts from me and from everyone else who's posted in this thread.
[1] When you know that events are independent, you do know the conditional probability: it's the same as the unconditioned probability. But this is a special case, and it's distorting your understanding, because you're treating it as completely natural that P(A | B) = P(A), when that's only one particular possible value for P(A | B) and can't be assumed, because it very frequently is not true.
"... almost every critical biblical position was earlier advanced by skeptics." - Raymond Brown