In a
recent post, I noted that it is possible to cook up a Bayesian setup
where you don’t meet some threshold, say for belief or knowledge, with
respect to some proposition, but you do meet the same threshold with
respect to the claim that after you examine a piece of evidence, then
you will meet the threshold. This is counterintuitive: it seems
to imply that you can know that you will have enough evidence to know
something even though you don’t yet. In a comment, Ian noted that one
way out of this is to say that beliefs do not correspond to sharp
credences. It then occurred to me that one could use the setup to probe
the question of how sharp our credences are and what the thresholds for
things like belief and knowledge are, perhaps complementarily to the considerations
in this paper.
For suppose we have a credence threshold r and that our intuitions agree that
we can’t have a situation where:
- (a) we have transparency as to our credences,
- (b) we don’t meet r with respect to some proposition p, but
- (c) we meet r with respect to the proposition that we will meet the threshold with respect to p after we examine evidence E.
Let α > 0 be the
“squishiness” of our credences. Let’s say that for one credence to be
definitely bigger than another, their difference has to be at least
α, and that to definitely meet
(fail to meet) a threshold, we must be at least α above (below) it. We assume that
our threshold r is definitely
less than one: r + α ≤ 1.
We now want this constraint on r and α:
- (1) We cannot have a case where (a), (b) and (c) definitely hold.
What does this tell us about r and α? We can actually figure this out.
Consider a test for p that
has no false negatives but has a false positive rate of β. Let E be a positive test result. Our
best bet for generating a counterexample to (a)–(c) is to make the prior
for p as close to r as possible while still definitely
below it, i.e., to set the prior for p to r − α. For making the
prior that high makes (c) easier to definitely satisfy while keeping (b)
definitely satisfied. Since there are no false negatives, the posterior
for p will be:
- (2) P(p|E) = P(p)/P(E) = (r−α)/(r−α+β(1−(r−α))).
Let z = r − α + β(1−(r−α)) = (1−β)(r−α) + β.
This is the prior probability of a positive test result. We will
definitely meet r on a
positive test result just in case we have (r−α)/z = P(p|E) ≥ r + α,
i.e., just in case
- (3) z ≤ (r−α)/(r+α).
(We definitely won’t meet r
on a negative test result.) Thus to get (c) definitely true, we need (3)
to hold and the probability of a positive test result to be at
least r + α:
- (4) z ≥ r + α.
Note that by appropriate choice of β, we can make z be anything between r − α and 1, and the right-hand-side of (3) is at least
r − α since r + α ≤ 1. Thus we can satisfy
(3) and (4) together, and so make (a)–(c) definitely hold, if and only if
the right-hand-side of (3) is at least the right-hand-side of (4), i.e., if and only if:
- (5) (r+α)² ≤ r − α
or, equivalently:
- (6) α ≤ (1/2)((1+8r)^(1/2)−1−2r).
It’s in fact not hard to see that (6) is necessary and sufficient for
the existence of a case where (a)–(c) definitely hold.
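To make the argument concrete, here is a quick Python sketch (the helper names are my own, nothing standard) that computes the largest α compatible with (5) at a given threshold, and explicitly builds the no-false-negative test witnessing (a)–(c):

```python
from math import sqrt

def max_alpha(r):
    # Largest squishiness alpha satisfying (5), (r+alpha)^2 <= r - alpha,
    # i.e. the positive root of alpha^2 + (2r+1)*alpha + r^2 - r = 0.
    return (sqrt(1 + 8 * r) - 1 - 2 * r) / 2

def witness(r, alpha):
    # Build the test from the argument above: prior r - alpha, no false
    # negatives, and false-positive rate beta chosen so that
    # z = P(positive result) = r + alpha, the smallest value allowed by (4).
    prior = r - alpha
    z = r + alpha
    beta = (z - prior) / (1 - prior)  # invert z = (1 - beta)*prior + beta
    posterior = prior / z             # (2), since P(E|p) = 1
    return beta, z, posterior

r, alpha = 0.9, 0.03                  # alpha chosen below max_alpha(0.9)
beta, z, posterior = witness(r, alpha)
assert (r + alpha) ** 2 <= r - alpha  # (5) holds, so...
assert z >= r + alpha                 # ...(c) definitely holds, per (4),
assert posterior >= r + alpha         # and a positive result definitely
                                      # puts p over the threshold, per (3)
```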
We thus have our joint constraint on the squishiness of our
credences: bad things happen if our credences are so precise as to make
(6) true with respect to a threshold r for which we don’t want (a)–(c) to
definitely hold. The easiest scenario for making (a)–(c) definitely hold
will be a binary test with no false negatives. What exactly this says about α depends on where the relevant
threshold lies. If the threshold r is 1/2, the squishiness α is about 0.12. That’s surely higher than
the actual squishiness of our credences. So if we are concerned merely
with the threshold being more-likely-than-not, then we can’t avoid the
paradox, because there will be cases where our credence in p is definitely
below the threshold while our credence that examining the evidence will
push us above the threshold is itself definitely above it.
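For instance, here is one such case sketched numerically, with r = 1/2 and an assumed squishiness of α = 0.1 (the test parameters are my own illustrative choices):

```python
r, alpha = 0.5, 0.1
prior = r - alpha              # 0.40: definitely below the threshold, so (b)
beta = 0.4                     # false-positive rate; no false negatives
z = (1 - beta) * prior + beta  # P(positive result) = 0.64 >= r + alpha,
                               # so (c) definitely holds
posterior = prior / z          # 0.625 >= r + alpha: a positive result
                               # definitely pushes p over the threshold
assert z >= r + alpha and posterior >= r + alpha
```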
But what’s a reasonable threshold for belief? Maybe something like
0.9 or 0.95. At r = 0.9, the squishiness needed for
paradox is α = 0.032. I
suspect our credences are more precise than that. If we agree that the
squishiness of our credences is less than 3.2%, then we have an argument
that the threshold for belief is more than 0.9. On the other hand, at r = 0.95, the squishiness needed for
paradox is 1.6%. At this point, it becomes more plausible that our
credences lack that kind of precision, but it’s not clear. At r = 0.98, the squishiness needed for
paradox dips below 1%. Depending on how precise we think our credences
are, we get an argument that the threshold for belief is something like
0.95 or 0.98.
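One can tabulate the largest α compatible with (5) at the thresholds discussed, with a sketch like this:

```python
from math import sqrt

# Largest alpha satisfying (5), (r+alpha)^2 <= r - alpha, at each threshold r.
for r in (0.5, 0.9, 0.95, 0.98):
    alpha = (sqrt(1 + 8 * r) - 1 - 2 * r) / 2
    print(f"r = {r:.2f}: alpha = {alpha:.4f}")

# prints:
# r = 0.50: alpha = 0.1180
# r = 0.90: alpha = 0.0318
# r = 0.95: alpha = 0.0163
# r = 0.98: alpha = 0.0066
```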
Here's a graph of the squishiness-for-paradox α against the threshold
r:
Note that the squishiness of our credences likely varies with where
the credences lie on the line from 0 to 1, i.e., varies with respect to
the relevant threshold. For we can tell the difference between
0.999 and 1.000, but we probably can’t tell the difference between 0.700
and 0.701. So the squishiness should probably be measured relative to the
threshold, or perhaps in terms of log-odds. But I need to
get to looking at grad admissions files now.