Showing posts with label threshold. Show all posts

Monday, January 20, 2025

Open-mindedness and epistemic thresholds

Fix a proposition p, and let T(r) and F(r) be the utilities of assigning credence r to p when p is true and false, respectively. The utilities here might be epistemic or of some other sort, like prudential, overall human, etc. We can call the pair T and F the score for p.

Say that the score T and F is open-minded provided that expected utility calculations based on T and F can never require you to ignore evidence, assuming that evidence is updated on in a Bayesian way. Assuming the technical condition that there is another logically independent event (else it doesn’t make sense to talk about updating on evidence), this turns out to be equivalent to saying that the function G(r) = rT(r) + (1−r)F(r) is convex. The function G(r) represents your expected value for your utility when your credence is r.

If G is a convex function, then it is continuous on the open interval (0,1). This implies that if one of the functions T or F has a discontinuity somewhere in (0,1), then the other function has a discontinuity at the same location. In particular, the points I made in yesterday’s post about the value of knowledge and anti-knowledge carry through for open-minded and not just proper scoring rules, assuming our technical condition.

Moreover, we can quantify this discontinuity. Given open-mindedness and our technical condition, if T has a jump of size δ at credence r (e.g., in the sense that the one-sided limits exist and differ by δ), then F has a jump of size rδ/(1−r) at the same point. In particular, if r > 1/2, then if T has a jump of a given size at r, F has a larger jump at r.
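Written out, the jump relation is just a cancellation of one-sided limits in G (a sketch using the notation above):

```latex
% G(r) = r T(r) + (1-r) F(r) is continuous on (0,1) if it is convex.
% Taking one-sided limits at a jump point r of T:
G(r^{+}) - G(r^{-}) = r\,\bigl(T(r^{+}) - T(r^{-})\bigr)
                    + (1-r)\,\bigl(F(r^{+}) - F(r^{-})\bigr) = 0,
% so a jump of size \delta in T forces a jump in F of size
\bigl|F(r^{+}) - F(r^{-})\bigr| = \frac{r}{1-r}\,\delta.
```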

I think this gives one some reason to deny that there are epistemically important thresholds strictly between 1/2 and 1, such as the threshold between non-belief and belief, or between non-knowledge and knowledge, even if the location of the threshold depends on the proposition in question. For if there are such thresholds, imagine a proposition p with the property that it is very important to reach the threshold if p is true, while one’s credence matters very little if p is false. In such a case, T will have a larger jump at the threshold than F. But since the threshold is above 1/2, open-mindedness requires F’s jump to be the larger one, and so we have a violation of open-mindedness.

Here are three examples of such propositions:

  • There are objective norms.

  • God exists.

  • I am not a Boltzmann brain.

There are two directions to move from here. The first is to conclude that because open-mindedness is so plausible, we should deny that there are epistemically important thresholds. The second is to say that in the case of such special propositions, open-mindedness is not a requirement.

I wondered initially whether a similar argument doesn’t apply in the absence of discontinuities. Could one have T and F be open-minded even though T continuously increases a lot faster than F decreases? The answer is positive. For instance, the pair T(r) = e^(10r) and F(r) = −r is open-minded (though not proper), even though T increases a lot faster than F decreases. (Of course, there are other things to be said against this pair. If that pair is your utility, and you find yourself with credence 1/2, you will increase your expected utility by switching your credence to 1 without any evidence.)
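As a sanity check on this example, one can verify numerically that G(r) = rT(r) + (1−r)F(r) has nonnegative second differences on a grid, and that moving from credence 1/2 to credence 1 raises expected utility. This is a quick sketch of mine, not a proof:

```python
import math

# The pair from the text: T(r) = e^(10r), F(r) = -r.
def T(r): return math.exp(10 * r)
def F(r): return -r
def G(r): return r * T(r) + (1 - r) * F(r)  # expected utility at credence r

# Open-mindedness amounts to convexity of G: check second differences on a grid.
h = 1e-3
assert all(G(r - h) - 2 * G(r) + G(r + h) >= 0
           for r in (i / 1000 for i in range(2, 999)))

# Impropriety: at credence 1/2, jumping to credence 1 raises expected utility.
assert 0.5 * T(1.0) + 0.5 * F(1.0) > G(0.5)
```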

Tuesday, January 24, 2023

Thresholds and precision

In a recent post, I noted that it is possible to cook up a Bayesian setup where you don’t meet some threshold, say for belief or knowledge, with respect to some proposition, but you do meet the same threshold with respect to the claim that after you examine a piece of evidence, then you will meet the threshold. This is counterintuitive: it seems to imply that you can know that you will have enough evidence to know something even though you don’t yet. In a comment, Ian noted that one way out of this is to say that beliefs do not correspond to sharp credences. It then occurred to me that one could use the setup to probe the question of how sharp our credences are and what the thresholds for things like belief and knowledge are, perhaps complementarily to the considerations in this paper.

For suppose we have a credence threshold r and that our intuitions agree that we can’t have a situation where:

  a. we have transparency as to our credences,

  b. we don’t meet r with respect to some proposition p, but

  c. we meet r with respect to the proposition that we will meet the threshold with respect to p after we examine evidence E.

Let α > 0 be the “squishiness” of our credences. Let’s say that for one credence to be definitely bigger than another, their difference has to be at least α, and that to definitely meet (fail to meet) a threshold, we must be at least α above (below) it. We assume that our threshold r is definitely less than one: r + α ≤ 1.

We now want this constraint on r and α:

  1. We cannot have a case where (a), (b) and (c) definitely hold.

What does this tell us about r and α? We can actually figure this out. Consider a test for p that has no false negatives but a false positive rate of β. Let E be a positive test result. Our best bet at generating a counterexample to (a)–(c) is to make the prior for p as close to r as possible while still definitely below it, i.e., to make the prior r − α. Making the prior that large makes (c) easier to definitely satisfy while keeping (b) definitely satisfied. Since there are no false negatives, the posterior for p will be:

  2. P(p|E) = P(p)/P(E) = (r−α)/((r−α) + β(1−(r−α))).

Let z = (r−α) + β(1−(r−α)) = (1−β)(r−α) + β. This is the prior probability of a positive test result. We will definitely meet r on a positive test result just in case we have (r−α)/z = P(p|E) ≥ r + α, i.e., just in case

  3. z ≤ (r−α)/(r+α).

(We definitely won’t meet r on a negative test result.) Thus to get (c) definitely true, we need (3) to hold as well as the probability of a positive test result to be at least r + α:

  4. z ≥ r + α.

Note that by appropriate choice of β, we can make z be anything between r − α and 1, and the right-hand-side of (3) is at least r − α since r + α ≤ 1. Thus we can make (c) definitely hold as long as the right-hand-side of (3) is bigger than or equal to the right-hand-side of (4), i.e., if and only if:
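To make the construction concrete, here is a small numerical instance. The values r = 0.6, α = 0.05, and β = 1/3 are illustrative choices of mine, not from the post:

```python
r, alpha = 0.6, 0.05      # threshold and squishiness (illustrative values)
prior = r - alpha         # prior for p: definitely below the threshold
beta = 1 / 3              # false positive rate of the test

# Prior probability of a positive test result (the test has no false negatives):
z = (1 - beta) * prior + beta
# Bayes update on a positive result:
posterior = prior / z

assert prior <= r - alpha        # (b): we definitely fail to meet r
assert posterior >= r + alpha    # a positive result definitely meets r
assert z >= r + alpha            # (c): we definitely expect to meet the threshold
print(z, posterior)              # roughly 0.7 and 0.786
```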

  5. (r+α)² ≤ r − α

or, equivalently:

  6. α ≤ (1/2)(√(1+6r−3r²) − 1 − r).

It’s in fact not hard to see that (6) is necessary and sufficient for the existence of a case where (a)–(c) definitely hold.

We thus have our joint constraint on the squishiness of our credences: bad things happen if our credences are so precise as to make (6) true with respect to a threshold r for which we don’t want (a)–(c) to definitely hold. The easiest scenario for making (a)–(c) definitely hold is a binary test with no false negatives. What exactly (6) says about α depends on where the relevant threshold lies. If the threshold r is 1/2, the squishiness α is 0.15. That’s surely higher than the actual squishiness of our credences. So if we are concerned merely with the more-likely-than-not threshold, we can’t avoid the paradox: there will be cases where our credence is definitely below the threshold while it’s definitely above the threshold that examining the evidence will push us above the threshold.

But what’s a reasonable threshold for belief? Maybe something like 0.9 or 0.95. At r = 0.9, the squishiness needed for paradox is α = 0.046. I suspect our credences are more precise than that. If we agree that the squishiness of our credences is less than 4.6%, then we have an argument that the threshold for belief is more than 0.9. On the other hand, at r = 0.95, the squishiness needed for paradox is 2.4%. At this point, it becomes more plausible that our credences lack that kind of precision, but it’s not clear. At r = 0.98, the squishiness needed for paradox dips below 1%. Depending on how precise we think our credences are, we get an argument that the threshold for belief is something like 0.95 or 0.98.
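The threshold–squishiness pairs above can be reproduced directly from (6); a quick sketch:

```python
import math

def squishiness_for_paradox(r):
    # Right-hand side of (6): the largest squishiness at which a case
    # definitely satisfying (a)-(c) can be cooked up for threshold r.
    return 0.5 * (math.sqrt(1 + 6 * r - 3 * r**2) - 1 - r)

for r in (0.5, 0.9, 0.95, 0.98):
    print(r, round(squishiness_for_paradox(r), 4))
# r = 0.5  -> about 0.151
# r = 0.9  -> about 0.046
# r = 0.95 -> about 0.024
# r = 0.98 -> about 0.0099 (just below 1%)
```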

Here's a graph of the squishiness-for-paradox α against the threshold r:

Note that the squishiness of our credences likely varies with where the credences lie on the line from 0 to 1, i.e., varies with respect to the relevant threshold. For we can tell the difference between 0.999 and 1.000, but we probably can’t tell the difference between 0.700 and 0.701. So the squishiness should probably be counted relative to the threshold. Or perhaps it should be correlated to log-odds. But I need to get to looking at grad admissions files now.

Monday, March 26, 2018

Thresholds and credence

Suppose we have some doxastic or epistemic status—say, belief or knowledge—that involves a credence threshold, such as that to count as believing p, you need to assign a credence of, say, at least 0.9 to p. I used to think that propositions that meet the threshold are apt to have credences distributed somewhat uniformly between the threshold and 1. But now I think this may be completely wrong.

Toy model: A perfectly rational agent has a probability space with N options and assigns equal credence to each option. There are 2^N propositions (up to logical equivalence) that can be formed concerning the N options, e.g., “option 1 or option 2 or option 3”, one for each subset of the N options.

Given the toy model, for a threshold that is not too close to 0.5, and for a moderately large N (say, 10 or more), most of the 2^N propositions that meet the threshold condition meet it just barely. The reason is this. A proposition can be identified with a subset of {1, ..., N}. The probability of the proposition is k/N, where k is the number of elements in the subset. For any integer k between 0 and N, the number of propositions that have probability k/N is then the binomial coefficient N!/(k!(N − k)!). As a function of k, this has roughly the shape of a normal distribution with standard deviation σ = √N/2 and center at N/2, and that distribution decays very fast, so most of the propositions that have probability at least k/N will have probability pretty close to k/N if k/N − 1/2 is significantly bigger than 1/√N.
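A quick count illustrates this; N = 20 and a threshold of 0.7 are illustrative choices of mine:

```python
from math import comb

N = 20
k_min = 14  # k/N >= 0.7 requires k >= 14

# Number of propositions (subsets) at each qualifying size k.
meeting = [comb(N, k) for k in range(k_min, N + 1)]
total = sum(meeting)    # all propositions meeting the threshold
barely = meeting[0]     # those meeting it at exactly k = 14, i.e. just barely

# Most of the qualifying propositions sit right at the threshold.
print(total, barely / total)
```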

I should have some graphs here, but it’s a really busy week.

Friday, March 23, 2018

Conjunctions and thresholds

Consider some positive epistemic or doxastic concept E, say knowledge or belief. Suppose that (maybe for a fixed context) E requires a credence threshold t0: a proposition only falls under E when the credence is t0 or higher.

Unless the non-credential stuff really, really cooperates, we wouldn’t expect to have closure under conjunction for all cases of E. For if p and q are cases of E that just barely satisfy the credential threshold condition, we wouldn’t expect their conjunction to satisfy it.

Question: Do we have any right to expect closure under conjunction typically, at least with respect to the credential condition? I.e., if p and q are randomly chosen distinct cases of E, is it reasonable to expect that their conjunction falls above the threshold?

Simple Model: The credences of our Es can fall anywhere between t0 and 1. Let’s suppose that the distribution of the credences is uniform between t0 and 1. Suppose, too, that distinct Es are statistically independent, so that the probability of the conjunction is the product of the probabilities.

Then there is a simple formula for the probability that the conjunction of two randomly chosen distinct Es satisfies the credential threshold condition: (t0 log t0 + (1 − t0))/(1 − t0)². (Fix one credence between t0 and 1, and calculate the probability that the other credence satisfies the condition; then integrate from t0 to 1 and divide by 1 − t0.) We can plug some numbers in.

  • At threshold 0.5, probability of conjunction above threshold: 0.61

  • At threshold 0.75, probability of conjunction above threshold: 0.55

  • At threshold 0.9, probability of conjunction above threshold: 0.52

  • At threshold 0.95, probability of conjunction above threshold: 0.51

  • At threshold 0.99, probability of conjunction above threshold: 0.502

And the limit as threshold approaches 1 is 1/2.
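The formula and the table above can be checked numerically; the Monte Carlo cross-check is a sketch of mine, not from the post:

```python
import math, random

def p_conjunction_meets(t0):
    # Probability that the product of two independent credences,
    # each uniform on [t0, 1], is still at least t0.
    return (t0 * math.log(t0) + (1 - t0)) / (1 - t0) ** 2

# Reproduce the table of thresholds.
for t0 in (0.5, 0.75, 0.9, 0.95, 0.99):
    print(t0, round(p_conjunction_meets(t0), 3))

# Monte Carlo cross-check at t0 = 0.75.
random.seed(0)
t0 = 0.75
n = 200_000
hits = sum(random.uniform(t0, 1) * random.uniform(t0, 1) >= t0
           for _ in range(n))
assert abs(hits / n - p_conjunction_meets(t0)) < 0.01
```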

So, it’s more likely than not that the conjunction satisfies the credential threshold, but on the other hand the probability is not high enough for us to say that the conjunction typically satisfies the threshold.

But the model has two limitations which will affect the above.

Limitation 1: Intuitively, propositions with positive epistemic or doxastic status are more likely to have a credence closer to the low end of the [t0, 1] interval, rather than being uniformly distributed over it. This is going to make the probability of the conjunction meeting the threshold be lower than the Simple Model predicts.

Limitation 2: Even without being coherentists, we would expect our doxastic states to “hang together”. Thus, typically, if p and q are propositions that have a credence significantly above 1/2, p and q will be positively statistically correlated (with respect to credences), so that P(p ∧ q) > P(p)P(q), rather than independent. This means that the Simple Model underestimates how often the conjunction is above the threshold. In the extreme case where all our doxastic states are logically equivalent, the conjunction will always meet the threshold condition. In more typical cases the correlation will be weaker, but we would still expect a significant credential correlation.

So it may well be that even if one takes into account Limitation 1, taking into account Limitation 2 will allow one to say that typically conjunctions of Es meet the threshold condition.

Acknowledgment: I am grateful to John Hawthorne for a discussion of closure and thresholds.