
Tuesday, October 5, 2021

Preliminary notes on Cartesian scoring rules

Imagine an agent for whom being certain that a proposition p is true has infinite value if p is in fact true. This could be a general Cartesian attitude about all propositions, or it could be a special attitude to a particular proposition p.

Here is one way to model this kind of Cartesian attitude. Suppose we have a single-proposition accuracy scoring rule s(r, i) which represents the epistemic utility of having credence r when the proposition in fact has truth value i, where i is either 0 (false) or 1 (true). The scores can range over the whole interval [−∞, ∞], and I will assume that s(r, i) is finite whenever 0 < r < 1, and continuous at r = 0 and r = 1. Additionally, I suppose that the scoring rule is proper, in the sense that the expected utility of sticking to your current credence r by your own lights is at least as good as the expected utility of any other credence. (When evaluating expected utilities with infinities, I use the rule 0 ⋅ (±∞) = 0.)

Finally, I say the scoring rule is Cartesian with respect to p provided that s(1, 1)=∞. (We might also have s(0, 0)=∞, but I do not assume it. There are cases where being certain and right that p is much more valuable than being certain and right that ∼p.)

Pretty much all research on scoring rules focuses on regular scoring rules. A regular scoring rule is allowed to assign an epistemic utility of −∞ when you are certain of a falsehood (i.e., s(1, 0) = −∞ and/or s(0, 1) = −∞), but the possibility of a +∞ epistemic utility is ruled out, and indeed epistemic utilities are taken to be bounded above. Our Cartesian rules are thus all non-regular.

I’ve been thinking about proper Cartesian scoring rules for about a day, and here are some simple things that I think I can show:

  1. They exist. (As do strictly proper ones.)

  2. One can have an arbitrarily fast rate of growth of s(r, 1) as r approaches 1.

  3. However, s(r, 1)/s(r, 0) always goes to zero as r approaches 1.

Claim (2) shows that we can value near-certainty-in-the-truth to an arbitrarily high degree, but there is a price to be paid: one must disvalue near-certainty-in-a-falsehood way more.

One thing that’s interesting to me is that (3) is not true for non-Cartesian proper scoring rules. There are bounded proper scoring rules, and then s(1, 1)/s(1, 0) can be some non-zero ratio. (Relevant to this is this post.) Thus, assuming propriety, going Cartesian—i.e., valuing certainty of truth infinitely—implies an infinitely greater revulsion from certainty in a falsehood.
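As a sanity check on claims (1)-(3), here is a minimal numerical sketch. The particular construction is my own illustration, not anything from the post: I pick a convex expected-score function G(r) = 1/(1 − r) and define the rule via the familiar Savage-style representation, so that s(r, 1) blows up as r approaches 1.

```python
# A minimal sketch (my own illustration, not a construction from the post):
# define a candidate Cartesian proper scoring rule from a convex expected-score
# function G via s(r, 1) = G(r) + (1 - r) G'(r) and s(r, 0) = G(r) - r G'(r),
# here with G(r) = 1/(1 - r), so that s(r, 1) = 2/(1 - r) -> infinity as r -> 1.
import numpy as np

def G(r):
    return 1.0 / (1.0 - r)

def dG(r):
    return 1.0 / (1.0 - r) ** 2

def s(r, i):
    """Epistemic utility of credence r when the proposition has truth value i."""
    return G(r) + (1.0 - r) * dG(r) if i == 1 else G(r) - r * dG(r)

# Propriety check: for several "true" credences r, the expected utility
# r*s(q, 1) + (1 - r)*s(q, 0) of announcing credence q should peak at q = r.
qs = np.linspace(0.001, 0.999, 999)
for r in (0.2, 0.5, 0.9):
    print(r, qs[np.argmax(r * s(qs, 1) + (1 - r) * s(qs, 0))])

# Claims (2) and (3): s(r, 1) grows without bound, yet s(r, 1)/s(r, 0) -> 0.
for r in (0.9, 0.99, 0.999, 0.9999):
    print(r, s(r, 1), s(r, 1) / s(r, 0))
```

In this toy rule the propriety comes from the convexity of G, and the printed ratios make vivid the price just mentioned: the disvalue of near-certainty in a falsehood swamps the value of near-certainty in the truth.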

A consequence of (2) is that you can have proper Cartesian scoring rules that support what one might call obsessive hypothesis confirmation: even if gathering further evidence grows increasingly costly for roughly the same Bayes factors, given a linear conversion between epistemic and practical utilities, it could be worthwhile to continue gathering evidence for a hypothesis no matter how close to certain one is. I don’t think all Cartesian scoring rules support obsessive hypothesis confirmation, however.

Tuesday, November 21, 2017

Omniscience

A standard definition of omniscience is:

  • x is omniscient if and only if x knows all truths and does not believe anything but truths.

But knowing all truths and not believing anything but truths is not good enough for omniscience. One can know a proposition without being certain of it, assigning a credence less than 1 to it. But surely such knowledge is not good enough for omniscience. So we need to say: “knows all truths with absolute certainty”.

I wonder if this is good enough. I am a bit worried that maybe one can know all the truths in a given subject area but not understand how they fit together—knowing a proposition about how they fit together might not be good enough for this understanding.

Anyway, it’s kind of interesting that even apart from open theist considerations, omniscience isn’t quite as cut and dried as one might think.

Monday, March 21, 2016

The certainty of faith

There is a strong Christian tradition of seeing faith as involving certainty. Now, perhaps this certainty is just something like moral certainty (for a sophisticated account, see this) or what I call "security" or "sureness". But it is worth exploring the possibility that faith involves certainty in the full sense of the word, requiring a probability of one to be assigned to those propositions that come under faith (this does not imply that faith is exhausted by believing propositions).

There are at least two problems with a certainty reading of faith. The first is with justification: How could someone be justified in assigning a probability of one to propositions as controversial as those of the Christian faith? The second starts with an empirical claim: Typical Christians do not have certainty. Given this, it follows that if faith requires certainty, then typical Christians do not have faith, which is definitely depressing and perhaps not so plausible.

But I think there is a way around both difficulties. The Christian tradition sees faith as a gift of the Holy Spirit. It does not seem problematic that the Holy Spirit would infuse someone with a certainty about a truth that the Holy Spirit himself knows with certainty. There are at least two possibilities here. First, it could be that the right kind of externalism (e.g., reliabilism or reformed epistemology) holds so that the certainty of the beliefs that come from such an infusion is epistemically justified. Second, it could be that there is nothing bad about having epistemically unjustified beliefs when they are in fact true and when the agent is not at fault for their formation and maintenance. It would be better to have justification as well, but it's better to have the true beliefs than to suspend judgment. And, plausibly, the story of the infusion of belief could be spelled out in a way that does not leave the agent at fault.

Regarding the empirical problem, here I am less confident. But here is a suggestion. What makes us think that typical Christians do not have certainty? Presumably that they report as much. Or so I'll assume for the sake of the argument, without spending time checking whether there are any worldwide studies on whether typical Christians report lacking certainty.

Presumably, introspection is the primary reason why Christians report not having certainty. But I suspect that introspection is not a very reliable guide in a case like this. It seems to me that there are two primary ways by which we introspect the credence we have in a belief. The first is that we introspect to the evidence we take ourselves to have and assume that our credence matches the evidence. But when our certainty goes beyond the evidence, or at least goes beyond the evidence that we are aware of, this isn't going to be a reliable guide to the credence. The second is direct awareness (often comparative) of our credences. Often this is based on feelings of confidence. But such feelings are, I think, not all that reliable. They provide evidence as to actual confidence, but that evidence is not all that strong. While we should try to avoid error theories all other things being equal, it does not seem so bad to say that Christians tend to be wrong when they ascribe to themselves a credence lower than one.

I am inclined to take the best reading of the Christian tradition to be that faith comes with certainty. I also think that I have faith, but I do not in fact feel certain (at least not in the probability one sense). Given that my best reading of the Christian tradition is that faith comes with certainty, I conclude that probably I am certain, notwithstanding my feelings.

Thursday, January 29, 2015

Wanting to be even more sure

We like being sure. No matter how high our confidence, we have a desire to be more sure, which taken to an extreme becomes a Cartesian desire for absolute certainty. It's tempting to dismiss the desire for greater and greater confidence, when one already has a very high confidence, as irrational.

But the desire is not irrational. Apart from certain moral considerations (e.g., respecting confidentiality), a rational person does not refuse costless information (pace Lara Buchak's account of faith). No matter how high my confidence, as long as it is less than 100%, I may be wrong, and by closing my ears to free data I close myself to being shown to have been wrong, i.e., I close myself to truth. I may think this is not a big deal. After all, if I am 99.9999% sure, then I will think it quite unlikely that I will ever be shown to have been wrong. After all, to be shown to be wrong, I have to actually be wrong ("shown wrong" is factive), and I think the probability that I am wrong is only 0.0001%. Moreover, even if I'm wrong, quite likely further evidence won't get me the vast distance from being 99.9999% sure to being unsure. So it seems like not a big deal to reject new data. Except that it is. First, I have lots of confident beliefs, and while it is unlikely for any particular one of my 99.9999%-sure beliefs to be wrong, the probability that some one of them is wrong is quite a bit higher. And, second, I am a member of a community, and for Kantian reasons I should avoid epistemic policies that make an exception of myself. And of course I want others to be open to evidence even when 99.9999% sure, if only because sometimes they are 99.9999% sure of the negation of what I am 99.9999% sure of!

So we want rational people to be open to more evidence. And this puts a constraint on how we value our levels of confidence. Let's say that I do value having at least 99.9999% confidence, but above that level I set no additional premium on my confidence. Then I will refuse costless information when I have reached 99.9999% confidence. I will even pay (perhaps a very small amount) not to hear it! For there are two possibilities. The new evidence might increase my confidence or it might decrease it. If it increases it, I gain nothing, since I set no additional premium on higher confidence. If it decreases it, however, I am apt to lose (this may require some tweaking of the case). And a rational agent will pay to avoid a situation where she is sure to gain nothing and has a possibility of losing.

So it's important that one's desire structure be such that it continue to set a premium on higher and higher levels of confidence. In fact, the desire structure should not only be such that one wouldn't pay to close one's ears to free data, but it should be such that one would always be willing to pay something (perhaps a very small amount) to get new relevant data.

Intuitively, this requires that we value a small increment in confidence more than we disvalue a small decrement. And indeed that's right.
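To make the point concrete, here is a toy numerical sketch; the threshold, the likelihoods, and the two value functions are my own stipulations, chosen only for illustration. A value function that plateaus at 99.9999% makes the free evidence worth a small fee to avoid, while a strictly convex one does not.

```python
# A toy sketch (my own numbers): the prior credence sits at a 99.9999% threshold,
# and a free piece of evidence E has the stipulated likelihoods below.
THETA = 0.999999             # the confidence threshold
r0 = THETA                   # current credence in hypothesis H
p_e_given_h = 0.99           # assumed P(E | H)
p_e_given_not_h = 0.01       # assumed P(E | ~H)

p_e = r0 * p_e_given_h + (1 - r0) * p_e_given_not_h   # P(E)
post_if_e = r0 * p_e_given_h / p_e                     # P(H | E), just above the threshold
post_if_not_e = r0 * (1 - p_e_given_h) / (1 - p_e)     # P(H | ~E), below the threshold

def v_plateau(r):
    """Values confidence up to the threshold and sets no premium above it."""
    return min(r, THETA)

def v_convex(r):
    """A strictly convex valuation of confidence."""
    return r * r

for v in (v_plateau, v_convex):
    expected_after_looking = p_e * v(post_if_e) + (1 - p_e) * v(post_if_not_e)
    print(v.__name__, expected_after_looking - v(r0))
# v_plateau: negative -- this agent would pay a small amount to avoid the free data.
# v_convex: positive  -- looking at the data is worth it, as the convexity condition predicts.
```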

So our desire for greater and greater confidence is indeed quite reasonable.

There is a lesson in the above for the reward structure in science. We should ensure that the rewards in science—say, publishing—do not exhibit thresholds, such as a special premium for a significance level of 0.05 or 0.01. Such thresholds in a reward structure inevitably reward irrational refusals of free information. (Interestingly, though, a threshold for absolute certainty would not reward irrational refusals of free information.)

I am, of course, assuming that we are dealing with rational agents, ones that always proceed by Bayesian update, but who are nonetheless asking themselves whether to gather more data or not. Of course, an irrational agent who sets a high value on confidence is apt to cheat and just boost her confidence by fiat.

Technical appendix: In fact to ensure that I am always willing to pay some small amount to get more information, I need to set a value V(r) on the credence r in such a way that V is a strictly convex function. (The sufficiency of this follows from the fact that the evolving credences of a Bayesian agent are a martingale, and a convex function of a martingale is a submartingale. The necessity follows from some easy cases.)

This line of thought now has a connection with the theory of scoring rules. A scoring rule measures our inaccuracy—it measures how far we are from truth. If a proposition is true and we assign credence r to it, then the scoring rule measures the distance between r and 1. Particularly desirable are strictly proper scoring rules. Now for any (single-proposition) scoring rule, we can measure the agent's own expectation as to what her score is. It turns out that the agent's expectation as to her score is a continuous, bounded, strictly concave function ψ(r) of her credence r and that every continuous, bounded, strictly concave function ψ defines a scoring rule such that ψ(r) is the agent's expectation of her score. (See this paper.) This means that if our convex value function V for levels of confidence is bounded and continuous—not unreasonable assumptions—then that value function V(r) is −ψ(r) where ψ(r) is the agent's expectation as to her score, given a credence of r, according to some strictly proper scoring rule.

In other words, assuming continuity and boundedness, the consideration that agents should value confidence in such a way that they are always willing to gather more data means that they should value their confidence in exactly the way they would if their assignment of value to their confidence were based on self-scoring their accuracy (i.e., calculating their expected value for their score).
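As a concrete instance of this correspondence (a worked example of my own, using the Brier score as the strictly proper inaccuracy measure): the self-expected score is ψ(r) = r(1 − r), which is continuous, bounded and strictly concave, and the induced value of confidence V(r) = −ψ(r) is strictly convex.

```python
# A small sketch (my own instantiation) of the appendix's correspondence, using
# the Brier (quadratic) inaccuracy score as the strictly proper scoring rule.
from itertools import combinations

def brier(r, i):
    """Brier inaccuracy of credence r when the truth value is i (0 or 1)."""
    return (i - r) ** 2

def psi(r):
    """The agent's own expected Brier score at credence r: r(1 - r), strictly concave."""
    return r * brier(r, 1) + (1 - r) * brier(r, 0)

def V(r):
    """The induced value of confidence, V = -psi, strictly convex."""
    return -psi(r)

# Midpoint-convexity check for V on a grid of credences.
pts = [i / 100 for i in range(101)]
assert all(V((a + b) / 2) <= (V(a) + V(b)) / 2 + 1e-12 for a, b in combinations(pts, 2))
print(psi(0.5), V(0.5))   # 0.25 and -0.25: expected inaccuracy is worst at r = 1/2
```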

Interestingly, though, I am not quite sure that continuity and boundedness should be required of V. Maybe there is a special premium on certainty, so V is continuous within (0,1) (that's guaranteed by convexity) but has jumps—maybe even infinite ones—at the boundaries.

Wednesday, October 17, 2012

Certainty and probability

Suppose X is a number uniformly chosen in the interval [0,1] (from 0 to 1, both inclusive). Then the probability that X is not 2 is 1, and so is the probability that X is not 1/2. But intuitively it is certain that X is not 2, while X might be 1/2.

One solution is to bring in infinitesimals. We then say that the probability that X is not 2 is 1, but the probability that X is not 1/2 is 1−a, for an infinitesimal a. Unfortunately, this leads to paradoxes of nonconglomerability.

Here is an alternative. Introduce certainty operators C(p) and C(p|q) that work in parallel with the probability operators P(p) and P(p|q), subject to axioms like:

  1. If C(p), then P(p)=1.
  2. If p is a tautology, then C(p).
  3. If p entails q and C(p), then C(q).
  4. If C(p), then ~C(~p).
  5. If C(p|q) and P(q)>0, then P(p|q)=1.
  6. If C(p1 or p2 or ...) and C(q|pi) for all i, then C(q).
  7. If p entails q and ~C(~p), then C(q|p).
  8. If p entails q and C(p|r), then C(q|r).
  9. If p entails q and ~C(~q) and C(r|q), then C(r|p).
  10. If C(p1 or p2 or ...|r) and C(q|pi and r) for all i, then C(q|r).
I am not proposing these axioms. This is just a suggestion about the form a theory would have. Note that we might even require 6 and 10 for uncountable sequences.

Then, we can say that while the probability that X isn't 2 and the probability that X isn't 1/2 are the same—both are one—the former is certain while the latter isn't.

Friday, March 23, 2012

Choice and certainty

Suppose I am certain that I will choose A. (Maybe an oracle I completely trust has told me, or maybe I have in the past always chosen A; I am not saying anything about the certainty being justified; cf. this paper by David Hunt.) Can I deliberate between A and B, and choose A?

Here is an argument for a positive answer. Suppose I am 99% sure I will choose A. Clearly, I can deliberate between A and B, and freely choose either one (assuming none of the other reasons why I might be unfree apply). The same is true if I am 99.9% sure. And 99.99%. And so on. Moreover, while in such cases it may (though I am not sure even of that) become psychologically harder and harder to choose B, except in exceptional cases (see next paragraph), it should not become psychologically any harder to choose A over B. But if it does not become any harder to choose A over B, why can't I still deliberate and choose A in the limiting case where the certainty is complete?

There will be special cases where this limiting case argument fails. These will be cases where either I suffer from some contingent psychological condition that precludes a choice in the case of certainty or where in the limiting case I lose the reason I had for doing A. For instance, if I am in weird circumstances, there may be actions where the reasons I have for choosing the action depend on my not being certain that I will choose it—maybe you offer me money to choose something that I am not certain I will choose. But apart from such special cases, the probability that I will choose A is irrelevant to my deliberation. And hence it does not enter into my deliberation if I am being rational.

What enters into my deliberation whether to choose A or B are the reasons for and against choosing the options. The probabilities or even certainties of my making one or the other choice normally do not enter into deliberation.

But what if I know that it is impossible to do B? Isn't that relevant? Normally, yes, but that's because it's a straightforward reason against choosing B: one has good reason not to choose to do things which one can't succeed in doing, since in choosing such things, one will be trying and failing, and that is typically an unhappy situation.

But what if I am certain that it is impossible to choose B? Isn't that a reason against choosing B? I am not sure. After all, it might be a reason in favor of choosing B—wouldn't it be really cool to do something impossible? Maybe, though, what the impossibility of choice does is remove all the reasons in favor of choosing B, since reasons involve estimates of expected outcomes, and one cannot, perhaps, estimate the expected outcome of an impossibility.

Wednesday, November 9, 2011

48 arguments against naturalism

Consider this argument:

  1. A desire to be morally perfect is morally required for humans.
  2. If naturalism is correct, a desire to be morally perfect cannot be fulfilled for humans.
  3. If a desire cannot be fulfilled for humans, it is not morally required for humans.
  4. Therefore, naturalism is not correct.
This argument provides a schema for a family of arguments. One obtains different members of the family by replacing or disambiguating the key terms ("naturalism", "morally required", "to be morally perfect", and "cannot") in different ways.

If one disambiguates "naturalism" as physicalism (reductive or not), one gets an argument against physicalism (reductive or not). If one disambiguates "naturalism" in the Plantinga way as the claim that there is no God or anybody like God, one gets an argument for theism or something like it. Below I will assume the first disambiguation, though I think some versions of the schema will have significant plausibility on the Plantingan disambiguation.

One can replace "morally required" by such terms as "normal", "non-abnormal" or "required for moral perfection".

One can replace "to be morally perfect" by "for a perfect friendship", "to be perfectly happy" or "to know with certainty the basic truths about the nature of reality" or "to know with certainty the basic truths about ethics" or "to have virtue that cannot be lost". While (1) as it stands is quite plausible, with some of these replacements the requiredness versions of (1) become less plausible, but the "non-abnormal" version is still plausible.

Probably the hardest decision is how to understand the "cannot". The weaker the sense of "cannot", the easier it is for (2) to hold but the harder it is for (3) to hold. Thus, if we take "cannot" to indicate logical impossibility, (2) becomes fairly implausible, but (3) is very plausible as above.

I would recommend two options. The first is that the "cannot" indicate causal impossibility. In this case, (3) is very plausible. And (2) has some plausibility for "moral perfection" and all its replacements. For instance, it is plausible that if naturalism is true, certain knowledge of the basic truths about the nature of reality or about ethics is just not causally available. If, further, moral perfection requires certainty about the basic truths of ethics (we might read these as at the normative level for this argument), then moral perfection is something we cannot have. And if we cannot have moral perfection, plausibly we cannot have perfect friendship either. Likewise, if naturalism is true, virtue can always be lost due to some quantum blip in the brain, and if moral perfection requires virtue that cannot be lost, then moral perfection is also unattainable. And perfect happiness requires certain knowledge of its not being such as can be lost. Maybe, though, one could try to argue that moral perfection is compatible with the possibility of losing virtue as long as the loss itself is not originated from within one's character. But in fact if naturalism is true, it is always causally possible to have the loss of virtue originate from within one's character, say because misleading evidence could come up that convinces one that torture is beneficial to people, which then leads to one conscientiously striving to become cruel.

The second option is that the "cannot" is a loosey-goosey "not really possible", weaker than causal impossibility by not counting as possible things that are so extraordinarily unlikely that we wouldn't expect them to happen over the history of humankind. Thus, in this sense, I "cannot" sprout wings, though it seems to be causally possible for my wavefunction to collapse into a state that contains wings. Premise (2) is now even more plausible, including for all the substituents, while premise (3) still has some plausibility, especially where we stick to the "morally required" or "required for moral perfection", and make the desire be a desire for moral perfection.

If I am counting correctly, if we keep "naturalism" of the non-Plantingan sort, but allow all the other variations in the argument, we get 48 arguments against naturalism, though not all independent. Or we can disjoin the conjunctions of the premises, and get an argument with one premise that is a disjunction of 48 conjunctions of three premises. :-)
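For what it is worth, here is a quick enumeration behind that count; it is my own reconstruction of the arithmetic, taking the original wording plus the listed replacements for the first two slots and only the two recommended readings of "cannot", which gives 4 × 6 × 2 = 48.

```python
# A quick count of the argument variants (my own reconstruction of the arithmetic).
from itertools import product

requirement = ["morally required", "normal", "non-abnormal", "required for moral perfection"]
desire = [
    "to be morally perfect",
    "for a perfect friendship",
    "to be perfectly happy",
    "to know with certainty the basic truths about the nature of reality",
    "to know with certainty the basic truths about ethics",
    "to have virtue that cannot be lost",
]
cannot = ["causal impossibility", "not really possible"]

variants = list(product(requirement, desire, cannot))
print(len(variants))   # 4 * 6 * 2 = 48
```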

Saturday, July 10, 2010

Certainty of faith

The Christian tradition is clear on Christian faith being certain, a certainty due to the Holy Spirit, either in revealing the doctrines of faith or in enlightening the believer's mind or both. But as an empirical fact, Christians, even ones with a genuine living faith, do struggle in faith, and at times their belief appears quite uncertain.

If the certainty of faith is defined by a feeling of certainty, then Christians don't have certainty. But that is surely the wrong way to think about it. After all, the Christian who has living faith has faith even while asleep (otherwise, no one who died in sleep could be saved!). I suppose that was a cheap shot, since one could define certainty in terms of a disposition to feel certain in appropriate circumstances, but the whole idea of taking a feeling to be central to faith misses the phenomenon of the dark night of faith.

I want to suggest that certainty is tied to ungiveupability. In other words, a belief is the less certain, the more willing we would be to give it up.

One way of going forward from this is to spell it out in terms of commitment. My belief that p is certain to the extent that I am committed to believing p. Now, when I am committed to doing something, I am typically committed to doing it except in exceptional circumstances. And the degree of commitment can be measured by the number of exceptional circumstances and their probability—the more exceptions there are and the more likely the exceptions are to befall me, the less committed I am. Maximal commitment to A, then, would be commitment to A in all possible circumstances in which one could find oneself.

However, commitment is primarily a normative matter. One can be normatively committed to doing something even if one has no psychological attitude in favor of it. I can be normatively committed to never eating meat while (intentionally) munching on a steak—all it would take would be for me to have just promised never to eat meat. We call this "infidelity to one's commitments", and it is essential to this that the commitment still be binding on one. If commitment is a normative matter of this sort, then the fact that Christians may feel uncertain is beside the point. What matters is that they are committed, i.e., that they are under an obligation (maybe we need to add: that they themselves undertook), not that they feel committed. In other words, the Christian's certainty consists in its being the case that she is obliged to believe, no matter what.

But that can't be the whole story about certainty. The reason is that this story is compatible with unbelief. After all, if one can be unfaithful to one's commitments, and if the certainty of belief were just commitment to believe, then one could have the certainty of belief without belief. And that's absurd. We could add that to have the certainty of belief you need to believe, but the link between the belief and the certainty surely needs to be tighter. Nor will it help to add that the commitment has to be subjectively acknowledged (this subjective acknowledgment of a commitment is sometimes itself called "commitment"). For I can acknowledge an obligation not to eat meat while munching on a steak.

Here is another suggestion. Instead of reading the certainty normatively, read it causally. Here is one take on it. A person x is certain in believing p to the degree that it would be difficult to make her cease to believe p. If, given the present state of the world, it is altogether causally impossible for x to cease to believe p, then we can say that x has absolutely unshakeable certainty that p. Maybe some who believe in the doctrine Reformed folks call "the perseverance of the saints" believe that the elect have this sort of certainty. But this merely causal unshakeability may not be suited to an analysis of certainty, which in the context of the Christian faith seems to have an epistemic component: not only is it that the Christian's faith can't (in some sense to be explained shortly) be shaken, but it is appropriate, epistemically and morally, that it not be shaken. Moreover, I do not think we should commit to "the perseverance of the saints" as understood by the Reformed—there are too many warnings in Scripture about the danger of falling away (though I know the Reformed have their own readings of those). Therefore, we should allow for the possibility that x might freely choose to be irrational and stop believing.

The above remarks fit well with the following account (which I am not actually endorsing as a complete story; I like the thought that this account and my previous normative account should be conjoined):

  • x has appropriately unshakeable certainty that p if and only if x believes that p and it is causally necessary that x continue to epistemically obligatorily (or, for what would in practice tend to be a weaker definition: appropriately) believe that p so long as x refrains from gravely immoral actions.
Why "gravely"? Because the certainty would be too fragile if it could be lost by any immoral action.

Notice what appropriate unshakeability does not require. First, it does not require that there be a Bayesian credence of one in p. Nor does it require that there be no metaphysically possible evidence E such that P(p|E) is small. All that's required is that it be causally necessary either that x not observe E or that, should x observe E, something should turn up (by the grace of God, immediate or mediate) that would continue to render it epistemically obligatory (or at least permissible) to believe p. The above story is compatible with multiple explanations for the necessity claim. For instance, maybe, the Holy Spirit enlightens the mind in such a way as to internally provide evidence stronger than any possible counter-evidence. Or maybe God is committed to ensuring that one will not meet with any strong counter-evidence. Or maybe one has credence 1 and God will help one remain Bayesian-rational and not change that 1.

Whence the causal necessity? I suppose it would be grounded in the promises of God—it is causally impossible that the promises of God not be fulfilled. Or maybe oeconomic necessity would be better here than causal necessity—however, I think oeconomic necessity is a special case of nomic necessity.

I am grateful to Trent Dougherty for a lot of discussions of certainty, out of which this post flows.