
Saturday, May 11, 2024

What is it like not to be hedging?

Plausibly, a Christian commitment prohibits hedging. Thus in some sense even if one’s own credence in Christianity is less than 100%, one should act “as if it is 100%”, without hedging one’s bets. One shouldn’t have a backup plan if Christianity is false.

Understanding exactly what this means is difficult. Suppose Alice has Christian commitment, but her credence in Christianity is 97%. If someone asks Alice her credence in Christianity, she should not lie and say “100%”, even though that is literally acting “as if it is 100%”.

Here is a more controversial issue. Suppose Alice has a 97% credence in Christianity, but has the opportunity to examine a piece of evidence that will settle the question one way or the other—it will make her 100% certain Christianity is true or 100% certain it’s not. (Maybe she has an opportunity for a conversation with God.) If she were literally acting as if her credence were 100%, there would be no point in looking at any more evidence. But that seems the wrong answer. Refusing to look seems to be a way of being scared that the evidence will refute Christianity, and that kind of fear is opposed to the no-hedge attitude.

Here is a suggestion about how no-hedge decision-making should work. When I think about my credences, say in the context of decision-making, I can:

  1. think about the credences as psychological facts about me, or

  2. regulate my epistemic and practical behavior by the credences (use them to compute expected values, etc.).

The distinction between these two approaches to my credences is really clear from a third-person perspective. Bob, who is Alice’s therapist, thinks about Alice’s credences as psychological facts about her, but does not regulate his own behavior by these credences: Alice’s credences have a psychologically descriptive role for Bob but not a regulative role for Bob in his actions. In fact, they probably don’t even have a regulative role for Bob when he thinks about what actions are good for Alice. If Alice has a high credence in the danger of housecats, and Bob does not, Bob will not encourage Alice to avoid housecats—on the contrary, he may well try to change Alice’s credence, in order to get Alice to act more normally around them.

So, here is my suggestion about no-hedging commitments. When you have a no-hedging commitment to a set of claims, you regulate your behavior by them as if the claims had credence 100%, but when you take the credences into account as psychological facts about you, you give them the credence they actually have.

(I am neglecting here a subtle issue. Should we regulate our behavior by our credences or by our opinion about our credences? I suspect that it is by our credences—else a regress results. If that’s right, then there might be a very nice way to clarify the distinction between taking credences into account as psychological facts and taking them into account as regulative facts. When we take them into account as psychological facts, our behavior is regulated by our credences about the credences. When we take them into account regulatively, our behavior is directly regulated by the credences. If I am right about this, the whole story becomes neater.)

Thus, when Alice is asked what her credence in Christianity is, her decision about how to answer depends on the credence qua psychological fact. Hence, she answers “97%”. But when Alice decides whether or not to engage in Christian worship in a time of persecution, her decision would normally depend on the credence qua regulative, and so she does not take into account the 3% probability of being wrong about Christianity—she just acts as if Christianity were certain.

Similarly, when Alice considers whether to look at a piece of evidence that might raise or lower her credence in Christianity, she does need to consider what her credence is as a psychological fact, because her interest is in what might happen to her actual psychological credence.

Let’s think about this in terms of epistemic utilities (accuracy scoring rules). Suppose Alice is proceeding “normally”, without any no-hedge commitment, and is evaluating the expected epistemic value of examining some piece of evidence—a question worth asking, since examining it may be practically costly (it may involve digging at an archaeological site, or studying a new language). She needs to take her credences into account in two different ways: psychologically, when calculating the potential epistemic gain from her credence getting closer to the truth and the potential epistemic loss from it getting further away, and regulatively, when calculating the expectations and when thinking about what is or is not true.

Now on to some fun technical stuff. Let ϕ(r,t) be the epistemic utility of having credence r in some fixed hypothesis of interest H when the truth value is t (which can be 0 or 1). Let’s suppose there is no as-if stuff going on, and I am evaluating the expected epistemic value of examining whether some piece of evidence E obtains. Then if P indicates my credences, the expected epistemic utility of examining the evidence is:

  1. VE = P(H)(P(E|H)ϕ(P(H|E),1)+P(∼E|H)ϕ(P(H|∼E),1)) + P(∼H)(P(E|∼H)ϕ(P(H|E),0)+P(∼E|∼H)ϕ(P(H|∼E),0)).

Basically, I am partitioning logical space based on whether H and E obtain.
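
For concreteness, here is a minimal Python sketch of this computation (my own illustration, not anything in the original post): I parametrize the evidence by the two likelihoods P(E|H) and P(E|∼H), and pass the scoring rule in as a function.

```python
# Sketch of the formula above: partition logical space by whether H and E
# obtain, score the posterior credence against the truth value in each cell,
# and weight by the cell's probability. All names are my own choices.

def expected_value_of_evidence(p_h, p_e_given_h, p_e_given_not_h, phi):
    """Expected epistemic utility of examining E, for one credence function P."""
    p_not_h = 1 - p_h
    # Posteriors by Bayes' theorem.
    post_e = p_h * p_e_given_h / (p_h * p_e_given_h + p_not_h * p_e_given_not_h)
    post_not_e = p_h * (1 - p_e_given_h) / (
        p_h * (1 - p_e_given_h) + p_not_h * (1 - p_e_given_not_h))
    return (p_h * (p_e_given_h * phi(post_e, 1)
                   + (1 - p_e_given_h) * phi(post_not_e, 1))
            + p_not_h * (p_e_given_not_h * phi(post_e, 0)
                         + (1 - p_e_given_not_h) * phi(post_not_e, 0)))
```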

Now, in the as-if case, the agent in effect has two sets of credences, psychological and regulative, and they come apart. Let Ψ and R respectively be the two. Then the formula above becomes:

  1. VE = R(H)(R(E|H)ϕ(Ψ(H|E),1)+R(∼E|H)ϕ(Ψ(H|∼E),1)) + R(∼H)(R(E|∼H)ϕ(Ψ(H|E),0)+R(∼E|∼H)ϕ(Ψ(H|∼E),0)).

The no-hedging case that interests us makes R(H) = 1: we regulatively ignore the possibility that the hypothesis is false. Our expected value of examining whether E obtains is then:

  1. VE = R(E|H)ϕ(Ψ(H|E),1) + R(∼E|H)ϕ(Ψ(H|∼E),1).

Let’s make a simplifying assumption that the doctrines that we are as-if committed to do not affect the likelihoods P(E|H) and P(E|∼H) (granted, the latter may be a bit fishy if P(H) = 1, but let’s suppose we have Popper functions or something like that to take care of that), so that R(E|H) = Ψ(E|H) and R(E|∼H) = Ψ(E|∼H).

We then have:

  1. Ψ(H|E) = Ψ(H)R(E|H)/(R(E|H)Ψ(H)+R(E|∼H)Ψ(∼H)).

  2. Ψ(H|∼E) = Ψ(H)R(∼E|H)/(R(∼E|H)Ψ(H)+R(∼E|∼H)Ψ(∼H)).

Assuming Alice has a preferred scoring rule, we now have a formula that can guide Alice in deciding what evidence to look at: she can just check whether VE is bigger than ϕ(Ψ(H),1), her current score regulatively evaluated, i.e., evaluated in the as-if-H-is-true way. If VE is bigger, it’s worth checking whether E is true.

One might hope for something really nice: for instance, that if the scoring rule ϕ is strictly proper, then it is always worth looking at the evidence. Not so, alas.

It’s easy to see that VE beats the current epistemic utility when E is perfectly correlated with H, assuming ϕ(x,1) is strictly increasing in x.

Surprisingly and sadly, numerical calculations with the Brier score ϕ(x,t) = −(x−t)² show that if Alice’s credence is 0.97, then unless the Bayes factor of the evidence is very far from 1, her current epistemic utility beats VE, and so no-hedging Alice should not look at the evidence except when it is extreme. Interestingly, though, if Alice’s current credence were 0.5, then Alice should always look at the evidence. I suppose the reason is that if Alice is at 0.97, there is not much room for her Brier score to go up assuming the hypothesis is correct, but there is a lot of room for her score to go down. If we took seriously the possibility that the hypothesis could be false, it would be worth examining the evidence just in case the hypothesis is false. But that would be a form of hedging.
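
For the curious, here is a minimal sketch of the numerical check (my own code, not the author’s; the Bayes factor of 1.5, from likelihoods 0.6 and 0.4, is an illustrative choice):

```python
# No-hedging value of examining E, per the formula above with R(H) = 1,
# using the two Bayes formulas for the psychological posteriors and the
# Brier score. The specific likelihoods below are illustrative choices.

def brier(x, t):
    return -(x - t) ** 2

def no_hedge_value(psi_h, p_e_given_h, p_e_given_not_h):
    """VE with R(H) = 1: the expectation is taken as if H were true."""
    psi_not_h = 1 - psi_h
    post_e = psi_h * p_e_given_h / (
        psi_h * p_e_given_h + psi_not_h * p_e_given_not_h)
    post_not_e = psi_h * (1 - p_e_given_h) / (
        psi_h * (1 - p_e_given_h) + psi_not_h * (1 - p_e_given_not_h))
    return (p_e_given_h * brier(post_e, 1)
            + (1 - p_e_given_h) * brier(post_not_e, 1))

for psi_h in (0.97, 0.5):
    current = brier(psi_h, 1)  # current score, regulatively evaluated
    ve = no_hedge_value(psi_h, 0.6, 0.4)  # Bayes factor 1.5
    print(psi_h, round(current, 5), round(ve, 5), ve > current)
# At 0.97: current (-0.0009) beats VE (about -0.00103), so she does not look.
# At 0.5: VE (-0.24) beats current (-0.25), so she looks.
```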

Wednesday, August 1, 2018

"Commitment": Phenomenology at the rock wall

If you watch people rock climbing enough (in my case, only in the gym, as I have seen disturbing outdoor climbing safety numbers, while gym climbing safety numbers are excellent), you will hear a climber get advised to “commit” more. The context is usually a dynamic move for a hold, one where the climber’s momentum is essential to getting into position to secure the hold, with the paradigm example being a literal jump. The main physiological brunt of the advice to “commit” is to put greater focused effort into jumping higher, reaching further, grabbing more strongly, etc. But the phenomenological brunt of the advice is to will more strongly, with greater, well, commitment. And sometimes when one misses a move, one feels the miss as due to a lack of commitment, a failure to will strongly enough.

Although I once heard someone at the gym say “Commit like you’re married to it”, the notion of commitment here seems quite different from the ordinary notion tied to relationships and long-term projects. The most obvious difference is that of time. In the ordinary case, a central component of commitment is a willingness to stick to something for an extended period of time. The climber’s “commitment” lasts at most a second or two. This results in what seems to be a qualitatively different phenomenology, though it could still be that the difference is merely quantitative, much as the difference between living through a week and living through a second is quantitative even though it feels qualitative.

But there seems to be a more obviously qualitative difference. The rock-climbing sense of “commit” is essentially occurrent: there is an actual expending of effort. But the ordinary sense is largely dispositional: one would expend the effort if it were called for. Moreover, the rock-climbing sense of the word is typically tied to near-maximal effort, while in the ordinary sense one counts as committed to a project as long as one is willing to expend a reasonable amount of effort. In other words, when it would be unreasonable to expend a certain degree of effort, in the ordinary sense of the word one is not falling short of commitment: the employee unwilling to sacrifice a marriage to the job is not short on commitment to the job. The rock-climbing sense of commitment is not tied to reasonableness: a climber who holds back on a move out of a reasonable judgment that near-maximal effort would be too likely to result in an injury is failing to commit on the move—and typically is doing the right thing under the circumstances (of course, in both senses of the word “commit”, there are times when failure to commit is the right thing to do).

Finally, the ordinary sense of commitment divides into normative and non-normative commitment. Normative commitment is a kind of promise—implicit perhaps—while non-normative commitment is an actual dispositional state. Each can exist without the other (though it is typically vicious for the normative to exist without the dispositional). In the climbing case, the normative component is normally missing: one hasn’t done anything promise-like.

Here is a puzzle. Bracket the cases where one holds back to avoid an over-use or impact injury (I would guess, without actually looking up the medical data, that when one is expending more effort, one is more tense and an injury is likely to be worse). One also understands why someone might fail to commit to a job or a relationship, in either the normative or the non-normative sense: a better thing might come one’s way. But when one is in the middle of a strenuous climbing move, one typically isn’t thinking that one might have something better to do with this second of one’s time. So: why would someone fail to commit?

My phenomenology answers in two ways. First and foremost, fear of failure. This is rationally puzzling. One knows that a failure to commit to a climbing move increases the probability of failure. So at first sight, it seems like the case of someone who goes to a dog show out of a fear of dogs (which is different from the understandable case of someone who goes to a dog show because of a fear of dogs, e.g., in order to fight the fear or in order to have a scary experience). But I think there is actually something potentially rational here. There are two senses of failure. One sense is external: one is failing to meet some outside standard. The second sense is internal to action: one is failing to do what one is trying to do. The two can be at cross-purposes: if I have decided to throw a table tennis match, my gaining a point can be a failure in the action-internal sense but a success in the external sense.

In climbing, outside of competitive settings, it is the action-internal sense that tends to be more salient: we set our own goals, and what constitutes them as our goals is our willing of them. Is my goal to climb this route, to see how hard it is, or just to get some exercise? It’s all in what I am trying to do.

But in the action-internal sense, generally the badness of a failure increases with the degree to which one is trying. If I am not trying very hard, my failure is not very bad in the action-internal sense. (Of course, in some cases, my failure to try very hard might be bad in some external sense, even a moral one—but it might not be.) So by trying less hard, one is minimizing the badness of a failure. There is a complex rational calculus here, whether or not one takes risk aversion into account. It is easy to see how one might decide, correctly or not, that giving significantly less than one’s greatest effort is the best option (and this is true even without risk aversion).

The secondary, but also interesting, reason that my phenomenology gives for a refusal to commit is that effort can be hard and unpleasant.

Monday, January 27, 2014

Conditional commitments

Compare:

  1. Assuming you pass at least one of your classes this spring, we will hire you in May.
  2. We will hire you in May.

To a literalist, it sounds like 2 makes a stronger commitment than 1.

But suppose that you get the lowest passing grade in one of your classes, and Fs in all the others. Then if I had said 2, I could say: "Well, of course, but I was assuming half-decent performance." But if I had said 1, I couldn't say that!

What's going on? Normally when I say what I will do, there are some unstated conditions. But when I get into the business of stating conditions, I had better list all of them, or at least all the ones that are likely to be as relevant as the ones I list.

Saturday, July 10, 2010

Certainty of faith

The Christian tradition is clear on Christian faith being certain, a certainty due to the Holy Spirit, either in revealing the doctrines of faith or in enlightening the believer's mind or both. But as an empirical fact, Christians, even ones with a genuine living faith, do struggle in faith, and at times their belief appears quite uncertain.

If the certainty of faith is defined by a feeling of certainty, then Christians don't have certainty. But that is surely the wrong way to think about it. After all, the Christian who has living faith has faith even while asleep (otherwise, no one who died in sleep could be saved!). I suppose that was a cheap shot, since one could define certainty in terms of a disposition to feel certain in appropriate circumstances, but the whole idea of taking a feeling to be central to faith misses the phenomenon of the dark night of faith.

I want to suggest that certainty is tied to ungiveupability. In other words, a belief is the less certain, the more willing we would be to give it up.

One way of developing this is to spell it out in terms of commitment. My belief that p is certain to the extent that I am committed to believing p. Now, when I am committed to doing something, I am typically committed to doing it except in exceptional circumstances. And the degree of commitment can be measured by the number of exceptional circumstances and their probability—the more exceptions there are and the more likely they are to befall me, the less committed I am. Maximal commitment to A, then, would be commitment to A in all possible circumstances in which one could find oneself.

However, commitment is primarily a normative matter. One can be normatively committed to doing something even if one has no psychological attitude in favor of it. I can be normatively committed to never eating meat while (intentionally) munching on a steak—all it would take would be for me to have just promised never to eat meat. We call this "infidelity to one's commitments", and it is essential to this that the commitment still be binding on one. If commitment is a normative matter of this sort, then the fact that Christians may feel uncertain is beside the point. What matters is that they are committed, i.e., that they are under an obligation (maybe we need to add: one that they themselves undertook), not that they feel committed. In other words, the Christian's certainty consists in its being the case that she is obliged to believe, no matter what.

But that can't be the whole story about certainty. The reason is that this story is compatible with unbelief. After all, if one can be unfaithful to one's commitments, and if the certainty of belief were just commitment to believe, then one could have the certainty of belief without belief. And that's absurd. We could add that to have the certainty of belief you need to believe, but the link between the belief and the certainty surely needs to be tighter. Nor will it help to add that the commitment has to be subjectively acknowledged (this subjective acknowledgment of a commitment is sometimes itself called "commitment"). For I can acknowledge an obligation not to eat meat while munching on a steak.

Here is another suggestion. Instead of reading the certainty normatively, read it causally. Here is one take on it. A person x is certain in believing p to the degree that it would be difficult to make her cease to believe p. If, given the present state of the world, it is altogether causally impossible for x to cease to believe p, then we can say that x has absolutely unshakeable certainty that p. Maybe some who believe in the doctrine Reformed folks call "the perseverance of the saints" believe that the elect have this sort of certainty. But this merely causal unshakeability may not be suited to an analysis of certainty, which in the context of the Christian faith seems to have an epistemic component: not only is it that the Christian's faith can't (in some sense to be explained shortly) be shaken, but it is appropriate, epistemically and morally, that it not be shaken. Moreover, I do not think we should commit to "the perseverance of the saints" as understood by the Reformed—there are too many warnings in Scripture about the danger of falling away (though I know the Reformed have their own readings of those). Therefore, we should allow for the possibility that x might freely choose to be irrational and stop believing.

The above remarks fit well with the following account (which I am not actually endorsing as a complete story; I like the thought that this account and my previous normative account should be conjoined):

  • x has appropriately unshakeable certainty that p if and only if x believes that p and it is causally necessary that x continue to epistemically obligatorily (or, for what would in practice tend to be a weaker definition: appropriately) believe that p so long as x refrains from gravely immoral actions.

Why "gravely"? Because the certainty would be too fragile if it could be lost by any immoral action.

Notice what appropriate unshakeability does not require. First, it does not require a Bayesian credence of one in p. Nor does it require that there be no metaphysically possible evidence E such that P(p|E) is small. All that's required is that it be causally necessary either that x not observe E, or that, should x observe E, something will turn up (by the grace of God, immediate or mediate) that would continue to render it epistemically obligatory (or at least permissible) to believe p. The above story is compatible with multiple explanations of the necessity claim. For instance, maybe the Holy Spirit enlightens the mind in such a way as to internally provide evidence stronger than any possible counter-evidence. Or maybe God is committed to ensuring that one will not meet with any strong counter-evidence. Or maybe one has credence 1 and God will help one remain Bayesian-rational and not change that 1.

Whence the causal necessity? I suppose it would be grounded in the promises of God—it is causally impossible that the promises of God not be fulfilled. Or maybe oeconomic necessity would be better here than causal necessity—however, I think oeconomic necessity is a special case of nomic necessity.

I am grateful to Trent Dougherty for a lot of discussions of certainty, out of which this post flows.

Thursday, July 2, 2009

Life is short

It is a truism that life is short. But this truism is in tension with the very plausible claim that life-long commitments are hard. So which should go?

Or can it be, perhaps, that life-long commitments are hard—for those who have not existentially appropriated the shortness of life? If so, then we would have the following prediction: In societies where individuals are socially isolated from the fact of death, well-kept life-long commitments are more rare (either because life-long commitments are more rare, or because they are kept less well). I suppose this is one of those predictions that it would be really hard to make sufficiently precise to check.

Thursday, August 21, 2008

Intuitions on lying and deception

My intuition that lying is significantly different from some other forms of deception is driven by an intuition I have about speech being special vis-à-vis the virtue of honesty.

Consider: "She told us she is going to go to Cracow, and she is an utterly honest person, so even though we are her enemies, we can rely on her going to some city named Cracow at least at some point in the future." This seems a reasonable thing to say.

But consider: "Her footprints at this intersection lead to Cracow. She is an utterly honest person, so she must be going to Cracow." That is surely mistaken reasoning. It is not a sign of dishonesty that one lays a false trail, unless one has promised (implicitly or explicitly) not to do so.

The tie with promises seems significant to me. An honest person only makes promises that she intends to keep.

Now, let us suppose that George prefaces every assertion with: "I promise that I will now only say something sincere." That would be dreadfully annoying (there are characters in fiction who do this kind of thing). Part of the reason for the annoyance is that it is quite unnecessary. The commitment to speak only sincerely is already there in the assertion that follows the preface.

As our Savior told us, our yeas should be yeas, and our nays, nays. Nothing more is needed, because our yeas and nays already include a commitment to speak sincerely. This commitment is part and parcel of making an assertion rather than musing out loud, asking a question, making a promise, quoting a line of poetry, etc. Indeed, much or even all of what distinguishes an assertion from other speech acts is precisely this commitment to speak only the truth. (Actors on stage do not make assertions or promises.)

Granted, sometimes we emphatically do promise to speak the truth in some matter. I think that is not a sign that we ordinarily have no such commitment. Rather, the promise is a moral-gravity booster, in the way in which taking an oath is a legal-gravity booster (if one speaks falsely under oath, one commits perjury, instead of merely hampering an investigation, etc.). One could similarly boost the moral gravity of ordinary promises by promising to keep the promise. To boost the moral gravity of an obligation is simply to bring it about that it would be a greater offense to go against the obligation.

If I am right that asserting p is normatively equivalent to promising to say only the truth (or maybe only something one believes) and then saying a sentence that expresses p, and if I am right that an honest person does not make promises she does not intend to keep, then an honest person does not lie. But various non-linguistic kinds of deceit involve no commitment, explicit or implicit, that the deceiver would be breaking, and hence under some circumstances will be compatible with honesty.