
Monday, March 11, 2024

Trust versus prediction

What is the difference between trusting that someone will ϕ and merely predicting their ϕing?

Here are two suggestions that don’t quite pan out.

1. In trusting, you have to have a pro-attitude towards ϕing. But this is false. One can trust that a referee will make a fair decision even while hoping for a decision in one's own favor. And you can trust that someone who promised you a punishment will mete it out if you deserve it, even if you would rather they didn't.

2. In trusting, you rely on the person’s ϕing. But this is not always true. A promised benefit might be such that it doesn’t affect any of your actions, but you can still trust you will receive it.

But here is an idea I like. In trusting, you believe that the person will intentionally ϕ as part of her proper functioning, and you believe this on account of the person’s possessing the relevant proper functional disposition. In central cases, “proper functioning” can be replaced with “expression of virtue”, but trust can include non-moral proper function.

A consequence of this account is that it is impossible to trust someone to do wrong, since wrongdoing is never a part of a person’s proper functioning. For trust-based theories of promises, this makes it easy to see why promises to do wrong are null and void: for it makes no sense to solicit trust where trust is impossible.

This account of trust gives a nice extended sense of trust in things other than people. Just drop “intentionally” and “person”. In an extended sense, you can trust a dog, a carabiner, a book, or anything else that has a proper function. This seems right: we certainly do talk of trust in this extended sense.

Monday, November 15, 2021

Trust and scepticism

To avoid scepticism, we need to trust that human epistemic practices and reality match up. This trust is clearly at least a part of a central epistemic virtue.

Now, trusting persons is a virtue, the virtue of faith. But trusting in general, apart from trusting persons, is not. Theism can thus neatly account for how the trusting that is at the heart of human epistemic practices is virtuous: it is an implicit trust in our creator.

Thursday, January 30, 2020

Moles and traitors

Consider two different spies:

  1. Betrayer: Alice is in a position of trust among the enemy. She is then recruited to work against those who trust her.

  2. Mole: Bob is not yet in a position of trust among the enemy, but he is recruited to gain their trust, with a view to eventually working against them.

I assume that the enemy is morally in the wrong, and that justice allows our side to work to undermine the enemy. But I think there is a moral problem with moles that betrayers don't have, which we can get at by considering a parallel distinction between two cases of people who promise the wrong thing:

  1. Carl realizes that a promise he made is one that it is morally wrong to keep, and hence he does not keep it.

  2. Danielle makes a promise knowing that it would be morally wrong to keep it, without intending to keep it.

Carl is acting well. He shouldn’t have made the promise, but since a promise of immoral activity is null and void, he rightly refuses to keep the promise. Carl may or may not have been culpable for making the promise, but in neither case should he keep it.

Danielle acts badly. She is insincerely gaining the trust of people. Her action is bad even if she knows that the promise is null and void.

Alice the Betrayer is like Carl. Alice attained a position of trust in a morally corrupt hierarchy. She shouldn’t have signed up for that. But whether she is culpable for that or not, it is right for her to go against that trust. Unjust commitments are null and void. While her fellows may in fact trust her to keep working on their side, they shouldn’t.

However, Bob is working to gain a trust he intends not to keep. This seems morally bad, even though it is a trust that he shouldn’t keep.

Sunday, August 4, 2019

Belief, testimony and trust

Suppose that to believe a proposition is to have a credence in that proposition above some (perhaps contextual) threshold pb where pb is bigger than 1/2 (I think it’s somewhere around 0.95 to 0.98). Then by the results of my previous post, because of the very fast decay of the normal distribution, most propositions with credence above the threshold pb have a credence extremely close to pb.
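
To make the decay point concrete, here is a minimal simulation sketch. The distributional assumptions are mine, purely for illustration: the log-odds of our credences are normally distributed with mean 0 and the standard deviation of about 1.63 that comes up in the October 2015 post below.

```python
# Illustrative sketch only (assumptions mine, not from the post):
# log-odds of credences ~ Normal(0, 1.63). Conditional on exceeding the
# belief threshold pb, most credences land only slightly above pb.
import numpy as np

rng = np.random.default_rng(0)
pb = 0.95
log_odds = rng.normal(loc=0.0, scale=1.63, size=1_000_000)
credences = 1 / (1 + np.exp(-log_odds))  # logistic transform to (0,1)

above = credences[credences > pb]
print(f"fraction above pb:        {above.size / credences.size:.3f}")
print(f"median credence above pb: {np.median(above):.3f}")          # ~0.97
print(f"90th percentile above pb: {np.percentile(above, 90):.3f}")  # ~0.99
# The normal tail decays so fast that the bulk of above-threshold
# credences hug the threshold itself.
```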

Now suppose I assert precisely when my credence is above the threshold pb. If you trusted my rationality and honesty perfectly and had no further relevant evidence, it would make sense to set your credences to mine when I tell you something. But normally, we don’t tell each other our credences. We just assert. From the fact that I assert, given perfect trust, you could conclude that my credence is probably very slightly above pb. Thus you would set your credence to slightly above pb, and in particular you would believe the proposition I asserted.

But in practice, we don’t trust each other perfectly. Thus, you might think something like this about my assertion:

If Alex was honest and a good measurer of his own credences, his credence was probably a tiny bit above pb, and if I was certain of that, I'd make that my credence. But he might not have been honest, or he might have been self-deceived, in which case his credence could very well be significantly below pb, especially given the fast decay in the distribution of credences, which yields high priors for the credence being significantly below pb.

Since the chance of dishonesty or self-deceit is normally not all that tiny, your overall credence would be below pb. Note that this is the case even for people we take to be decent and careful interlocutors. Thus, in typical circumstances, if we assert at the threshold for belief, even interlocutors who think of us as ordinarily rational and honest shouldn’t believe us.

This seems to me to be an unacceptable consequence. It seems to me that if someone we take to be at least ordinarily rational and honest tells us something, we should believe it, absent defeaters. Given the above argument, it seems that the credential threshold for assertion has to be significantly higher than the credential threshold for belief. In particular, it seems, the belief norm of assertion is insufficiently strong.

Intuitively, the knowledge norm of assertion is strong enough (maybe it’s too strong). If this is right, then it follows that knowledge has a credential threshold significantly above that for belief. Then, if someone asserts, we will think that their credence is just slightly above the threshold for knowledge, and even if we discount that because of worries that even an ordinarily decent person might not be reporting their credence correctly, we will likely stay above the threshold for belief. The conclusion will be that in ordinary circumstances if someone asserts something, we will be able to believe it—but not know it.

I am not happy with this. I would like to be able to say that we can go from another’s assertion to our knowledge, in cases of ordinary degrees of trust. I could just be wrong about that. Maybe I am too credulous.

Here is a way of going beyond this. Perhaps the norms of assertion should be seen not as all-or-nothing, but as more complex:

  1. When the credence is at or below pb, we are forbidden to assert.

  2. When the credence is above pb, but close to pb, we have permission to assert, but we also have a strong defeasible reason not to assert, with the strength of that reason increasing to infinity as the credence gets closer to pb.

If someone abides by these norms, they will be unlikely to assert a proposition whose credence is only slightly above pb, because they will have a strong reason not to. Thus, their asserting in accordance with the norms will give us evidence that their credence is significantly above pb. And hence we will be able to believe, given a decent degree of trust.

Note, however, that the second norm will not apply if there is a qualifier like “I think” or “I believe”. In that case, the earlier argument will still work. Thus, we have this interesting consequence: If someone trustworthy merely says that they believe something, that testimony is still insufficient for our belief. But if they assert it outright, that is sufficient for our belief.

This line of thought arose out of conversations I had with Trent Dougherty a number of years ago and my wife more recently. I don’t know if either would endorse my conclusions, though.

Friday, March 8, 2019

Obligations of friendship

We are said to have various obligations, especially of benevolence, to our friends precisely because they are our friends. Yet this seems mistaken to me if friendship is by definition mutual.

Suppose you and I think we really are friends. We do all the things good friends do together. We think we are friends. And you really exhibited with respect to me, externally and internally, all the things that good friends exhibit. But one day I realize that the behavior of my heart has not met the minimal constitutive standards for friendship. Perhaps though I had done things to benefit you, they were all done for selfish ends. And thus I was never your friend, and if friendship is mutual, it follows that we weren't ever friends.

At the same time, I learn that you are in precisely the kind of need that triggers onerous obligations of benevolence in friends. And so I think to myself: “Whew! I thought I would have an obligation to help, but since I was always selfish in the relationship, and not a real friend, I don’t.”

This thought would surely be a further moral corruption. Granted, if I found out that you had never acted towards me as a friend does, but had always been selfish, that might undercut my obligation to you. But it would be very odd to think that finding out that I was selfish would give me permission for further selfishness!

So, I think, in the case above I still would have towards you the kinds of obligations of benevolence that one has towards one’s friends. Therefore, it seems, these obligations do not arise precisely from friendship. The two-sided appearance of friendship coupled with one-sided (on your side) reality is enough to generate these obligations.

Variant case: For years I’ve been pretending to be your friend for the sake of political gain, while you were sincerely doing what a friend does. And now you need my help. Surely I owe it to you!

I am not saying that these sorts of fake friendships give rise to all the obligations normally attributed to friendship. For instance, one of the obligations normally attributed to friendship is to be willing to admit that one is friends with the other person (Peter violated this obligation when he denied Jesus). But this obligation requires real friendship. Moreover, certain obligations to socialize with one's friends depend on the friendship being real.

A tempting thought: Even if friendship is mutual, there is a non-mutual relation of “being a friend to”. You can be a friend to someone who isn’t a friend to you. Perhaps in the above cases, my obligation to you arises not from our friendship, which does not exist, but from your being a friend to me. But I think that’s not quite right. For then we could force people to have obligations towards us by being friends to them, and that doesn’t seem right.

Maybe what happens is this. In friendship, we invite our friends’ trust in us. This invitation of trust, rather than the friendship itself, is what gives rise to the obligations of benevolence. And in fake friendships, the invitation of trust—even if insincere—also gives rise to obligations of benevolence.

So, we can say that we have obligations of benevolence to our friends because they are our friends, but not precisely because they are our friends. Rather, the obligations arise from a part of friendship, the invitation of trust, a part that can exist apart from friendship.

Friday, January 13, 2017

Lying, acting and trust

A spy's message to his handler about troop movements is intercepted. The message is then changed to carry the false information that the infantry will be on the move without artillery support and sent onward. Did those who changed the message lie?

To lie, one must assert. But suppose the handler finds out about the change. Could she correctly say: "The counterintelligence operatives asserted to us that the infantry would be on the move without artillery support?" That just seems wrong. In fact, it seems similar to the oddity of attributing to an actor the speech of a character (though with the important difference that the actor does not typically speak to deceive). The point is easiest to see, perhaps, where there are first person pronouns. If part of the message says: "I will be at the old barn at 9 pm", it is surely false that the counterintelligence staff asserted they will be at the old barn (even though, quite possibly, they will--in order to capture the handler), but it also doesn't seem right to say that the counterintelligence staff asserted that the spy will be there.

The trust account of lying, defended by Jorge Garcia and others, seems to fit well with this judgment. On this account, to lie is to solicit trust while betraying it. But one can only betray a trust in oneself. The counterintelligence operatives, however, did not solicit the handler's trust in themselves: rather, they were relying on the handler's trust in the spy, and that trust the operatives cannot betray.

But there are some difficult edge cases. What if a counterintelligence operative dons a mask that makes him look just like the spy, and speaks falsehoods with a voice imitating the spy? But what if a spy goes to a foreign country with an entirely fictional identity? I am inclined to think that on the trust account the two cases are different. When one imitates the spy, one relies on the faith and credit that the spy has, and one isn't soliciting trust for oneself. When one dresses up as someone who doesn't exist, I think one is trying to gain faith and credit for oneself, and it seems one is lying. But I am not sure where the line is to be drawn.

Tuesday, December 29, 2015

Trusting leaders in contexts of war

Two nights ago I had a dream. I was in the military, and we were being deployed, and I suddenly got worried about something like this line of thought (I am filling in some details--it was more inchoate in the dream). I wasn't in a position to figure out on my own whether the particular actions I was going to be commanded to do are morally permissible. And these actions would include killing, and to kill permissibly one needs to be pretty confident that the killing is permissible. Moreover, only the leaders had in their possession sufficient information to make the judgment, so I would have to rely on their judgment. But I didn't actually trust the moral judgment of the leaders, particularly the president. My main reason in the dream for not trusting them was that the president is pro-choice, and someone whose moral judgment is so badly mistaken as to think that killing the unborn is permissible is not to be trusted in moral judgments relating to life and death. As a result, I refused to participate, accepting whatever penalties the military would impose. (I didn't get to find out what these were, as I woke up.)

Upon waking up and thinking this through, I wasn't so impressed by the particular reason for not trusting the leadership. A mistake about the morality of abortion may not be due to a mistake about the ethics of killing, but due to a mistake about the metaphysics of early human development, a mistake that shouldn't affect one's judgments about typical cases of wartime killing.

But the issue generalizes beyond abortion. In a pluralistic society, a random pair of people is likely to differ on many moral issues. The probability of disagreement will be lower when one of the persons is a member of a population that elected the other, but the probability of disagreement is still non-negligible. One worries that a significant percentage of soldiers have moral views that differ from those of the leadership to such a degree that if the soldiers had the same information as the leaders do, the soldiers would come to a different moral evaluation of whether the war and particular lethal acts in it are permissible. So any particular soldier who is legitimately confident of her moral views has reason to worry that she is being commanded to do things that are impermissible, unless she has good reason to think that her moral views align well with the leaders'. This seems to me to be a quite serious structural problem for military service in a pluralistic society, as well as a serious existential problem.

The particular problem here is not the more familiar one where the individual soldier actually evaluates the situation differently from her leaders. Rather, it arises from a particular way of solving the more familiar problem. Either the soldier has sufficient information by her lights to evaluate the situation or she does not. If she does, and she judges that the war or a lethal action is morally wrong, then of course conscience requires her to refuse, accepting any consequences for herself. Absent sufficient information, she needs to rely on her leaders. But here we have the problem above.

How to solve the problem? I don't know. One possibility is that even though there are wide disparities between moral systems, the particular judgments of these moral systems tend to agree on typical acts. Even though utilitarianism is wrong and Catholic ethics is right, the utilitarian and the Catholic moralist tend to agree about most particular cases that come up. Thus, for a typical action, a Catholic who hears the testimony of a well-informed utilitarian that an action is permissible can infer that the action is probably permissible. But war brings out differences between moral systems in a particularly vivid way. If bombing civilians in Hiroshima and Nagasaki is likely to get the emperor to surrender and save many lives, then the utilitarian is likely to say that the action is permissible while the Catholic will say it's mass murder.

It could, however, be that there are some heuristics that could be used by the soldier. If a war is against a clear aggressor, then perhaps the soldier should just trust the leadership to ensure that the other ius ad bellum conditions (besides the justness of the cause) are met. If a lethal action does not result in disproportionate civilian deaths, then there is a good chance that the judgments of various moral systems will agree.

But what about cases where the heuristics don't apply? For instance, suppose that a Christian is ordered to drop a bomb on an area that appears to be primarily civilian, and no information is given. It could be that the leaders have discovered an important military installation in the area that needs to be destroyed, and that this is intelligence that cannot be disclosed to those who will carry out the bombing. But it could also be that the leaders want to terrorize the population into surrender or engage in retribution for enemy acts aimed at civilians. Given that there is a significant probability, even if it does not exceed 1/2, that the action is a case of mass murder rather than an act of just war, is it permissible to engage in the action? I don't know.

Perhaps knowledge of prevailing military ethical and legal doctrine can help in such cases. The Christian may know, for instance, that aiming at civilians is forbidden by that doctrine. In that case, as long as she has enough reason to think that the leadership actually obeys the doctrine, she might be justified in trusting in their judgment. This is, I suppose, an argument for militaries to make clear their ethical doctrines and the integrity of their officers. For if they don't, then there may be cases where too much disobedience of orders is called for.

I also don't know what probability of permissibility is needed for someone to permissibly engage in a killing.

I don't work in military ethics. So I really know very little about the above. It's just an ethical reflection occasioned by a dream...

Monday, November 30, 2015

Lying, violence and dignity

I've argued that typically the person who is hiding Jews from Nazis and is asked by a Nazi if there are Jews in her house tells the truth, and does not lie, if she says: "No." That argument might be wrong, and even if it's right, it probably doesn't apply to all cases.

So, let's think about the case of the Nazi asking Helga if she is hiding Jews, when she is in fact hiding Jews, and when it would be a lie for her to say "No" (i.e., when there isn't the sort of disparity of languages that I argued is normally present). The Christian tradition has typically held lying to be always wrong, including thus in cases like this. I want to say some things to make it a bit more palatable that Helga does the right thing by refusing to lie.

The Nazi is a fellow human being. Language, and the trust that underwrites it (I was reading this morning that one of the most difficult questions in the origins of language is about the origination of the trust essential to language's usefulness), is central to our humanity. By refusing to betray the Nazi's trust in her through lying, Helga is affirming the dignity of all humans in the particular case of someone who needs it greatly--a human being who has been dehumanized by his own choices and the influence of an inhuman ideology. By attempting to dehumanize Jews, the Nazi dehumanized himself to a much greater extent. Refusing to lie, Helga gives her witness to a tattered child of God, a being created to know and live by the truth in a community of trust, and she gives him a choice whether to embrace that community of trust or persevere on the road of self-destruction through alienation from what is centrally human. She does this by treating him as a trusting human rather than a machine to be manipulated. She does this in sadness, knowing that it is very likely that her gift of community will be refused, and will result in her own death and the deaths of those she is protecting. In so doing she upholds the dignity of everyone.

When I think about this in this way, I think of the sorts of things Christian pacifists say about their eschatological witness. But while I do embrace the idea that we should never lie, I do not embrace the pacifist rejection of violence. For I think that just violence can uphold the dignity of those we do violence to, in a way in which lying cannot. Just violence--even of an intentionally lethal sort--can accept the dignity of an evildoer as someone who has chosen a path that is wrong. We have failed to sway him by persuasion, but we treat him as a fellow member of the human community by violently preventing him from destroying the community that his own wellbeing is tied to, rather than by betraying with a lie the shattered remains of the trustful connection he has to that community.

I don't think the above is sufficient as an argument that lying is always wrong. But I think it gives some plausibility to that claim.

Monday, October 19, 2015

Being trusting

This is a followup on the preceding post.

1. Whenever the rational credence of p is 0.5 on some evidence base E, at least 50% of human agents who assign a credence to p on E will assign a credence between 0.25 and 0.75.

2. The log-odds of the credence assigned by human agents given an evidence base can be appropriately modeled by the log-odds of the rational credence on that evidence base plus a normally distributed error whose standard deviation is small enough to guarantee the truth of 1.

3. Therefore, if I have no evidence about a proposition p other than that some agent assigned credence r on her evidence base, I should assign a credence at least as far from 0.5 as F(r), where:

  • F(0.5) = 0.5
  • F(0.6) = 0.57
  • F(0.7) = 0.64
  • F(0.8) = 0.72
  • F(0.9) = 0.82
  • F(0.95) = 0.89
  • F(0.98) = 0.95
  • F(0.99) = 0.97

4. This is a pretty trusting attitude.

5. So, it is rational to be pretty trusting.

The trick behind the argument is to note that (1) and (2) guarantee that the standard deviation of the normally distributed error on the log-odds is less than 1.63, and then we just do some numerical integration (with Derive) to compute the expected value of the rational credence.
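
For readers who want to redo the computation without Derive, here is a sketch in Python. I reconstruct the integration as follows: premise 1 pins the standard deviation down via logit(0.75)/0.6745 ≈ 1.63 (0.6745 being the 75th percentile of the standard normal), and, assuming a flat prior on the rational log-odds, F(r) is the expected rational credence when the rational log-odds are normally distributed around logit(r).

```python
# Sketch of the numerical integration behind F(r), in Python rather than
# Derive. Reconstruction/assumption: flat prior on the rational log-odds,
# so given assigned credence r the rational log-odds are
# Normal(logit(r), sigma^2), and F(r) is the expected rational credence.
import numpy as np
from scipy import integrate
from scipy.stats import norm

SIGMA = np.log(3) / norm.ppf(0.75)  # logit(0.75)/0.6745 ~ 1.63, from premise 1

def F(r: float) -> float:
    """Expected rational credence, given that an agent assigned credence r."""
    mu = np.log(r / (1 - r))  # log-odds of the assigned credence
    def integrand(t):
        # credence at log-odds mu + SIGMA*t, weighted by the standard normal
        return norm.pdf(t) / (1 + np.exp(-(mu + SIGMA * t)))
    value, _ = integrate.quad(integrand, -10, 10)
    return value

for r in (0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.98, 0.99):
    print(f"F({r}) = {F(r):.2f}")
# Reproduces the table above: F(0.6) ~ 0.57, F(0.9) ~ 0.82, F(0.99) ~ 0.97.
```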

A puzzle about testimony

You weigh a bag of marbles on a scale that you have no information about the accuracy of, and the scale says that the bag weighs 342 grams. If you have no background information about the bag of marbles, your best estimate of the weight of the bag is 342 grams. It would be confused to say: "I should discount for the unreliability of the scale and take my best estimate to be 300 grams." For if one has no information about the scale's accuracy, one should not assume that the scale is more likely to overestimate than to underestimate by a given amount. So far so good. Now, suppose that instead of your using the scale, you give me the bag, I hold it in my hand, and say: "That feels like 340 grams." Again, your best estimate of the weight will now be 340 grams. You don't know whether I am apt to overestimate or underestimate, so it's reasonable to just go with what I said.

But now consider a different case. You have no background information about my epistemic reliability and you have no evidence regarding a proposition p, but I inform you that I have some relevant evidence and I estimate the weight of that evidence at 0.8. It seems that the same argument as before should make you estimate the weight of the evidence available to me at 0.8. But that's all the evidence available right now to either of us, so you should thus assign a credence of 0.8 to p. But the puzzle is that this is surely much too trusting. Given no information about my reliability, you would surely discount, maybe assigning a credence of 0.55 (but probably not much less). Yet, doesn't the previous argument go through? I could be overestimating the weight of the evidence. But I could also be underestimating it. By discounting the probability, you are overestimating the probability of the denial of p, and that's bad.

There is, however, a difference between the weight of evidence and the weight of marbles. The weight of marbles can be any positive real number. And if we take really seriously the claim that there is no background information about the marbles, it could be a negative number as well. So we can reasonably say that I or the scale could equally be mistaken in the upward or the downward direction. However, if we know anything about probabilities, we know that they range between 0 and 1. So my estimate of 0.8 has more possibilities of being an overestimate than of being an underestimate. It could, for instance, be too high by 0.3, with the correct estimate of the weight of my evidence being 0.5, but it couldn't be too low by 0.3 for then the correct estimate would be 1.1. We can, thus, block the puzzling argument for trust. Though that doesn't mean the conclusion of the argument is wrong.
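
A toy computation, using a model of my own rather than anything in the argument above, makes the asymmetry concrete: put a uniform prior on the evidential weight, let the report be the true weight plus symmetric noise, and condition on a report of 0.8.

```python
# Toy illustration (model mine, not from the post): because evidential
# weight is confined to [0,1], a report of 0.8 has more room to be an
# overestimate than an underestimate, so the posterior mean dips below 0.8.
# Assumptions: weight w ~ Uniform(0,1); report = w + Normal(0, 0.15) noise.
from scipy import integrate
from scipy.stats import norm

REPORT, NOISE_SD = 0.8, 0.15

likelihood = lambda w: norm.pdf(REPORT, loc=w, scale=NOISE_SD)
Z, _ = integrate.quad(likelihood, 0, 1)  # normalizing constant over [0,1]
posterior_mean, _ = integrate.quad(lambda w: w * likelihood(w) / Z, 0, 1)

print(f"posterior mean weight given a report of {REPORT}: {posterior_mean:.3f}")
# ~0.77: the cutoff at 1 removes more of the "report underestimates" cases
# than the cutoff at 0 removes of the "report overestimates" cases.
```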

Wednesday, December 31, 2014

Trust in the virtuous and the morality of lying

Helga is well known to be perfectly virtuous. Her best friend Kurt is accused of conspiring to peacefully overthrow a tyrannical government, and will be tortured to death and executed unless Helga can convince government agents that he did no such thing. As a matter of fact, Helga has conclusive first-person evidence that Kurt did no such thing, much as that government deserves to be overthrown.

Suppose that it is sometimes permissible to lie. Then surely lying to save a peaceful conspirator against a tyrannical government would be a paradigm case of permissible lying, and indeed of obligatory lying. Thus, if Helga testifies to government agents that Kurt is not a conspirator, she is doing precisely what a perfectly virtuous person would do if Kurt were a conspirator. Thus if she is known to be perfectly virtuous, her testimony to Kurt's not being a conspirator is unworthy of credence. In fact, her testimony would be more worthy of credence if she were somewhat less virtuous and rigidly opposed to lying in all cases. Thus the permissibility of lying would make the testimony of the virtuous be worthless in a number of high-stakes cases.

But the virtuous are precisely the people whose testimony should carry weight; they are precisely the trustworthy, at least in cases where they are in a position to know whereof they speak. So the conclusion that Helga's truthful testimony about Kurt is worthless is paradoxical. And this paradox gives one reason to reject the premise from which the paradox was derived, namely that it is sometimes permissible to lie.

Tuesday, July 15, 2014

Trust and the prisoner's dilemma

This is pretty obvious, but I never quite thought of it in those terms: The prisoners' dilemma shows the need for the virtue of trust (or faith, in a non-theological sense). In the absence of contrary evidence, we should assume others to act well, to cooperate.
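
To have the structure in front of us, here is the standard payoff matrix (the particular numbers are illustrative, my own choice):

```python
# Illustrative prisoner's dilemma payoffs (numbers my own choice):
# defecting dominates for each player, yet mutual cooperation beats
# mutual defection -- which is why trust is needed to reach it.
PAYOFFS = {  # (my move, your move) -> (my payoff, your payoff)
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

for mine in ("C", "D"):
    for yours in ("C", "D"):
        m, y = PAYOFFS[(mine, yours)]
        print(f"I play {mine}, you play {yours}: I get {m}, you get {y}")
# Whatever you do, I do better by defecting (3 > 2 and 1 > 0), and
# likewise for you; so mutually untrusting play lands us at (1, 1)
# rather than the cooperative (2, 2).
```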

This assumption perhaps cannot be justified epistemically non-circularly, at least not without adverting to theism, since too much of our knowledge rests on the testimony of others, and hence is justified by trust. Our own observations simply are not sufficient to tell us that others are trustworthy. There is too much of a chance that people are betraying us behind our backs, and it is only by relying on theism, the testimony of others, or directly on trust, that we can conclude that this is not so.

It seems to me that the only way out of the circle of trust would be an argument for the existence of a perfect being (or for some similar thesis, like axiarchism) that does not depend on trust, so that I can then conclude that people created by a perfect being are likely to be trustworthy. But perhaps every argument rests on trust, if only a trust in our own faculties?

Saturday, November 16, 2013

Why faith in the testimony of others is loving: Notes towards a thoroughly ethical social epistemology

Loving someone has three aspects: the benevolent, the unitive and the appreciative. (I develop this early on in One Body.) Believing something, and gaining knowledge from another's testimony and teaching, involves all three aspects of love.

Appreciation: If I believe you on testimony, then I accept you as a person who speaks honestly and reasons well. It is a way of respecting your epistemic achievement. This does not mean that a failure to accept your testimony is always unappreciative. I may appreciate you, but have good reason to think that the information you have received is less complete than mine.

Union: Humans are social animals, and our sociality is partly constituted by our joint epistemic lives. To accept your testimony is to be united with you epistemically.

Benevolence: Excelling at our common life of learning from and teaching one another is a part of our flourishing. If I gain knowledge from you, you thereby flourish as my teacher. Thus by learning from you, I benefit not only myself as learner but I benefit you by making you a successful teacher.

We learn from John Paul II's philosophical anthropology that we are essentially givers and accepters of gifts. In giving, epistemically and otherwise, we are obviously benevolent. But also, because it is human nature to be givers, in grateful acceptance of a gift we benefit, unite with and affirm the giver, thereby expressing all three aspects of love.

Monday, August 19, 2013

Trust and lies

You promise to meet me for dinner at 7. We say that the promise normally makes it appropriate for me to trust you'll show up at 7. But that's not quite right. What is more appropriate to trust is that you'll meet me for dinner at 7 or have good moral reason not to be there. This point applies even if I know that you won't have such good moral reason. For that you won't isn't a matter of trust in you, but of prediction.

By the same token, if it can ever be permissible to lie, and you assert something, I never ought to trust you that you are being truthful. Instead, at most I ought to trust that you either are being truthful or have good moral reason to lie.

So if it is ever appropriate to take it on trust alone that you are being truthful, lying is always wrong.

Saturday, April 7, 2012

The improbable and the impossible

This discussion from Douglas Adams' The Long Dark Tea-Time of the Soul (pp. 165-166) struck me as quite interesting:

[Kate:] "What was the Sherlock Holmes principle? 'Once you have discounted the impossible, then whatever remains, however improbable, must be the truth.'"
"I reject that entirely," said Dirk sharply. "The impossible often has a kind of integrity to it which the merely impossible lacks. How often have you been presented with an apparently rational explanation of something that works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, 'Yes, but he or she simply wouldn't do that.'"
"Well, it happened to me today, in fact," replies Kate.
"Ah, yes," said Dirk, slapping the table and making the glasses jump, "your girl in the wheelchair [the girl was constantly mumbling exact stock prices, with a 24-hour delay]--a perfect example. The idea that she is somehow receiving yesterday's stock market prices out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don't know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. ..."

This reminds me very much of the Professor's speech in The Lion, the Witch and the Wardrobe:

Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious that she is not mad. For the moment and unless any further evidence turns up, we must assume that she is telling the truth.

Both Dirk Gently and the Professor think that we need to have significantly greater confidence in what we know about other people's character than in our scientific knowledge of how the non-human world works. This seems to me to be just right. Our scientific knowledge of the world almost entirely depends on trusting others.

So, both C. S. Lewis and Douglas Adams are defending faith in Christ, though of course Adams presumably unintentionally. :-)

Saturday, February 25, 2012

Reasons of trust

Suppose you promise me to do something and suppose I should trust you. Then I have a moral reason not to check whether you did what you promised. Of course, if I have a special responsibility for it, I may also have a moral reason to check. But generally speaking, I think we have an imperfect duty not to check up on people when we should trust them. Moreover, we should trust people unless we have good reason to the contrary. I would be wronging a colleague if, out of the blue, I were to start running his papers through TurnItIn.com to look for plagiarism. Such an action would be a failure to show required trust. It would thus be contrary to collegial love.

Natural love, thus, requires natural faith of us. But our supernatural love for Christ requires supernatural faith of us.

Tuesday, February 7, 2012

Inferring an "is" from an "ought"

You tell me that you saw a beautiful sunset last night. I conclude that you saw a beautiful sunset last night. You are talking about Mother Teresa. I conclude that you won't say that she was a sneaky politician. You promise to bake a pie for the party tomorrow. I conclude that you will bake a pie for the party tomorrow or you will have a good reason for not doing so. I tell a graduate student to read a page of Kant for next class. I conclude that she will read a page of Kant for next class or will have a good reason for not doing so.

All of these are inferences of an "is" from an "ought". You ought to refrain from telling me you saw a beautiful sunset last night, unless of course you did see one. You ought not say that Mother Teresa was a sneaky politician, as she was not. You ought not fail to bake the promised pie, unless you have good reason. The student ought not fail to read the Kant, unless she has good reason.

All of these are of a piece. We have prima facie reason to conclude from the fact that something ought to be so that it is so. In particular, belief on testimony is a special case of the is-from-ought inference.

In a fallen world, all of these inferences are highly defeasible. But defeasible or not, they carry weight. And there is a virtue—both moral and intellectual—that is exercised in giving these inferences their due weight. We might call this virtue (natural) faith or appropriate trust. We also use the term "charity" to cover many of the cases of the exercise of this virtue: To interpret others' actions in such ways as make them not be counterinstances to the is-from-ought inference is to charitably interpret them, and we have defeasible reason to do so.

The inference may generalize outside the sphere of human behavior. A sheep ought to have four legs. Sally is a sheep. So (defeasibly) Sally has four legs.

I used to think that testimony was epistemically irreducible. I am now inclined to think it is reducible to the is-from-ought inference. Seeing it as of a piece with other is-from-ought inferences is helpful in handling testimonial-like evidence that is not quite testimony. For instance, hints are not testimony strictly speaking, but an inference from a hint is relevantly like an inference from testimony. We can say that an inference from a hint is a case of an is-from-ought inference, but a weaker one because the "ought" in the case of a hint is ceteris paribus weaker than the "ought" in the case of assertion. Likewise, inference from an endorsement of a person to the person's worthiness of the endorsement is like inference from testimony, but endorsement of a person is not the same as testimony (I can testify that a person is wonderful without endorsing the person, and I can endorse a person without any further testimony). Again, inference from endorsement is a special case of is-from-ought: one ought not endorse those who are not worthy of endorsement.

If is-from-ought is a good form of inference, the contraposition may-from-is will also be a good form of inference. If someone is doing something, we have reason to think she is permitted to do it. Of course, there are many, many defeaters.

It is an interesting question whether the is-from-ought inference is at all plausible apart from a view like theism or Plato's Platonism on which the world is ultimately explanatorily governed by values. There may be an argument for theism (or Plato's Platonism!) here.

Tuesday, November 1, 2011

When should you adopt an expert's opinion over your own?

Consider two different methods for what to do with the opinion of someone more expert than yourself, on a matter where both you and the expert have an opinion.

Adopt: When the expert's opinion differs from yours, adopt the expert's opinion.

Caution: When the expert's opinion differs from yours, suspend judgment.

To model the situation, we need to assign some epistemic utilities.  The following are reasonable given that the disvalue of a false opinion is significantly worse than the value of a true belief, at least by a factor of ~2.346 in the case of confidence level 0.95, according to the hate-love ratio inequality.
  • Utility of having a true opinion: +1
  • Utility of having a false opinion: approximately -2.346
  • Utility of suspending judgment: 0
Given these epistemic utilities, we can do some quick calculations.  Suppose for simplicity that you're perfect at identifying the expert as an expert (surprisingly, replacing this by a 0.95 confidence level makes almost no difference).  Suppose the expert's level of expertise is 0.95, i.e., the expert has probability 0.95 of getting the right answer.  Then it turns out that Adopt is the better method when your level of expertise is below 0.89, while Caution is the better method when your level of expertise is above 0.89.  Approximately speaking, Adopt is the better method when you're more than about twice as likely to be wrong as the expert; otherwise, Caution is the better method.

In general, Adopt is the better method when your level of expertise is less than e/(D-e(D-1)), where e is the expert's level of expertise and D is the disutility of having a false opinion (which should be at least 2.346 for opinions at confidence level 0.95).  If your level of expertise is higher than that, Caution is the better method.
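
The calculation is easy to reproduce; here is a sketch. The model is my reconstruction: a binary question, your verdict independent of the expert's, and the utilities listed above.

```python
# Sketch reproducing the Adopt-vs-Caution comparison. Reconstruction /
# assumptions: binary question, your verdict independent of the expert's,
# true opinion = +1, false opinion = -D, suspension = 0.
def eu_adopt(y: float, e: float, D: float) -> float:
    # Adopting on disagreement means you always end with the expert's
    # answer: right with probability e, wrong with probability 1 - e.
    return e * 1 + (1 - e) * (-D)

def eu_caution(y: float, e: float, D: float) -> float:
    # Agree and both right: y*e (+1). Agree and both wrong (same wrong
    # answer on a binary question): (1-y)*(1-e) (-D). Disagree: suspend, 0.
    return y * e * 1 + (1 - y) * (1 - e) * (-D)

def caution_threshold(e: float, D: float) -> float:
    # Caution beats Adopt exactly when y > e / (D - e*(D - 1)).
    return e / (D - e * (D - 1))

e, D = 0.95, 2.346
print(f"threshold for Caution: {caution_threshold(e, D):.2f}")  # ~0.89
for y in (0.80, 0.89, 0.95):
    print(y, round(eu_adopt(y, e, D), 3), round(eu_caution(y, e, D), 3))
# With the symmetric utility D = 1, the threshold is e itself (0.95),
# matching the remark below about the -1 utility for false opinions.
print(f"threshold at D = 1: {caution_threshold(e, 1.0):.2f}")
```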

Here is a graph (from Wolfram Alpha) of the level of expertise you need to have (y-axis), versus the expert's level of expertise (x-axis), in order for adopting Caution rather than Adopt to be epistemic-decision-theory rational, where D=2.346.

Here is a further interesting result. If you set the utility of a false opinion to -1, which makes things more symmetric but leads to an improper scoring rule (with undesirable results like here), then it turns out that Adopt is better than Caution whenever your level of expertise is lower than the expert's. But for any utility of false opinion that's smaller than -1, it will be better to adopt Caution when the gap in level of expertise is sufficiently small.
If you want to play with this stuff, I have a Derive worksheet with this. But I suspect that there aren't many Derive users any more.

Monday, October 24, 2011

"I know my Redeemer lives"

It is a conceit of modern secular society that faith is belief in the absence of evidence or knowledge. That is not how Scripture sees faith. The New Testament constantly talks of us knowing God, knowing the grace of Jesus Christ, and knowing all sorts of things that are the content of faith. In Scripture, faith and knowledge are quite compatible. What may not be compatible is faith and vision, or direct apprehension of the truth. In fact, I think the right distinction to draw in a Christian context is between knowing naturally and knowing by faith: both are species of knowledge.

Aristotle in the Rhetoric defines "pistis" ("faith") as a persuasion by means of the character of the speaker. In the New Testament, "faith" has two aspects: there is the aspect of entrusting oneself to Christ and the aspect of believing. The belief aspect fits very well with what Aristotle says: what we believe by faith is that which we believe on the basis of the perfect character of God.

Belief on the basis of another's character can certainly be knowledge. A friend tells me something. She's got the sort of character that I can't imagine her saying it unless she knew it. I believe her. That's "faith", but it's also a species of knowledge.

Tuesday, October 18, 2011

Credulity

I'm going to offer three arguments for a conclusion I found quite counterintuitive when I got to it, and which I still find counterintuitive, but I can't get out of the arguments for it.

Argument 1. There is a game being played in my sight. The player chooses some value (e.g., a number, a pair of numbers, etc.) and gets a payoff that is a function of the value she chose and some facts that I have no information whatsoever about. Moreover, the payoff function is the same for each player, and the facts don't change between players. I see Jones playing and choosing some value v. I don't get to see what payoff Jones gets. What value should I choose? I think there is a very good case that I should choose v, just as Jones did. After all, I know that I have no information about the unknown facts, but for all I know, Jones knows something more about them than I do (if that's not true, then I do know something about the unknown facts, namely that Jones doesn't know anything about them).

Now, suppose that the game is the game of assigning credences (whether these be point values, intervals, fuzzy intervals, etc.) to a proposition p, and that the payoff function is the right epistemic utility function measuring how close one's credence is to the actual truth value of p. If I should maximize epistemic utility, I get the conclusion that if I know nothing about p other than that you assign to it a credence r, then I should assign to it credence r. Note: I will assume throughout this post that the credences we are talking about are neither 0 nor 1--there are some exceptional edge effects in the case of those extreme credences, such as that Bayesian updating won't shift us out of them (we might have special worries about irreversible decisions, which may trump the above argument).
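
One step is worth making explicit: given a strictly proper epistemic utility function, if my expected truth value for p, conditional only on your report, is r, then credence r itself maximizes my expected epistemic utility. A quick check with the Brier score, which is just one such rule, used here for illustration:

```python
# Quick check with the Brier score (one strictly proper scoring rule):
# if my probability for p is r, then adopting credence c = r maximizes
# expected score -- so matching the reported credence is optimal play.
import numpy as np

def expected_brier(c: float, r: float) -> float:
    # Score is -(truth - c)^2; truth is 1 with probability r, else 0.
    return r * -((1 - c) ** 2) + (1 - r) * -(c ** 2)

r = 0.7
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmax([expected_brier(c, r) for c in grid])]
print(f"best credence to adopt when P(p) = {r}: {best:.2f}")  # 0.70
```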

I find this result quite counterintuitive. My own intuition is that when I know nothing about p other than the credence you assign to p, I should assign to p a downgrade of your credence--I should shift your credence closer to 1/2. But this contradicts the conclusion I draw from the above argument.

I can get to the more intuitive result if I have reason to think Jones is less risk averse than I am. In the case of many reasonable epistemic utility measures, risk averseness will push one towards 1/2. So perhaps my intuition that you should downgrade the other's credence, that you should not epistemically trust the other as you trust yourself, comes from an intuition that I am more epistemically risk averse than others. But, really, I have little reason to think that I am more epistemically risk averse than others (though I do have reason to think that I am more non-epistemically risk averse than others).

Argument 2: Suppose I have no information about some quantity Q (say, the number of hairs you've got, the gravitational constant, etc.) other than that Jones' best estimate for Q is r. What should my best estimate for Q be? Surely r. But now suppose I have no information about a proposition p, except that Jones' best estimate for how well p is supported by her evidence is r. Then my best estimate for how well p is supported by Jones' evidence is r. And since I have no evidence to add to the pot, and since my credence should match evidential support (barring some additional moral or pragmatic considerations, which I don't have reason to think apply, since I have no additional information about p), I should have credence r. (Again, it doesn't matter if credences are points or intervals vel caetera.)

Let me make a part of my thinking more explicit. If I have no further information on Q, which Jones estimates to be r, it is equally likely that Jones is under-estimating Q as that Jones is over-estimating Q, so even if I don't trust Jones very much, unless I have specific information that Jones is likely to over-estimate or under-estimate, I should take r as my best estimate of Q. If Q is the degree to which p is supported by Jones' evidence, then the thought is that Jones might over-estimate this (epistemic incautiousness) or Jones might under-estimate it (undue epistemic caution). Here the assumption that we're not working with extreme credences comes in, since, say, if Jones assigns 1, she can't be under-estimating.

Argument 3: This is the argument that got me started on this line of thought. Imagine two scenarios.
Scenario 1: I have partial amnesia—I forget all information relevant to the proposition p, including information as to how reliable I am in judgments of the p sort. And I don't gain any new evidence. But I do find a notebook where I wrote that I assign credence r to p. I am certain the notebook is accurate as to what credence I assigned. What credence should I assign?
Scenario 2: Same as Scenario 1, except that the notebook lists Jones' credence r for p, not my credence. And I have no information on Jones' reliability, etc.

In Scenario 1, I should assign credence r to p. After all, I shouldn't downgrade (I assume upgrading is out of the question) credences that are stored in my memory, or else all my credences will have an implausible downward slide absent new evidence, and it shouldn't matter whether the credence is stored in memory or on paper.

But I should do in Scenario 2 exactly what I would do in Scenario 1. After all, barring information about reliability, why take my past self to be any more reliable than Jones? So, in Scenario 2, I should assign credence r, too. But the partial amnesia is doing no work in Scenario 2 other than ensuring I have no other information about p. So, given no other information about p, I should assign the same credence as Jones.

Final off-the-cuff remark: I am inclined to take this as a way of loving one's neighbor as oneself.