Monday, July 31, 2023

Values of disagreement

We live in a deeply epistemically divided society, with lots of different views, including on some of the most important things.

Say that two people disagree significantly on a proposition if one believes it and one disbelieves it. The deep epistemic division in society includes significant disagreement on many important propositions. But whenever two people significantly disagree on a proposition, one of them is wrong. Being wrong about an important proposition is a very bad thing. So the deep division implies some very bad stuff.

Nonetheless, I’ve been thinking that our deep social disagreement leads to some important advantages as well. Here are three that come to mind:

  1. If two people significantly disagree on a proposition, then by bivalence, one of them is right. There is a value in someone getting a matter right, rather than everyone getting it wrong or suspending judgment.

  2. Given our deep-seated psychological desire to convince others that we’re right, if others disagree with us, we will continue seeking evidence in order to convince them. Thus disagreement keeps us investigating, which is beneficial whether or not we are right. If everyone agreed with us, we would be apt to stop investigating, which would either leave us stuck with a falsehood or at least likely leave us with less evidence of the truth than is available. Moreover, continued investigation is apt to refine our theory, even if the theory was already basically right.

  3. To avoid getting stuck in local maxima in our search for the best theory, it is good if people are searching in very different areas of epistemic space. Disagreement helps make that happen.

Tuesday, February 19, 2019

Conciliationism and natural law epistemology

Suppose we have a group of perfect Bayesian agents with the same evidence who nonetheless disagree. By definition of “perfect Bayesian agent”, the disagreement must be rooted in differences in priors between these peers. Here is a natural-sounding recipe for conciliating their disagreement: the agents go back to their priors, they replace their priors by the arithmetic average of the priors within the group, and then they re-update on all the evidence that they had previously got. (And in so doing, they lose their status as perfect Bayesian agents, since this procedure is not a Bayesian update.)

Since the average of consistent probability functions is a consistent probability function, we maintain consistency. Moreover, the recipe is a conciliation in the following sense: whenever the agents previously all agreed on some posterior, they still agree on it after the procedure, and with the same credence as before. Whenever the agents disagreed on something, they now agree, and their new credence is strictly between the lowest and highest posteriors that the group assigned prior to conciliation.
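
Here is a quick way to see why the recipe behaves this way (a sketch of mine in standard Bayesian notation, assuming each agent's prior gives the shared evidence E a nonzero probability). Writing P̄ for the average of the priors P1, ..., Pn, conditioning the averaged prior on E gives

$$\bar{P}(q \mid E) = \frac{\frac{1}{n}\sum_i P_i(q \wedge E)}{\frac{1}{n}\sum_i P_i(E)} = \sum_i \frac{P_i(E)}{\sum_j P_j(E)}\, P_i(q \mid E).$$

So the post-conciliation credence is a weighted average of the old posteriors, with weights proportional to how likely each agent's prior made the evidence. A weighted average of equal values is just that value, which gives the preservation of agreement, and a weighted average of unequal values with strictly positive weights lies strictly between the smallest and largest of them, which gives the conciliation property.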

Here is a theory that can give a justification for this natural-sounding procedure. Start with natural law Bayesianism, which is an Aristotelian theory holding that human nature sets constraints on what priors count as natural to human beings. Thus, just as it is unnatural for a human being to be ten feet tall, it is unnatural for a human being to have a prior of 10^(-100) for there being mathematically elegant laws of nature. And just as there is a range of heights that is natural for a mature human being, there is a range of priors that is natural for the proposition that there are mathematically elegant laws.

Aristotelian natures, however, are connected with the actual propensities of the beings that have them. Thus, humans have a propensity to develop a natural height. Because of this propensity, an average height is likely to be a natural height. More generally, for any numerical attribute governed by a nature of kind K, the average value of that attribute amongst the Ks is likely to be within the natural range. Likely, but not certain. It is possible, for instance, to have a species whose average weight is too high or too low. But it’s unlikely.

Consequently, we would expect that if we average the values of the prior for a given proposition q over the human population, the average would be within the natural range for that prior. Moreover, as the size of a group increases, we expect the average value of an attribute over the group to approach the average value the attribute has in the full population. Then, if I am a member of the group of disagreeing evidence-sharing Bayesians, it is more likely that the average of the priors for q amongst the members of the group lies within the natural human range for that prior for q than it is that my own prior for q lies within the natural human range for q. It is more likely that I have an unnatural height or weight than that the average in a larger group is outside the natural range for height or weight.

Thus, the prior-averaging recipe is likely to replace priors that are defectively outside the normal human range with priors within the normal human range. And that’s to the good rationally speaking, because on a natural law epistemology, the rational way for humans to reason is the same as the normal way for humans to reason.

It’s an interesting question how this procedure compares to the procedure of simply averaging the posteriors. Philosophically, there does not seem to be a good justification of the latter. It turns out, however, that typically the two procedures give the same result. For instance, I had my computer randomly generate 100,000 pairs of four-point prior probability spaces, and compare the result of prior- to posterior-averaging. The average of the absolute value of the difference in the outputs was 0.028. So the intuitive, but philosophically unjustified, averaging of posteriors is close to what I think is the more principled averaging of priors.
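
For readers who want to replicate something like this experiment, here is a minimal sketch in Python. It is my reconstruction rather than the author's original code: I take a "pair of four-point prior probability spaces" to be two random priors over four outcomes, I fix a simple evidence event and target proposition, and I compare the outputs of prior-averaging and posterior-averaging. The exact average difference depends on these modelling choices, but on this setup it comes out at a few hundredths, in the same ballpark as the figure above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Evidence: "the outcome is one of the first three points"; target: "the outcome is the first point".
EVENT = np.array([True, True, True, False])
TARGET = np.array([True, False, False, False])

def condition(prior, event):
    """Bayesian update of a prior (a probability vector) on an event (a boolean mask)."""
    posterior = prior * event
    return posterior / posterior.sum()

def prior_averaging(p1, p2):
    """Average the two priors, then update the average on the evidence."""
    return condition((p1 + p2) / 2, EVENT)[TARGET].sum()

def posterior_averaging(p1, p2):
    """Update each prior on the evidence, then average the two posteriors."""
    return (condition(p1, EVENT)[TARGET].sum() + condition(p2, EVENT)[TARGET].sum()) / 2

diffs = []
for _ in range(100_000):
    p1, p2 = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))  # two random four-point priors
    diffs.append(abs(prior_averaging(p1, p2) - posterior_averaging(p1, p2)))

print(np.mean(diffs))  # a few hundredths on this setup
```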

The procedure also has an obvious generalization from the case where the agents share the same evidence to the case where they do not. What’s needed is for the agents to make a collective list of all their evidence, replace their priors by averaged priors, and then update on all the items in the collective list.

Tuesday, September 4, 2018

Conciliationism with and without peerhood

Conciliationists say that when you meet an epistemic peer who disagrees with you, you should alter your credence towards theirs. While there are counterexamples to conciliationism, here is a simple argument that, normally, something like conciliationism is correct even without the assumption of epistemic peerhood:

  1. That someone’s credence in a proposition p is significantly below 1/2 is normally evidence against p.

  2. Learning evidence against a proposition typically should lower one’s credence.

  3. So, normally, learning that someone’s credence is significantly below 1/2 should lower one’s credence.

In particular, if your credence is above 1/2, then learning that someone else’s is significantly below 1/2 should normally lower your credence. And there are no assumptions of peerhood here.

The crucial premise is (1). Here is a simple thought: Normally, people’s credences are responsive to evidence. So when their credence is low, that’s likely because they had evidence against the proposition. Now the evidence they had either is or is not evidence you also have. If you know it is not evidence you also have, then learning that they have additional evidence against the proposition should normally provide you with evidence against it, too. If it is evidence you also have, that evidence should normally make no difference. You don’t know which of these is the case, but still the overall force of evidence is against the proposition.

One might, however, have a worry. Perhaps while normally learning that someone’s credence is significantly below 1/2 should lower one’s credence, when that someone is an epistemic peer and hence shares the same evidence, it shouldn’t. But actually the argument of the preceding paragraph shows that as long as you assign a non-zero probability to the person having more evidence, their disagreement should lead you to lower your credence. So the worry only comes up when you are sure that the person is a peer. It would, I think, be counterintuitive to think you should normally conciliate but not when you are sure the other person is a peer.

And I think even in the case where you know for sure that the other person has the same evidence you should lower your credence. There are two possibilities about the other person. Either they are a good evaluator of evidence or not. If not, then their evaluation of the evidence is normally no evidence either for or against the proposition. But if they are a good evaluator, then their evaluating the evidence as being against the proposition normally is evidence that the evidence is against the proposition, and hence evidence that you evaluated it badly. So unless you are sure that they are a bad evaluator of evidence, you normally should conciliate.

And if you are sure they are a bad evaluator of evidence, well then, since you’re a peer, you are a bad evaluator, too. And the epistemology of what to do when you know you’re bad at evaluating evidence is hairy.

Here's another super-quick argument: Agreement normally confirms one's beliefs; hence, normally, disagreement disconfirms them.

Why do I need the "normally" in all these claims? Well, we can imagine situations where you have evidence that if the other person disbelieves p, then p is true. Moreover, there may be cases where your credence for p is 1.

Friday, April 6, 2018

Peer disagreement and models of error

You and I are epistemic peers and we calculate a 15% tip on a very expensive restaurant bill for a very large party. As shared background information, add that calculation mistakes for you and me are pretty much random rather than systematic. As I am calculating, I get a nagging feeling of lack of confidence in my calculation, which results in $435.51, and I assign a credence of 0.3 to that being the tip. You then tell me that you’re not sure what the answer is, but that you assign a credence of 0.2 to its being $435.51.

I now think to myself. No doubt you had a similar kind of nagging lack of confidence to mine, but your confidence in the end was lower. So if all each of us had to go on was our own individual calculation, we’d each have good reason to doubt that the tip is $435.51. But it would be very unlikely that we would both arrive at the very same wrong number, given that our mistakes are random. So the best explanation of why we both got $435.51 is that neither of us made a mistake, and I now believe that $435.51 is right. (This story works better with larger numbers, as there are more possible randomly erroneous outputs, which is why the example uses a large bill.)

Hence, your lower reported credence of 0.2 not only did not push me down from my credence of 0.3, but it pushed me all the way up into the belief range.

Here’s the moral of the story: When faced with disagreement, instead of moving closer to the other person’s credence, we should formulate (perhaps implicitly) a model of the sources of error, and apply standard methods of reasoning based on that model and the evidence of the other’s credence. In the case at hand, the model was that error tends to be random, and hence it is very unlikely that an error would result in the particular number that was reported.
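
Here is a small Monte Carlo sketch of that kind of model at work (my own illustration, with assumptions the post leaves implicit: I treat our reported credences, 0.3 and 0.2, as the probabilities that our respective calculations came out right, and I suppose a botched calculation lands uniformly on one of 5,000 equally plausible wrong amounts). The point is just that, conditional on our two outputs matching, the matched value is almost always the true tip.

```python
import random

random.seed(0)

TRUE_TIP = 0             # code for the correct answer ($435.51 in the story)
WRONG_VALUES = 5_000     # assumed number of equally likely wrong outputs
P_ME_RIGHT, P_YOU_RIGHT = 0.3, 0.2   # assumed chances that each calculation is correct

def calculate(p_right):
    """One calculation: correct with probability p_right, otherwise a uniformly random wrong answer."""
    if random.random() < p_right:
        return TRUE_TIP
    return 1 + random.randrange(WRONG_VALUES)

matches = correct_matches = 0
for _ in range(1_000_000):
    mine, yours = calculate(P_ME_RIGHT), calculate(P_YOU_RIGHT)
    if mine == yours:
        matches += 1
        correct_matches += (mine == TRUE_TIP)

print(correct_matches / matches)   # roughly 0.998
```

With these numbers the exact value is 0.3·0.2 / (0.3·0.2 + 0.7·0.8/5000) ≈ 0.998, which is the sense in which your low report pushes me up rather than down.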

Thursday, October 19, 2017

Conciliationism is false or trivial

Suppose you and I are adding up a column of expenses, but our only interest is the last digit for some reason. You and I know that we are epistemic peers. We’ve both just calculated the last digit, and Carl asks: Is the last digit a one? You and I speak up at the same time. You say: “Probably not; my credence that it’s a one is 0.27.” I say: “Very likely; my credence that it’s a one is 0.99.”

Conciliationists now seem to say that I should lower my credence and you should raise yours.

But now suppose that you determine the credence for the last digit as follows: You do the addition three times, each time knowing that you have an independent 1/10 chance of error. Then you assign your credence as the result of a Bayesian calculation with equal priors over all ten options for the last digit. And since I’m your epistemic peer, I do it the same way. Moreover, while we’re poor at adding digits, we’re really good at Bayesianism—maybe we’ve just memorized a lot of Bayes’ factor related tables. So we don’t make mistakes in Bayesian calculations, but we do at addition.

Now I can reverse engineer your answer. If you say your credence in a one is 0.27, then I know that exactly one of your three calculations must have yielded a one. For if none of your calculations had yielded a one, your credence that the digit is a one would have been very low, and if two of your calculations had yielded a one, your credence would have been quite high. There are now two options: either you came up with three different answers, or you got a one and then two answers that were the same. In the latter case, it turns out that your credence in a one would have been fairly low, around 0.08. So it must be that your calculations yielded a one and then two other numbers.

And you can reverse engineer my answer. The only way my credence could be as high as 0.99 is if all three of my calculations yielded a one. So now we both know that my calculations were 1, 1, 1 and yours were 1, x, y, where 1, x, y are all distinct. So now you aggregate this data, and I, as your peer, do the same. We have six calculations yielding 1, 1, 1, 1, x, y. A Bayesian analysis, given that the chance of error in each calculation is 1/10, yields a posterior probability of 0.997.

So, your credence did go up. But mine went up too. Thus we can have cases where the aggregation of a high credence with a low credence results in an even higher credence.

Of course, you may say that the case is a cheat. You and I are not epistemic peers, because we don’t have the same evidence: you have the evidence of your calculations and I have the evidence of mine. But if this counts as a difference of evidence, then the standard example conciliationists give, that of different people splitting a bill in a restaurant, is also not a case of epistemic peerhood. And if the results of internal calculations count as evidence for purposes of peerhood, then there just can’t be any peers who disagree, and conciliationism is trivial.

Tuesday, January 26, 2016

Conciliation and caution

I assign a credence 0.75 to p and I find out that you assign credence 0.72 to it, despite us both having the same evidence and epistemic prowess. According to conciliationism, I should lower my credence and you should raise yours.

Here's an interesting case. When I assigned 0.75 to p, I reasoned as follows: my evidence prima facie supported p to a high degree, say 0.90, but I know that I could have made a mistake in my evaluation of the evidence, so to be safe I lowered my credence to 0.75. You, being my peer and hence equally intellectually humble, proceeded similarly. You evaluated the evidence at 0.87 and then lowered the credence to 0.72 to be safe. Now when I learn that your credence is 0.72, I assume you were likewise being humbly cautious. So I assume you had some initial higher evaluation, but then lowered your evaluation to be on the safe side. But now that I know that both you and I evaluated the evidence significantly in favor of p, there is no justification for as much caution. As a result, I raise my credence. And maybe you proceed similarly. And if we're both advocates of the equal weight view, thinking that we should treat each other's credences on a par, we will both raise our credence to the same value, say 0.80. As a result, you revise in the direction conciliationism tells you to (but further than most conciliationists would allow) and I revise in the opposite direction to what conciliationism says.

The case appears to be a counterexample to conciliationism. Now, one might argue that I was unfair to conciliationists. It's not uncommon in the literature to define conciliationism as simply the view that both need to change credence rather than the view that they must each change in the direction of the other's credence. And in my example, both change their credence. I think this reading of conciliationism isn't fair to the motivating intuitions or the etymology. Someone who, upon finding out about a disagreement, always changes her credence in the opposite direction of the other's credence is surely far from being a conciliatory person! Be that as it may, I suspect that counterexamples like the above can be tweaked. For instance, I might reasonably reason as follows:

You assign a smaller credence than I, though it's pretty close to mine. Maybe you started with an initial estimate close to but lower than mine and then lowered it by the same amount as I did out of caution. Since your initial estimate was lower than mine, I will lower mine a little. But since it was close, I don't need to be as cautious.
It seems easy to imagine a case like this where the two effects cancel out, and I'm left with the same credence I started with. The result is a counterexample to a conciliationism that merely says I shouldn't stay pat.

Tuesday, December 29, 2015

Trusting leaders in contexts of war

Two nights ago I had a dream. I was in the military, and we were being deployed, and I suddenly got worried about something like this line of thought (I am filling in some details--it was more inchoate in the dream). I wasn't in a position to figure out on my own whether the particular actions I was going to be commanded to do are morally permissible. And these actions would include killing, and to kill permissibly one needs to be pretty confident that the killing is permissible. Moreover, only the leaders had in their possession sufficient information to make the judgment, so I would have to rely on their judgment. But I didn't actually trust the moral judgment of the leaders, particularly the president. My main reason in the dream for not trusting them was that the president is pro-choice, and someone whose moral judgment is so badly mistaken as to think that killing the unborn is permissible is not to be trusted in moral judgments relating to life and death. As a result, I refused to participate, accepting whatever penalties the military would impose. (I didn't get to find out what these were, as I woke up.)

Upon waking up and thinking this through, I wasn't so impressed by the particular reason for not trusting the leadership. A mistake about the morality of abortion may not be due to a mistake about the ethics of killing, but due to a mistake about the metaphysics of early human development, a mistake that shouldn't affect one's judgments about typical cases of wartime killing.

But the issue generalizes beyond abortion. In a pluralistic society, a random pair of people is likely to differ on many moral issues. The probability of disagreement will be lower when one of the persons is a member of a population that elected the other, but the probability of disagreement is still non-negligible. One worries that a significant percentage of soldiers have moral views that differ from those of the leadership to such a degree that if the soldiers had the same information as the leaders do, the soldiers would come to a different moral evaluation of whether the war and particular lethal acts in it are permissible. So any particular soldier who is legitimately confident of her moral views has reason to worry that she is being commanded things that are impermissible, unless she has good reason to think that her moral views align well with the leaders'. This seems to me to be a quite serious structural problem for military service in a pluralistic society, as well as a serious existential problem.

The particular problem here is not the more familiar one where the individual soldier actually evaluates the situation differently from her leaders. Rather, it arises from a particular way of solving the more familiar problem. Either the soldier has sufficient information by her lights to evaluate the situation or she does not. If she does, and she judges that the war or a lethal action is morally wrong, then of course conscience requires her to refuse, accepting any consequences for herself. Absent sufficient information, she needs to rely on her leaders. But here we have the problem above.

How to solve the problem? I don't know. One possibility is that even though there are wide disparities between moral systems, the particular judgments of these moral systems tend to agree on typical acts. Even though utilitarianism is wrong and Catholic ethics is right, the utilitarian and the Catholic moralist tend to agree about most particular cases that come up. Thus, for a typical action, a Catholic who hears the testimony of a well-informed utilitarian that an action is permissible can infer that the action is probably permissible. But war brings out differences between moral systems in a particularly vivid way. If bombing civilians in Hiroshima and Nagasaki is likely to get the emperor to surrender and save many lives, then the utilitarian is likely to say that the action is permissible while the Catholic will say it's mass murder.

It could, however, be that there are some heuristics that could be used by the soldier. If a war is against a clear aggressor, then perhaps the soldier should just trust the leadership to ensure that the other ius ad bellum conditions (besides the justness of the cause) are met. If a lethal action does not result in disproportionate civilian deaths, then there is a good chance that the judgments of various moral systems will agree.

But what about cases where the heuristics don't apply? For instance, suppose that a Christian is ordered to drop a bomb on an area that appears to be primarily civilian, and no information is given. It could be that the leaders have discovered an important military installation in the area that needs to be destroyed, and that this is intelligence that cannot be disclosed to those who will carry out the bombing. But it could also be that the leaders want to terrorize the population into surrender or engage in retribution for enemy acts aimed at civilians. Given that there is a significant probability, even if it does not exceed 1/2, that the action is a case of mass murder rather than an act of just war, is it permissible to engage in the action? I don't know.

Perhaps knowledge of prevailing military ethical and legal doctrine can help in such cases. The Christian may know, for instance, that aiming at civilians is forbidden by that doctrine. In that case, as long as she has enough reason to think that the leadership actually obeys the doctrine, she might be justified in trusting in their judgment. This is, I suppose, an argument for militaries to make clear both their ethical doctrines and the integrity of their officers. For if they don't, then there may be cases where too much disobedience of orders is called for.

I also don't know what probability of permissibility is needed for someone to permissibly engage in a killing.

I don't work in military ethics. So I really know very little about the above. It's just an ethical reflection occasioned by a dream...

Tuesday, September 22, 2015

Change and presentism

This post is an illustration of how widely intuitions can differ. It is widely felt by presentists that presentism is needed for there to be "real change", that the B-theory is a "static" theory. But I have the intuition that presentism endangers real change. Real change requires real difference between the past states and present states, and real difference requires the reality of the differing states. But if there are no past states, there are no real differences between past and present states, and hence no change.

Of course, a presentist can say that although a past state is unreal, there can nonetheless be a real difference between it and a real present state, just as there can be a real difference between the world of Harry Potter and our world, even though the world of Harry Potter isn't real. In a sense of "real difference" that's true, I agree. But not in the relevant sense. Change is a relation between realities.

The presentist can also insist that my line of thought is simply a case of the grounding problem for presentism, and can be resolved in a similar way. Suppose a window has just changed from being whole to being broken. Then while the past unbroken state doesn't exist, there does exist a present state of the window having been whole. I am happy to grant this present state to the presentist, but it doesn't affect the argument. For the relevant difference isn't between the window having been whole and the window being broken. For if no one broke the window, there would still have been a difference between the state of the window having been whole and the window being broken. There is always a difference between a state of something having been so and a state of its being so, but this difference isn't the difference that constitutes change.

(Incredible as it may seem to the presentist, when I try to imagine the presentist's world, I imagine an evanescent instantaneous world that therefore doesn't exist long enough for any change to take place. I am well aware that this world includes states like it was the case that the window was whole, but given presentism, these states seem to me to be modal in nature, and akin to the state of it is the case in the Harry Potter universe that magic works, and hence are not appropriate to make the world non-evanescent.)

Probably the presentist's best bet is simply to deny that real difference in my sense is needed for change. All that's needed is that something wasn't so and now is so. But if something's having been not so and its being so doesn't imply a real difference, it's not change, I feel.

Of course, the presentist feels very similarly about the B-theorist's typical at-at theory of change (change is a matter of something's being one way at one time and another way at another time): she feels that what is described isn't really change.

And this, finally, gives us the real upshot of this post. There are interesting disagreements where one side's account of a phenomenon just doesn't seem to be a description of the relevant concept to the other side--it seems to be a change of topic. These disagreements are particularly difficult to make progress in. Compare how the compatibilist's account of freedom just doesn't seem to be a description of freedom to the libertarian.

I don't have a general theory on how to make progress past such disagreement. I do have one thing I do in such cases: I try to find as many things as I can connected with the concept in other areas of philosophy, like epistemology, philosophy of science, ethics, and natural theology. And then I see which account does better more generally.

Wednesday, July 29, 2015

Enemies and opponents

When Jesus said to love one's neighbor, he was asked who was one's neighbor. When he said to love one's enemy, he wasn't asked who was one's enemy. But while it may be unhelpful to work hard to identify people as our enemies, it is helpful to work hard to identify people who are not our enemies. After all, (a) it's hard to love those we take to be our enemies, and we shouldn't try to engage in this difficult task when it can be avoided simply by realizing that these people are not our enemies, and (b) love comes in a variety of forms, and the way one loves one's enemy differs from the way one loves a friendly or neutral person--and to love a friendly or neutral person as an enemy is to do them an injustice. So we should work to avoid incorrect identifications of enemies.

In particular, I think it's important to avoid identifying mere opponents as enemies. I am not quite sure how to define an opponent, but roughly x is y's opponent when x tries to prevent what she knows to be a goal (final or subsidiary) of y and, roughly, x is y's enemy when x tries to prevent y's flourishing. (And it's part of the concept of flourishing that it is a goal of that of which it is the flourishing.) I suppose that all enemies are opponents. But not all opponents are enemies. After all, there are multiple ways one might try to prevent what one knows to be a goal of y without intending to take away from y's flourishing.

An opponent can even be a friend. If I play chess against you, we are opponents: we each intend to keep the other from checkmating us while each knowing that the other intends to checkmate us. But that which constitutes the opposition itself can be a sign of friendship. We may play chess precisely because it is mutually enjoyable, and it is only mutually enjoyable (barring deceit) when each is trying to win. I think this is the ideal case with sports and other games: each is extending to the other the opportunity of engaging in this worthwhile mutual enterprise precisely in opposing the other. Playing a game should, thus, be a kind of bid for friendship. (I understand that sports and games sometimes don't work that way--that's sad.)

Of course not all cases of non-enmity opposition are ones where the opposition constitutes a bid for friendship. One can have opponents who are neutral with respect to one's good: they pursue a cause, see one as pursuing an incompatible one, and hence oppose one's pursuit. But hopefully one can presume that the opponent only disagrees about a subsidiary end. And if so, that's a basis for friendship.

So perhaps we can say: Love your enemies, but don't mistake mere opponents for enemies. Strive for friendship with mere opponents.

Tuesday, July 28, 2015

Optimism

I've been gradually realizing just how important it is to presume our ideological and political opponents to be motivated by pursuit of the good and true. Of course, in some cases the presumption is false, but likewise sometimes our co-partisans--and we ourselves--are motivated badly.

Here's a psychological advantage of making this presumption. If we lose out to our opponents (say, in the polis, in a department meeting, etc.), it's much less depressing when we see it as nonetheless a kind of victory for the true and the good--for we presume that the desire for the true and the good is what energized our opponents in their victory, what made them persevere, what made them win support.

It may seem not in keeping with a Christian view of this world as fallen to make this presumption. But at the same time, while this world is fallen, Christ's grace is widespread. And wherever people are moved by the true and the good, there is a likelihood that grace is at work. In fact, it is precisely the fact that the world is fallen that makes it likely that grace is at work where the pursuit of the true and the good energizes people.

None of this minimizes the importance of energetic disagreement when needed. If Fred and Sid disagree on which of two ropes to throw to the drowning man, and Sid with great energy carries the day and throws the rotten rope to the drowning man, although Fred can see it as a kind of victory for the good in that Sid was being driven by the good, nonetheless the drowning man is likely to drown. So the presumption that our opponents are motivated rightly is fully compatible with resisting them respectfully to the best of our ability. Indeed, the very fact that Sid is pursuing the good is a reason for Fred to resist Sid's mistaken choice of rope, so as to save Sid from an action that does not in fact achieve what Sid wants it to achieve.

Suppose it's granted that the presumption is helpful. But what justifies the presumption? Is it justified merely pragmatically? I don't think so. I think there is a general presumption that things are working rightly, a presumption that we should minimize the attribution of malfunction. (This general presumption may be what keeps us from scepticism, what makes it appropriate to trust in our senses and our fellows' testimony.) And it is a lesser defect to be wrong about the means than about the ends.

Wednesday, April 8, 2015

The equal weight view

Suppose I assign a credence p to some proposition and you assign a different credence q to it, even though we have the same evidence. We learn of each other's credences. What should we do? The Equal Weight View says that:

  1. I shouldn't give any extra weight to my own credence just because it's mine.
It is also a standard part of the EWV as typically discussed in the literature that:
  2. Each of us should revise the credence in the direction of the other's credence.
Thus if p>q, then I should revise my credence down and you should revise your credence up.

It's an odd fact that the name "Equal Weight View" only connects up with tenet (1). Further, the main intuition behind (1) is a thought that I shouldn't hold myself out as epistemically special, and that does not yield (2). What (1) yields is at most the claim that the method I should use for computing my final credence upon learning of the disagreement should be agnostic as to which of the two initial credences was mine and which was yours. But this is quite compatible with (2) being false. The symmetry condition (1) does nothing to force the final credence to be between the two credences. It could be higher than both credences, or it could be lower than both.

In fact, it's easy to come up with cases where this seems reasonable. A standard case in the literature is where different people calculate their share of the bill in a restaurant differently. Vary the case as follows. You and I are eating together, we agree on a 20% tip and an equal share, and we both see the bill clearly. I calculate my share to be $14.53 with credence p=0.96. You calculate your share to be $14.53 with credence q=0.94. We share our results and credences. Should I lower my confidence, say to 0.95? On the contrary, I should raise it! How unlikely it is, after all, that you should have come to the same conclusion as me if we both made a mistake! Thus we have (1) but not (2): we both revise upward.

There is a general pattern here. We have a proposition that has very low prior probability (in the splitting case the proposition that my share will be $14.53 surely has prior credence less than 0.01). We both get the same evidence, and on the basis of the evidence revise to a high credence. But neither of us is completely confident in the evaluation of the evidence. However, the fact that the other evaluated the evidence in a pretty similar direction overcomes the lack of confidence.
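
As a rough closed-form version of this pattern (my illustration, with idealizing assumptions not in the post): suppose there are N live candidate values for my share with a uniform prior over them, each of us independently gets the right value with probability equal to our reported credence, and a botched calculation lands uniformly on one of the other N − 1 values. Then the probability that a value we both arrived at is correct is

$$\frac{c_1 c_2}{c_1 c_2 + \frac{(1-c_1)(1-c_2)}{N-1}} \approx 0.999997 \qquad (c_1 = 0.96,\; c_2 = 0.94,\; N = 1000),$$

so both of us should indeed revise upward, and sharply.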

One might think that (2) is at least true in the case where the two credences are on opposite sides of 1/2. But even that may be wrong. Suppose that you and I are looking at the results of some scientific experiment and are calculating the value of some statistic v that is determined entirely by the data. You calculate v at 4.884764447, with credence 0.8, being moderately sure of yourself. But I am much less confident in my arithmetical abilities, and so I conclude that v is 4.884764447 with credence 0.4. We're now on opposite sides of 1/2. Nonetheless, I think your credence should go up: it would be too unlikely that my calculations would support the exact same value that yours did.

One might worry that in these cases, the calculations are unshared evidence, and hence we're not epistemic peers. If that's right, then the bill-splitting story standard in the literature is not a case of epistemic peers, either. And I think it's going to be hard to come up with a useful notion of epistemic peerhood that gives this sort of judgment.

I think what all this suggests is that we aren't going to find some general formula for pooling our information in cases of disagreement as some people in the literature have tried (e.g., here). Rather, to pool our information, we need a model of how you and I came to our conclusions, a model of the kinds of errors that we were liable to commit on this path, and then we need to use the model to evaluate how to revise our credences.

Monday, March 16, 2015

Internal time, external time, probability and disagreement

Suppose that Jim lives a normal human life from the year 2000 to the year 2100. Without looking at a clock, what probability should Jim attach to the hypothesis that an even number of minutes has elapsed from the year 2000? Surely, probability 1/2.

Sally, on the other hand, lives a somewhat odd human life from the year 2000 to the year 2066. During every even-numbered minute of her life, her mental and physical functioning is accelerated by a factor of two. She can normally notice this, because the world around her, including the second hands of clocks, seems to slow down by a factor of two. She has won many races by taking advantage of this. An even-numbered external minute subjectively takes two minutes. Suppose that Sally is now in a room where there is nothing in motion other than herself, so she can't tell whether this was a sped-up minute or not. What probability should Sally attach to the hypothesis that an even number of minutes has elapsed from the year 2000?

If we set our probabilities by objective time, then the answer is 1/2, as in Jim's case. But this seems mistaken. If we're going to assign probabilities in cases like this—and that's not clear to me—then I think we should assign 2/3. After all, subjectively speaking, 2/3 of Sally's life occurs during the even-numbered minutes.

There are a number of ways of defending the 2/3 judgment. One way would be to consider relativity theory. We could mimic the Jim-Sally situation by exploiting the twin paradox (granted, the accelerations over a period of a minute would be deadly, so we'd have to suppose that Sally has superpowers), and in that case surely the probabilities that Sally should assign should be looked at from Sally's reference frame.

Another way to defend the judgment would be to imagine a third person, Frank, who always lives twice as fast as normal, but during odd-numbered minutes he is frozen unconscious for half of each second. For Frank, an even-numbered minute has 60 seconds' worth of being conscious and moving, while an odd-numbered minute has 30 seconds' worth of it, and external reality stutters. If Frank is in a sensory deprivation chamber where he can't tell if external reality is stuttering, then it seems better for him to assign 2/3 to its being an even-numbered minute, since he's unconscious for half of each odd-numbered one. But Frank's case doesn't seem significantly different from Sally's. (Just imagine taking the limit as the unconscious/conscious intervals get shorter and shorter.)

A third way is to think about time travel. Suppose you're on what is subjectively a long trip in a time machine, a trip that's days long in internal time. And now you're asked if it's an even-numbered minute by your internal time (the time shown by your wristwatch, but not by the big clock on the time machine console, which shows external years that flick by in internal minutes). It doesn't matter how the time machine moves relative to external time. Maybe it accelerates during every even-numbered minute. Surely this doesn't matter. It's your internal time that matters.

Alright, that's enough arguing for this. So Sally should assign 2/3. But here's a funny thing. Jim and Sally then disagree on how likely it is that it's an even-numbered minute, even though it seems we can set up the case so they have the same relevant evidence as to what time it is. There is something paradoxical here.

A couple of responses come to mind:

  • They really have different evidence. In some way yet to be explained, their different prior life experiences are relevant evidence.
  • The thesis that there cannot be rational disagreement in the face of the same evidence is true when restricted to disagreement about objective matters. But what time it is now is not an objective matter. Thus, the A-theory of time is false.
  • There can be rational disagreement in the face of the same evidence.
  • There are no meaningful temporally self-locating probabilities.

Saturday, March 2, 2013

Infinity, probability and disagreement

Consider the following sequence of events:

  1. You roll a fair die and it rolls out of sight.
  2. An angel appears to you and informs you that you are one of a countable infinity of almost identical twins who independently rolled a fair die that rolled out of sight, and that similar angels are appearing to them all and telling them all the same thing. The twins all reason by the same principles and their past lives have been practically indistinguishable.
  3. The angel adds that infinitely many of the twins rolled six and infinitely many didn't.
  4. The angel then tells you that the angels have worked out a list of pairs of identifiers of you and your twins (you're not exactly alike), such that each twin who rolled six is paired with a twin who didn't roll six.
  5. The angel then informs you that each pair of paired twins will be transported into a room for themselves. And, poof!, it is so. You are sitting across from someone who looks very much like you, and you each know that you rolled six if and only if the other did not.
Let H be the event that you did not roll six. How does the probability of H evolve?

After step 1, presumably your probability of H is 5/6. But after step 5, it would be very odd if it was still 5/6. For if it is still 5/6 after step 5, then you and your twin know that exactly one of you rolled six, and each of you assigns 5/6 to the probability that it was the other person who rolled six. But you have the same evidence, and being almost identical twins, you have the same principles of judgment. So how could you disagree like this, each thinking the other was probably the one who rolled six?

Thus, it seems that after step 5, you should either assign 1/2 or assign no probability to the hypothesis that you didn't get six. And analogously for your twin.

But at which point does the change from 5/6 to 1/2-or-no-probability happen? Surely merely physically being in the same room with the person one was paired with shouldn't have made a difference once the list was prepared. So a change didn't happen in step 5.

And given 3, that such a list was prepared doesn't seem at all relevant. Infinitely many abstract pairings are possible given 3. So it doesn't seem that a change happened in step 4. (I am not sure about this supplementary argument: If it did happen after step 4, then you could imagine having preferences as to whether the angels should make such a list. For instance, suppose that you get a goodie if you rolled six. Then you should want the angels to make the list as it'll increase the probability of your having got six. But it's absurd that you increase your chances of getting the goodie through the list being made. A similar argument can be made about the preceding step: surely you have no reason to ask the angels to transport you! These supplementary arguments come from a similar argument Hud Hudson offered me in another infinite probability case.)

Maybe a change happened in step 3? But while you did gain genuine information in step 3, it was information that you already had almost certain knowledge of. By the law of large numbers, with probability 1, infinitely many of the rolls will be sixes and infinitely many won't. Simply learning something that has probability 1 shouldn't change the probability from 5/6 to 1/2-or-no-probability. Indeed, if it should make any difference, it should be an infinitesimal difference. If the change happens at step 3, Bayesian update is violated and diachronic Dutch books loom.

So it seems that the change had to happen all at once in step 2. But this has serious repercussions: it undercuts probabilistic reasoning if we live in a multiverse with infinitely many near-duplicates. In particular, it shows that any scientific theory that posits such a multiverse is self-defeating, since scientific theories have a probabilistic basis.

I think the main alternative to this conclusion is to think that your probability is still 5/6 after step 5. That could have interesting repercussions for the disagreement literature.

Fun variant: All of the twins are future and past selves of yours (whose memory will be wiped after the experiment is over).

I'm grateful to Hud Hudson for a discussion in the course of which I came to this kind of an example (and some details are his).

Friday, May 25, 2012

Sexual ethics and moral epistemology

Let A be the set of all the sexual activities that our society takes to be wrong. For instance, A will include rape, voyeurism, adultery, bestiality, incest, polygamy, sex involving animals, sex between adults and youth, etc. Let B be the set of all the sexual activities that our society takes to be permissible. For instance, B will include marital sex, contracepted marital sex, premarital sex, etc.

Now there is no plausible philosophical comprehensive unified explanatory theory of sexuality that rules all the actions in A wrong and all the actions in B permissible. There are comprehensive unified explanatory theories of sexuality that rule all the actions in A wrong but that also rule some of the actions in B wrong—the various Catholic theories do this—as well as ruling same-sex sexual activity wrong (which isn't in B at this point). There are also comprehensive unified explanatory theories of sexuality that rule all the actions in B permissible but that also rule a number of the actions in A permissible—for instance, theories that treat sex in a basically consequentialist way do this.

If this is right, then unless a completely new kind of theory can be found, it looks like we need to choose between three broad options:

  1. Take all the actions in A to be forbidden, and revise social opinion on a number of the actions in B.
  2. Take all the actions in B to be permissible, and revise social opinion on a number of the actions in A.
  3. Revise social opinion on some but not all of the actions in A and on some but not all of the actions in B.
Moreover, the best theories we have fall into (1) or (2) rather than (3). So given our current philosophical theories, our choice is probably between (1) and (2).

It is an interesting question how one would choose between (1) and (2) if one had to. I would, and do, choose (1) on the grounds that typically we can be more confident of prohibitions than of permissions, moral progress being more a matter of discovery of new prohibitions (e.g., against slavery, against duelling, against most cases of the death penalty) than of discovery of new permissions.

It is somewhat easier to overlook a prohibition that is there than to imagine a prohibition that isn't there. An action is permissible if and only if there are no decisive moral reasons against the action. Generally speaking, we are more likely to err by not noticing something that is there than by "noticing" something that isn't there: overlooking is more common than hallucination. If this is true in the moral sphere, then we are more likely to overlook a decisive moral reason against an action than to "notice" a decisive moral reason that isn't there. After all, some kinds of moral reason require significant training in virtue for us to come to be cognizant of them.

If this is right, then prima facie we should prefer (1) to (2): we should be more confident of the prohibition of the actions in A than of the permission of the actions in B.

It would be interesting to study empirically how people's opinions fall on the question whether to trust intuitions of impermissibility or to trust intuitions of permissibility. It may be domain-specific. Thus a stereotypical conservative might take intuitions of impermissibility to be more reliable than intuitions of permissibility when it comes to matters of sex, but when it comes to matters of private property may take intuitions of permissibility to be more reliable, and a liberal might have the opposite view. But I think one should in general tend to go with intuitions of impermissibility.

Monday, February 27, 2012

Consolidating evidence

Here's something that surprised me when I noticed it, though it probably shouldn't have surprised me. The following can happen: My evidence favors p. Your evidence disfavors p. I know you are rational and competent. After talking with you and consolidating our evidence, I rationally increase my credence in p.

Here's a case. Suppose we have a coin which is either biased 3:1 in favor of heads or biased 3:1 in favor of tails. We don't know which. I have observed a few coin tosses, and they included four tails and seven heads. My evidence supports the hypothesis that the coin is biased in favor of heads. You have observed a few coin tosses, and they were four tails and two heads. Your evidence supports the hypothesis that the coin is biased in favor of tails. Intuitively, I should lower my credence in the heads-bias hypothesis when I learn of your evidence.

But imagine further that the four tail tosses you observed are the same four tail tosses that I observed, but the two heads tosses you observed were not among the seven heads tosses I observed. Then consolidating our evidence, we get four tails and nine heads, which supports the heads-bias hypothesis.

This is humdrum: When we consolidate evidence, we need to watch out for double counting in either direction. The above case makes this striking, because when we eliminate double counting, we get confirmation in the opposite direction to what we would initially have expected.
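
To make the arithmetic explicit (this is my sketch, not the author's): since the coin is biased 3:1 one way or the other, each observed head multiplies the odds in favor of the heads-bias hypothesis by 3 and each observed tail divides them by 3, so a body of tosses has Bayes factor 3^(heads − tails).

```python
from fractions import Fraction

def heads_bias_odds(heads, tails, prior_odds=Fraction(1)):
    """Posterior odds for 'biased 3:1 toward heads' over 'biased 3:1 toward tails'.

    A head has likelihood ratio (3/4)/(1/4) = 3 and a tail 1/3, so the
    Bayes factor for the whole body of evidence is 3 ** (heads - tails).
    """
    return prior_odds * Fraction(3) ** (heads - tails)

mine = heads_bias_odds(7, 4)            # my evidence alone: odds 27:1 for heads-bias
yours = heads_bias_odds(2, 4)           # your evidence alone: odds 1:9 (i.e., 9:1 for tails-bias)
double_counted = heads_bias_odds(9, 8)  # naive pooling that counts the shared tails twice: odds 3:1
consolidated = heads_bias_odds(9, 4)    # correct pooling after removing the overlap: odds 243:1

for label, odds in [("mine", mine), ("yours", yours),
                    ("double-counted", double_counted), ("consolidated", consolidated)]:
    print(f"{label}: odds {odds}, probability {float(odds / (1 + odds)):.3f}")
```

Starting from even prior odds, my credence in the heads-bias hypothesis is about 0.96 on my evidence alone, drops to 0.75 on the naive double-counting combination, but rises to about 0.996 on the properly consolidated evidence.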

There is a very practical moral of the above story. It is important not only to remember one's credence in the propositions one believes and cares about, but also the evidence that gave rise to this credence. For if one does not remember this evidence, it will be difficult to avoid double counting (or subtler failures of independence).

By the way, I think it is helpful to think of the disagreement literature as well as discussions about the nature of arguments and other social epistemology stuff as interpersonal data consolidation problems. Getting clear on what we are aiming at should help. You have data, I have data, we all have data. What we are aiming for are methods (algorithms, virtues, etc.) that help us consolidate data across persons to get a better picture of reality than we are likely to have individually.

Moreover, I think that morally speaking it is very important when engaging in argumentation to remember what we are doing: the telos of arguing is to consolidate data across persons in order to get to truth and understanding. This telos is social, as befits social animals. It is not the telos of an argument that I convince you of the argument's conclusion. Rather it is that I convince you of the truth or show you how truth hangs together. If instead of convincing you of the argument's conclusion I convince you by modus tollens of the falsity of one of the premises, and in fact the conclusion is false and so is that premise, then the point of arguing has perhaps been fulfilled. And if in a case where the conclusion is false my argument convinces you of that conclusion, then the argument is a failure qua argument.

Saturday, November 26, 2011

Spinoza on truth and falsity

In Actuality, Possibility, and Worlds, I attribute to Spinoza the view that no belief is false (though I think I also emphasize that nothing rides on the accuracy of the historical claim). Rather, there are more or less confused beliefs, and in the extreme case there are empty words--words that do not signify any proposition.

I was led to the attribution by a focus on passages, especially in Part II of the Ethics and in the Treatise on the Emendation of the Intellect, that insist that every idea has an ideatum, that of which it is the idea, and hence corresponds to something real. The claim that every idea has an ideatum is central to Spinoza's work. It is a consequence of the central 2 Prop. 7 (the most fecund claim outside Part I), which says that the order and connection of ideas is the order and connection of things, and it is also a consequence of the correspondence of modes between attributes.

These passages stand in some tension, however, with other passages where Spinoza expressly talks of false ideas, which are basically ideas that are too confused to be adequate or to be knowledge (the details won't matter for this post).

I think it is easy to reconcile the two sets of passages when we recognize that Spinoza has an idiosyncratic sense of "true" and "false".  In Spinoza's sense, an idea is true if the individual having the idea is right to have it, and it is false if the individual having it is not right to have it (cf. Campbell's "action-based" view of truth, but of course Campbell will not go along with Spinoza's internalism), where the individual is right to have the idea provided that she knows the content, or knows it infallibly.  And Spinoza, rationalist that he is, has an internalist view of knowledge, where knowledge is a matter of clarity and distinctness and a grasp of the explaining cause of the known idea.

Hence, Spinoza uses the words "true" and "false" in an internalist sense.  But we do not.  "True" as used by us expresses a property for which correspondence to reality is sufficient, and "false" expresses a property incompatible with such correspondence.  Since every belief has an idea (in Spinoza's terminology) as its content, and according to Spinoza every idea corresponds to reality, namely to its ideatum, it follows that in our sense of the word, Spinoza holds that every belief is true and no belief is false.

The ordinary notion of truth includes ingredients such as that correspondence to reality is sufficient for truth and that truth is a good that our intellect aims at.  Spinoza insists on the second part of this notion, and finds it in tension with the first (cf. this argument).  But the first part is, in fact, the central one, which is why philosophers can agree on what truth is while disagreeing about whether belief is aimed at truth, knowledge, understanding or some other good.

So, we can say that in Spinoza's sense of "true", it is his view that some but not all beliefs are true.  And in our sense of "true", it is his view that all beliefs are true.  The sentence "Some beliefs are false" as used by Spinoza would express a proposition that Spinoza is committed to, while the sentence "Some beliefs are false" as used by us would express a proposition that Spinoza is committed to the denial of.

This move of distinguishing our sense of a seemingly ordinary word like "true" from that of a philosopher X is a risky exegetical move in general. Van Inwagen has argued libertarians should not hold that compatibilists have a different sense of the phrase "free will".  But I think there are times when the move is perfectly justified.  When the gap between how X uses some word and how we use it is too great, then we may simply have to concede that X uses the word in a different sense.  This is particularly appropriate in the case of Spinoza whose views are far from common sense, whose philosophical practice depends on giving definitions, and who expressly insists that many disagreements are merely apparent and are simply due to using the same words in diverse senses.  (Actually, I also wonder if van Inwagen's case of free will isn't also a case where the phrase is used in diverse senses.  Even if so, we should avoid making this move too often.)

Addendum: This reading is in some tension with 1 Axiom 6 which says that a true idea must agree with its ideatum. While strictly speaking, this sets out only a necessary condition for a true idea, and hence does not conflict with what I say above, it is not unusual for Spinoza to phrase biconditionals as mere conditionals. If we read 1 Axiom 6 as a biconditional, then maybe we should make a further distinction, that between the truth of an idea and truth of a believing. We take the truth of a believing to be the same as the truth of the idea (or proposition) that is the object of the believing. But Spinoza distinguishes, and takes more to be required for the truth of a believing. We then disambiguate various passages. The problem with this is that on Spinoza's view, the believing is identical with the idea. But nonetheless maybe we can distinguish between the idea qua believing and the idea qua idea?

Tuesday, November 1, 2011

When should you adopt an expert's opinion over your own?

Consider two different methods for what to do with the opinion of someone more expert than yourself, on a matter where both you and the expert have an opinion.

Adopt: When the expert's opinion differs from yours, adopt the expert's opinion.

Caution: When the expert's opinion differs from yours, suspend judgment.

To model the situation, we need to assign some epistemic utilities.  The following are reasonable given that the disvalue of a false opinion is significantly worse than the value of a true belief, at least by a factor of ~2.346 in the case of confidence level 0.95, according to the hate-love ratio inequality.
  • Utility of having a true opinion: +1
  • Utility of having a false opinion: approximately -2.346
  • Utility of suspending judgment: 0
Given these epistemic utilities, we can do some quick calculations.  Suppose for simplicity that you're perfect at identifying the expert as an expert (surprisingly, replacing this by a 0.95 confidence level makes almost no difference).  Suppose the expert's level of expertise is 0.95, i.e., the expert has probability 0.95 of getting the right answer.  Then it turns out that Adopt is the better method when your level of expertise is below 0.89, while Caution is the better method when your level of expertise is above 0.89.  Approximately speaking, Adopt is the better method when you're more than about twice as likely to be wrong as the expert; otherwise, Caution is the better method.

In general, Adopt is the better method when your level of expertise is less than e/(D-e(D-1)), where e is the expert's level of expertise and D is the disutility of having a false opinion (which should be at least 2.346 for opinions at confidence level 0.95).  If your level of expertise is higher than that, Caution is the better method.
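
Here is a sketch of where that formula comes from and how to check the numbers (my reconstruction of the model, since the original calculations were done elsewhere): treat the question as binary, suppose you and the expert are right independently with probabilities y and e, and compare the expected utility of adopting the expert's opinion with the zero utility of suspending judgment, conditional on the two of you disagreeing.

```python
def p_expert_right_given_disagreement(y, e):
    """Binary question, independent correctness: chance the expert is the one who is right."""
    return e * (1 - y) / (e * (1 - y) + y * (1 - e))

def adopt_beats_caution(y, e, d):
    """Adopt is better iff the expected utility of holding the expert's opinion,
    p*1 + (1-p)*(-d), exceeds 0, the utility of suspending judgment."""
    p = p_expert_right_given_disagreement(y, e)
    return p - (1 - p) * d > 0

def threshold(e, d):
    """Your level of expertise below which Adopt wins: e / (D - e(D - 1))."""
    return e / (d - e * (d - 1))

print(threshold(0.95, 2.346))    # about 0.89, as above
print(threshold(0.95, 1.0))      # with D = 1 the threshold is just e = 0.95
print(adopt_beats_caution(0.85, 0.95, 2.346), adopt_beats_caution(0.92, 0.95, 2.346))  # True False
```

Setting the expected utility of Adopt to zero in this model and solving for your level of expertise y recovers the e/(D − e(D−1)) threshold, and with D = 1 it reduces to the "adopt whenever you're less expert than the expert" rule mentioned below.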

Here is a graph (from Wolfram Alpha) of the level of expertise you need to have (y-axis), versus the expert's level of expertise (x-axis), in order for adopting Caution rather than Adopt to be epistemic-decision-theory rational, where D=2.346.

Here is a further interesting result. If you set the utility of a false opinion to -1, which makes things more symmetric but leads to an improper scoring rule (with undesirable results like here), then it turns out that Adopt is better than Caution whenever your level of expertise is lower than the expert's. But for any utility of false opinion that's smaller than -1, it will be better to adopt Caution when the gap in level of expertise is sufficiently small.
If you want to play with this stuff, I have a Derive worksheet with this. But I suspect that there aren't many Derive users any more.