Showing posts with label risk.

Thursday, March 31, 2022

Pascal's Wager for humans at death's door (i.e., all of us)

Much of the contemporary analytic discussion of Pascal’s Wager has focused on technical questions about how to express Pascal’s Wager formally in a decision-theoretic framework and what to do with it when that is done. And that’s interesting and important stuff. But a remark one of my undergrads made today has made me think about the Wager more existentially (and hence in a way closer to Pascal, I guess). Suppose our worry about the Wager is that we’re giving up the certainty of a comfortable future secular life for a very unlikely future supernatural happiness, so that our pattern of risk averseness makes us reject the Wager. My student noted that in this case things will look different if we reflect on the fact that we are all facing the certainty of death. We are all doomed to face that hideous evil.

Let me expand on this thought. Suppose that I am certain to die in an hour. I can spend that hour repenting of my grave sins and going to Mass or I can play video games. Let’s suppose that the chance of Christianity being right is pretty small. But I am facing death. Things are desperate. If I don’t repent, I am pretty much guaranteed to lose my comfortable existence forever in an hour, whether by losing my existence forever if there is no God or by losing my comfort forever if Christianity is right. There is one desperate hope, and the cost of that is adding an hour’s loss of ultimately unimportant pleasures to the infinite loss I am already facing. It sure seems rational to go for it.

Now for most of us, death is several decades away. But what’s the difference between an hour and several decades in the face of eternity?

I think there are two existential ways of thinking that are behind this line of thought. First, that life is very short and death is at hand. Second, given our yearning for eternity, a life without eternal happiness is of little value, and so more or less earthly pleasure is of but small significance.

Not everyone thinks in these ways. But I think we should. We are all facing the hideous danger of eternally losing our happiness—if Christianity is right, because of hell, and if naturalism is right, because death is the end. That danger is at hand: we are all about to die in the blink of an eye. Desperate times call for desperate measures. So we should follow Pascal’s advice: pray, live the Christian life, etc.

The above may not compel if the probability of Christianity is too small. But I don’t think a reasonable person who examines the evidence will think it’s that small.

Friday, May 28, 2021

Behaviors that look like risk aversion but aren't

Technically, risk aversion is the phenomenon of valuing risky propositions at a value lower than their expected value. But I think in ordinary use of the phrase “risk averse”, we mean something different.

I consider myself a pretty risk-averse individual. Not infrequently, when I consider the possibility of myself or a child engaging in some activity, I spend a significant amount of time looking up the data on the risks of that activity, and comparing those to risks of other activities. And I spend time worrying about bad things that might happen. Thus, even though deaths from brain-eating amoebae are very rare (one per 1.2 years in Texas), if I were to swim with head submersion in a Texas lake or river, I know I would worry for the next week and a half (especially if I had anything approximating the symptoms of an amoeba infection, say a crick in the neck) in a way that would make the swim not have been worth it.

But while both my risk-investigation and risk-worrying make me “risk averse” in the ordinary sense, neither makes me clearly risk averse in the technical sense.

For risk-investigation behavior to count as risk averse in the technical sense, it would have to be that the expected value of the results of the investigation is lower than the value of the time I am willing to invest in the investigation. Whether this is true is difficult to say. Sometimes at least the investigation concerns an activity that, if engaged in, would likely be engaged in repeatedly. In such a case, it might well be worthwhile to devote a fair amount of time to investigating the risk-profile of the activity. If my child were to play soccer, they would likely be playing soccer for multiple years, each time undertaking the concussion risks from heading the ball (I would not allow a child of mine to play soccer unless I knew they were willing to consistently stand up to their coach and refuse to head the ball). The cost of spending an hour or two looking at sports-medicine research does not seem to be excessive as compared to the expected value of making a less-informed versus a more-informed decision. Moreover, the investigation itself is often interesting—I learn things that are interesting to know that offset much if not all the cost of the investigation. Finally, if the investigation leads to the conclusion that the risks of an activity are low, my future worries will be less.

Risk-worrying, on the other hand, does not even prima facie classify one as risk averse in the technical sense. Positive and negative emotional outcomes of a decision are simply a part of the utilities. Suppose that I am willing to pay $10 in order to avoid a situation where I have a 0.5 chance of losing $1000 and a 0.5 chance of gaining $1000, because if I don’t pay, I will worry that I will lose $1000. In that case, paying the $10 has an expected disvalue of $10, while not paying has an expected disvalue of $0 plus the disvalue of my negative emotional state. That emotional state might well be worth paying $10 to avoid, but if so, there is nothing risk averse in the technical sense about this decision.
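
Here is a minimal sketch of that bookkeeping (the dollar-equivalent assigned to the worry is a made-up assumption): once worry is entered as an ordinary disutility, a perfectly risk-neutral expected-value comparison can already favor paying the $10.

```python
# Minimal sketch: worry counted as an ordinary disutility, with no risk aversion in the
# technical sense. The worry_cost figure is a made-up dollar-equivalent for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

worry_cost = 15  # assumed dollar-equivalent disvalue of worrying about the possible loss

pay_to_avoid = expected_value([(1.0, -10)])                 # certain $10 fee, no worry
keep_gamble = expected_value([(0.5, -1000 - worry_cost),    # lose $1000, having worried
                              (0.5, 1000 - worry_cost)])    # gain $1000, having worried

print(pay_to_avoid)  # -10.0
print(keep_gamble)   # -15.0: worse than paying, on a plain expected-value comparison
```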

So, some of the behaviors that intuitively would classify one as risk averse do not in fact show one to be risk averse in the technical sense. It would be interesting to see if there is a correlation between those behaviors, however, and risk aversion in the technical sense. There might be.

A related interesting empirical question is how to tell technically risk-averse behavior apart from simply taking worry-like states into account in one’s expected utility calculations. I think it can be done. For instance, if my main reason for avoiding risks is the disvalue of worrying, then I will be less willing to take risks that are resolved in the distant future than to take risks that are resolved in the near future. For with the distant risks, I have more time to worry, while the near risks will be resolved quickly, so the total amount of worry should be rather less. Interestingly, most people aren’t like that: they are more risk-avoidant in the case of immediately resolving risks than in the case of risks with temporally distant resolution. Hence, their risk avoidance is not based on simply rationally weighing the disvalue of worrying.

Wednesday, July 15, 2020

Catastrophic decisions

Kirk has come to a planet that is home to the only two intelligent species in the universe, the Oligons and the Pollakons. There are a million Oligons and a trillion (i.e., million million) Pollakons. They are technologically unsophisticated, live equally happy lives on the same planet, but have no interaction with each other, and the universal translator is currently broken so Kirk can’t communicate with them either. A giant planetoid is about to graze the planet in a way that is certain to wipe out the Pollakons but leave the Oligons, given their different ecological niche, largely unaffected. Kirk can try to redirect the planetoid with his tractor beam. Spock’s accurate calculations give the following probabilities:

  • 1 in 1000 chance that the planetoid will now miss the planet and the Oligons and Pollakons will continue to live their happy lives;

  • 999 in 1000 chance that the planetoid will wipe out both the Oligons and the Pollakons.

If Kirk doesn’t redirect, expected utility is 10^6 happy lives (the Oligons). If Kirk does redirect, expected utility is (1/1000)(10^12 + 10^6) = 10^9 + 10^3 happy lives. So, expected utility clearly favors redirecting.
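
A quick arithmetic check (treating one happy life as one unit of utility, as the numbers above implicitly do):

```python
# Quick arithmetic check of the two options (one happy life = one unit of utility).
oligons = 10**6
pollakons = 10**12

eu_no_redirect = oligons                          # the Oligons survive for certain
eu_redirect = (1 / 1000) * (oligons + pollakons)  # everyone survives with probability 1/1000

print(f"{eu_no_redirect:,}")   # 1,000,000
print(f"{eu_redirect:,.0f}")   # 1,000,001,000 = 10^9 + 10^3
```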

But redirecting just seems wrong. Kirk is nearly certain—99.9%—that redirecting will not help the Pollakons but will wipe out the Oligons.

Perhaps the reason intuition seems to favor not redirecting is that we have a moral bias in favor of non-interference. So let’s turn the story around. Kirk sees the planetoid coming towards the planet. Spock tells him that it has a 1/1000 chance that nothing bad will happen, and a 999/1000 chance that it will wipe out all life on the planet. But Spock also tells him that he can beam the Oligons—but not the Pollakons, who are made of a type of matter incapable of beaming—to the Enterprise. Spock, however, also tells Kirk that beaming the Oligons on board will require the Enterprise to come closer to the planet, which will gravitationally affect the planetoid’s path in such a way that the 1/1000 chance of nothing bad happening will disappear, and the Pollakons will now be certain, and not merely 999/1000 likely, to die.

Things are indeed a bit less clear to me now. I am inclined to think Kirk should rescue the Oligons (this may require Double Effect), but I am worried that I am irrationally neglecting small probabilities. Still, I am inclined to think Kirk should rescue. If that intuition is correct, then even in other-concerning decisions, and even when we have no relevant deontological worries, we should not go with expected utilities.

But now suppose that Kirk over his career will visit a million such planets. Then a policy of non-redirection in the original scenario or of rescue in the modified scenario would be disastrous by the Law of Large Numbers: those 1/1000 events would happen a number of times, and many, many lives would be lost. If we’re talking about long-term policies, then, it seems that Kirk should have a policy of going with expected utilities (barring deontological concerns). But for single-shot decisions, I think it’s different.
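
Here is a small simulation sketch of that long-run point (the million-planet count is from the paragraph above; the random seed and the treatment of every planet as an exact copy of the original scenario are assumptions for illustration):

```python
# Sketch: compare the two policies across a million planets; the Law of Large Numbers
# makes the expected-utility policy win decisively in the aggregate.
import random

random.seed(0)
planets = 1_000_000
oligons, pollakons = 10**6, 10**12

lives_no_redirect = 0
lives_redirect = 0
for _ in range(planets):
    lives_no_redirect += oligons          # never redirect: Oligons live, Pollakons die
    if random.random() < 1 / 1000:        # redirect: succeeds about once per 1000 planets
        lives_redirect += oligons + pollakons

print(f"{lives_no_redirect:,}")   # exactly 10^12
print(f"{lives_redirect:,}")      # roughly 10^15: vastly more lives saved overall
```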

This line of thought suggests two things to me:

  • maximization of expected utilities in ordinary circumstances has something to do with limit laws like the Law of Large Numbers, and

  • we need a moral theory on which we can morally bind ourselves to a policy, in a way that lets the policy override genuine moral concerns that would be decisive absent the policy (cf. this post on promises).

Friday, March 1, 2019

Between subjective and objective obligation

I fear that a correct account of the moral life will require both objective and subjective obligations. That’s not too bad. But I’m also afraid that there may be a whole range of hybrid things that we will need to take into account.

Let’s start with clear examples of objective and subjective obligations. If Bob promised Alice to give her $10 but misremembers the promise and instead thinks he promised never to give her any money, then:

  1. Bob is objectively required to give Alice $10.

  2. Bob is subjectively required not to give Alice any money.

These cases come from a mistake about particular fact. There are also cases arising from mistakes about general facts. Helmut is a soldier in the German army in 1944 who knows the war is unjust but mistakenly believes that because he is a soldier, he is morally required to kill enemy combatants. Then:

  3. Helmut is objectively required to refrain from shooting Allied combatants.

  4. Helmut is subjectively required to kill Allied combatants.

But there are interesting cases of mistakes elsewhere in the reasoning that generate curious cases that aren’t neatly classified in the objective/subjective schema.

Consider moral principles about what one should subjectively do in cases of moral risk. For instance, suppose that Carl and his young daughter are stuck on a desert island for the next three months. The island is full of chickens. Carl believes it is 25% likely that chickens have the same rights as humans, and he needs to feed his daughter. His daughter has a mild allergy to the only other protein source on the island: her eyes will sting and her nose run for the next three months if she doesn’t live on chicken. Carl thus thinks that if chickens have the same rights as humans, he is forbidden from feeding chicken to his daughter; but if they don’t, then he is obligated to feed chicken to her.

Carl could now accept one of these two moral risk principles (obviously, these will be derivative from more general principles):

  5. An action that has a 75% probability of being required, and a 25% chance of being forbidden, should always be done.

  6. An action that has a 25% probability of being forbidden with a moral weight on par with the prohibition on multiple homicides and a 75% probability of being required with a moral weight on par with that of preventing one’s child’s mild allergic symptoms for three months should never be done.

Suppose that in fact chickens have very little in the way of rights. Then, probably:

  7. Carl is objectively required to feed chicken to his daughter.

Suppose further that Carl’s evidence leads him to be sure that (5) is true, and hence he concludes that he is required to feed chicken to his daughter. Then:

  8. Carl is subjectively required to feed chicken to his daughter.

This is a subjective requirement: it comes from what Carl thinks about the probabilities of rights, moral principles about what to do in cases of risk, etc. It is independent of the objective obligation in (7), though in this example it agrees with it.

But suppose, as is very plausible, that (5) is false, and that (6) is the right moral principle here. (To see the point, suppose that he sees a large mammal in the woods that would suffice to feed his daughter for three months. If the chance that that mammal is a human being is 25%, that’s too high a risk to take.) Then Carl’s reasoning is mistaken. Instead, given his uncertainty:

  9. Carl is required to refrain from killing chickens.

But what kind of an obligation is (9)? Both (8) and (9) are independent of the objective facts about the rights of chickens and depend on Carl’s beliefs, so it sounds like (9) is subjective like (8). But (8) has some additional subjectivity in it: (8) is based on Carl’s mistaken belief about what his obligations are in cases of moral risk, while (9) is based on what Carl’s obligations (but of what sort?) “really are” in those cases.

It seems that (9) is some sort of a hybrid objective-subjective obligation.

And the kinds of hybrid obligations can be multiplied. For we could ask about what we should do when we are not sure which principle of deciding in circumstances of moral risk we should adopt. And we could be right or we could be wrong about that.

We could try to deny (9), and say that all we have are (7) and (8). But consider this familiar line of reasoning: Both Bob and Helmut are mistaken about their obligations; they are not mistaken about their subjective obligations; so, there must be some other kinds of obligations they are mistaken about, namely objective ones. Similarly, Carl is mistaken about something. He isn’t mistaken about his subjective obligation to feed chicken. Moreover, his mistake does not rest in a deviation between subjective and objective obligation, as in Bob’s and Helmut’s case, because in fact objectively Carl should feed chicken to his daughter, as in fact (I assume for the sake of the argument) chickens have no rights. So just as we needed to suppose an objective obligation that Bob and Helmut got wrong, we need a hybrid objective-subjective one that Carl got wrong.

Here’s another way to see the problem. Bob thinks he is objectively obligated to give no money to Alice and Helmut thinks he is objectively obligated to kill enemy soldiers. But when Carl applies (5), what does he come to think? He doesn’t come to think that he is objectively required to feed chicken to his daughter. He already thought that this was 75% likely, and (5) does not affect that judgment at all. It seems that just as Bob and Helmut have a belief about something other than mere subjective obligation, Carl does as well, but in his case that’s not objective obligation. So it seems Carl has to be judging, and doing so incorrectly, about some sort of a hybrid obligation.

This makes me really, really want an account of obligation that doesn’t involve two different kinds. But I don’t know a really good one.

Tuesday, May 16, 2017

Pascal's Wager and the bird-in-the-hand principle

My thinking about the St Petersburg Paradox has forced me to reject this Archimedean axiom (not the one in the famous representation theorem):

  1. For any finite utility U and non-zero probability ϵ > 0, there is a finite utility V such that a gamble that offers a probability ϵ of getting V is always better than a certainty of U.
Roughly speaking, one must reject (1) on pain of being subject to a two-player Dutch Book. But rejecting (1) is equivalent to affirming:
  2. There is a finite utility U and a non-zero probability ϵ > 0, such that no gamble that offers a probability ϵ of getting some finite benefit is better than certainty of U.
With some plausible additional assumptions (namely, transitivity, and that the same non-zero probability of a greater good is better than a non-zero probability of a lesser one), we get this bird-in-the-hand principle:
  3. There is a finite utility U and a non-zero probability ϵ > 0, such that for all finite utilities V, the certainty of U is better than a probability ϵ of V.
Now, Pascal's Wager, as it is frequently presented, says that:
  4. Any finite price is worth paying for any non-zero probability of any infinite payoff.
By itself, this doesn't directly violate the bird-in-the-hand principle, since in (3), I said that V was finite. But (4) is implausible given (3). Consider, for instance, this argument. By (3), there is a finite utility U and a non-zero probability ϵ > 0 such that U is better than an ϵ chance at N days of bliss for every finite N. A plausible limiting case argument suggests that then U is at least as good as an ϵ chance at an infinite number of days of bliss, contrary to (4)--moreover, then U+1 will be better than an ϵ chance at an infinite number of days of bliss. Furthermore, in light of the fact that standard representation theorem approaches to maximizing expected utility don't apply to infinite payoffs, the natural way to argue for (4) is to work with large finite payoffs and apply domination (Pascal hints at that: he gives the example of a gamble where you can gain "three lifetimes" and says that eternal life is better)--but along the way one will violate the bird-in-the-hand principle.

This doesn't, however, destroy Pascal's Wager. But it does render the situation more messy. If the probability ϵ of the truth of Christianity is too small relative to the utility U lost by becoming a Christian, then the bird-in-the-hand principle will prohibit the Pascalian gamble. But maybe one can argue that little if anything is lost by becoming a Christian even if Christianity is false--the Christian life has great internal rewards--and the evidence for Christianity makes the probability of the truth of Christianity not be so small that the bird-in-the-hand principle would apply. However, people's judgments as to what ϵ and U satisfy (2) will differ.

Pleasantly, too, the bird-in-the-hand principle gives an out from Pascal's Mugger.

Friday, May 12, 2017

More on St Petersburg

I’ve been thinking about what assumptions generate the St Petersburg paradox. As stated, the paradox depends on the assumption that we should maximize expected utility, an assumption that will be rejected by those who think risk aversion is rational.

But one can run the St Petersburg paradox without expected utility maximization, and in a context compatible with risk aversion. Suppose finite utilities can be represented by finite real numbers. Assume also:

  1. Domination: If a betting portfolio B is guaranteed to produce at least as good an outcome as A no matter what, then B is at least as good as A.

  2. Archimedeanism: For any finite utility U and non-zero probability ϵ > 0, there is a finite utility V such that a gamble that offers a probability ϵ of getting V is always better than a certainty of U.

  3. Transitivity: If C is better than B and B is at least as good as A, then C is better than A.

(Note: For theistic reasons, one might worry about Construction when the V_i are very negative, but we can restrict Construction to positive finite utilities if we add the assumption in Archimedeanism that V can always be taken to be positive.)

For, given these assumptions, one can generate a gambling scenario that has only finite utilities but that is better than the certainty of any finite utility. Proceed as follows. For each positive integer n, let V_n be any finite utility such that probability 1/2^n of V_n is better than certainty of n units of utility (this uses Archimedeanism; the apparent use of the Axiom of Choice can be eliminated by using the other axioms, I think) and V_n ≥ V_{n−1} if n > 1. Toss a fair coin until you get heads. Let your payoff be V_n if it took n tosses to get to heads.

Fix any finite utility U. Let n be a positive integer such that U < n. Then the gambling scenario offers a probability of 1/2^n of getting at least V_n, so by Domination, Transitivity and the choice of V_n, it is better than U.

And the paradoxes in this post apply in this case, too.

If we have expected utility maximization, we can take V_n = 2^n and get the classic St Petersburg paradox.
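
For concreteness, here is a minimal simulation sketch of the classic case (the sample size and seed are arbitrary choices): rare long runs of tails deliver enormous payoffs, so the running sample mean tends to keep creeping upward instead of settling down, which is the numerical face of a gamble that beats the certainty of any fixed finite utility.

```python
# Sketch: simulate the classic St Petersburg gamble (payoff 2^n when the first heads
# comes on toss n). The running sample mean never settles down; it tends to creep
# upward roughly like log2 of the sample size.
import random

random.seed(1)

def play_once():
    n = 1
    while random.random() < 0.5:   # tails with probability 1/2: keep tossing
        n += 1
    return 2 ** n                  # payoff V_n = 2^n

total = 0.0
for i in range(1, 1_000_001):
    total += play_once()
    if i in (10**3, 10**4, 10**5, 10**6):
        print(i, total / i)        # the running mean typically keeps growing with i
```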

Given the plausibility of Domination and Transitivity, and the paradoxes here, it looks like the thing to reject is Archimedeanism. And that rejection requires holding that there is a probability ϵ so small and finite utility U so large that no finite benefit with that probability can outweigh U.

Friday, October 9, 2015

Prudential rationality

Prudential rationality is about what an agent should do in the light of what is good or bad for the agent. Prudential or self-interested rationality is a major philosophical topic and considered fairly philosophically fundamental. Why? There are many (infinitely many) other categories of goods and bads, and for each category it makes sense to ask what one should do in the light of that category. For instance, family rationality is concerned with what an agent should do in the light of what is good or bad for people in the agent's family; leftward rationality is concerned with the good and bad for the people located to the agent's left; nearest-neighbor rationality with the good or bad for the person other than the agent whose center of mass is closest to the agent's center of mass; green-eye rationality with the good or bad for green-eyed people; and descendant rationality with the good or bad for one's descendants. Why should prudential rationality get singled out as a topic?

It's true that in terms of agent-relative categories, the agent is particularly natural. But the agent's descendants also form a quite natural agent-relative category.

This question reminds me of this thought (inspired by Nancy Cartwright's work). Physicists study things that don't exist. They study the motion of objects in isolated gravitational systems, in isolated quantum systems, and so on. But there are no isolated systems, and any system includes a number of other forces. It is, however, sometimes useful to study the influences that particular forces would have on their own.

However, in the end what we want to predict in physics is how real things move. And they move in the light of all the forces. And likewise in action theory we want to figure out how real people should act. And they should act in the light of all the goods and bads. We get useful insight into how and why real things move by studying how they would move if they were isolated or if only one force was relevant. We likewise get useful insight into how and why real people should act by studying what actions would be appropriate if they were isolated or if only one set of considerations were relevant. As a result we have people who study prudential rationality and people who study epistemic rationality.

It is nonetheless crucial not to forget that the study of how one should act in the light of a subset of the goods and bads is not a study of how one should act, but only a study of how one would need to act if that subset were all that's relevant, just as the study of gravitational systems is not a study of how things move, but only of how things would move if gravity were all that's relevant.

That said, I am not sure prudential rationality is actually that useful to study. Its main value is that it restricts the goods and bads to one person, thereby avoiding the difficult problem of balancing goods and bads between persons (and maybe even non-persons). But that value can be had by studying not prudential or self-interested rationality, but one-recipient rationality, where one studies how one should act in the light of the goods and bads to a single recipient, whether that recipient is or is not the agent.

It might seem innocent to make the simplifying assumption that the single recipient is the agent. But I think that doing this has a tendency to hide important questions that become clearer when we do not make this assumption. For instance, when we study risk-averseness in this special case, we lose sight of the crucially important question of whose risk-averseness is relevant: the agent's or the recipient's? Presumably both, but they need to interact in a subtle and important way. To study risk-averseness in the special case where the recipient is the agent risks losing sight of something crucial in the phenomenon, just as one loses a lot of structure when instead of studying a mathematical function of two variables, say, f(x,y)=sin x cos y, one studies merely how that function behaves in the special case where the variables are equal. Although one does simplify by not studying the interaction between the agent's and the recipient's risk-averseness, one does so at the cost of confusing the two and not knowing which aspect of one's results is due to the risk-averseness of the person qua agent and which part is due to the risk-averseness of the person qua recipient.

Similarly, when one is interested--as decision theorists centrally are--in decision-making under conditions of uncertainty, it is important to distinguish between the relevance of the uncertainty of the person qua agent and the uncertainty of the person qua recipient. When we do that, we might discover a structure that was hidden in the special case where the agent and recipient are the same. For instance, we may discover that with respect to means the agent's uncertainty is much more important than the recipient's, but with respect to ends the recipient's uncertainty is very important.

To go back to the gravitational analogy, it's very useful to consider the gravitational interaction between particles x and y. But we lose an enormous amount of structure when we restrict our attention to the case where x=y. We would do better to make the simplifying assumption that we're considering two different particles, and then think of the one-particle case as a limiting case. Likewise for rationality. While we do need to study simplified cases, we need to choose the cases in a way that does not lose too much structure.

Of course, if we have an Aristotelian theory on which all one's actions are fundamentally aimed at one's own good, then what I say above will be unhelpful. For in that case, prudential rationality does capture the central structure of rationality. But such theories are simply false.

Wednesday, November 13, 2013

Open theism and risk

We have many well-justified beliefs about how people will freely act. For instance, I have a well-justified belief that at most a minority of my readers will eat a whole unsweetened lemon today. Yet most of you can. (And maybe one or two of you will.) Notice that a fair amount of our historical knowledge is based on closely analogous judgments. When we engage in historical analysis, we rely on knowledge of how people freely act individually or en masse. We know that various historical events occurred because of what we know about how people who report historical events behave--given what we know about human character, we know the kinds of things they are likely to tell the truth about, the kinds of things they are likely to lie about and the kinds of things they are likely to be mistaken about. But it would be strange to claim knowledge about past human behavior and disclaim knowledge about future human behavior when exactly similar probabilistic regularities give us both.

But if open theism is true, then God cannot form such beliefs about the future. For open theists agree that God is essentially infallible in his beliefs: it is impossible for God to hold a false belief. But if God were in a habit of forming beliefs about how people will in fact act, then in at least some possible worlds, and probably in this one as well, God would have false beliefs—it may be 99.99% certain that I won't eat a whole unsweetened lemon today, but that just means that there is a 0.01% chance that I will.

So the open theist, in order to hold on to divine infallibility, must say that God keeps from having beliefs on evidence that does not guarantee truth. Why would God keep himself from having such beliefs, given that they seem so reasonable? Presumably to avoid the risk of being wrong about something.

But now notice that open theism has God take really great risks. According to open theism, in creating the world, God took the risk of all sorts of horrendous evils. The open theist God is not at all averse to taking great risks about creation. So why would he be so averse to taking risks with his beliefs?

The open theists who think that there are no facts about the future have an answer here. They will say that my belief that at most a minority of my readers will eat a whole unsweetened lemon today is certainly not true, since the fact alleged does not obtain, and hence that I shouldn't have this belief. Instead, I should have some probabilistic belief, like that present conditions have a strong tendency to result in the nonconsumption of these lemons. My argument here is not addressed to these revisionists.

Monday, November 11, 2013

Risk compensation

Rational decision theory predicts that decreasing the riskiness of an activity will tend to increase the prevalence and degree of the risky activity. As paragliding is made safer, one expects more people to be engaged in paragliding and those who are engaged in it to do it more intensely. But of course increasing the prevalence and degree of the behavior will tend to increase the prevalence of the very negative outcome whose risk one was decreasing. This is the phenomenon of risk compensation: decreasing the risk of a negative outcome of an activity is to some degree—maybe sometimes completely—compensated for by an increase in the prevalence and degree of engagement in the risky activity. For instance, taxi drivers who have antilock brakes tend to follow the vehicle in front of them more closely.

Suppose that in some case the risk compensation to some safety measure is complete: i.e., the prevalence of the relevant negative outcome (say, crashes or fatalities) is unchanged, due to the compensating increase in the prevalence and degree of the risky behavior. One might think that at this point the safety measure was pointless.

Whether this conclusion is correct depends, however, on what values the risky behavior itself has when one brackets the risk in question. If the risky behavior has positive value when one brackets the risk, the safety measure does in fact achieve something good: an increase in prevalence and degree of valuable but risky behavior with no increase in negative outcome. Paragliding is (I assume) a pleasant way to enjoy the beauty of the earth and to stretch the limits of human ability. An increase in the prevalence of paragliding without an increase of negative outcomes is all to the good.

When the behavior is completely neutral, however, the safety measure is simply a waste given complete risk compensation.

Finally, if the risky behavior has negative value even when one brackets the particular risky outcome, then in the case where the risk compensation is complete, the safety measure is counterproductive. It does not decrease the negative outcome it is aimed at, but by increasing the prevalence of an otherwise unfortunate activity it on balance has a negative outcome. For instance, suppose bullfighting is an instance of immoral cruelty to animals. Then apart from the risks to the bullfighter, the activity has negative value: it harms the animal and damages the soul of the person. If a safety measure for prevention of goring then were compensated for by an increase in prevalence, the safety measure would have on balance a negative outcome: there would be no decrease in gorings but there would be an increase in immoral and harmful activity.

Moreover, in cases where the risky behavior has independently negative value, a safety measure can have on balance negative effect even when the risk compensation is quite modest. Suppose that (I am making up the numbers) gorings occur in 10% of bullfights and cruelty to bulls in 80%. Suppose, further, that cruelty to bulls is at least as bad as goring (since it not only harms the bull but more importantly it seriously damages the soul of the cruel person). Then a safety measure that decreases the probability of goring by a half but results in a modest 10% increase in the prevalence of bullfighting will have on balance negative effect. For suppose that previously there were 1000 bullfights, and hence 100 gorings and 800 instances of cruelty. Now there will be 1100 bullfights, and hence (at the new rate) 55 gorings and 880 instances of cruelty. We have prevented 45 gorings at the cost of 80 instances of cruelty, and that is not worth it.
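
A quick check of the made-up numbers (all figures are the hypothetical ones from the paragraph above):

```python
# Quick check of the made-up bullfighting numbers.
fights_before, fights_after = 1000, 1100             # 10% increase in prevalence
goring_rate_before, goring_rate_after = 0.10, 0.05   # goring risk cut in half
cruelty_rate = 0.80                                  # unchanged by the safety measure

gorings_prevented = fights_before * goring_rate_before - fights_after * goring_rate_after
cruelty_added = (fights_after - fights_before) * cruelty_rate

print(gorings_prevented)  # 45.0 gorings prevented
print(cruelty_added)      # 80.0 extra instances of cruelty
```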

Much of the public discussion of risk compensation and safety measures centers on sex, and particularly premarital sex. We should typically expect some behavioral change in the direction of risk compensation given a safety measure. If one thinks premarital sex to be itself typically valuable, then even given total risk compensation one will think the safety measure to be worthwhile. If one thinks premarital sex to be value-neutral, then as long as the risk compensation is incomplete (i.e., the decrease in the risks due to the safety measure is not balanced by the increase in prevalence), one will think the safety measure to be worthwhile (at least as long as the costs of the safety measure are not disproportionate). But if one thinks premarital sex to have negative moral value, then one may well think a safety measure to be counterproductive even if the risk compensation is incomplete—as in my imaginary bullfighting cases.

I think public discussion of things like condoms and sex education could be significantly improved if participants in the discussion were all open and clear about the fact that we should expect some degree of risk compensation—that's just decision theory[note 1]—and were mutually clear on what value they ascribe to the sexual activity itself, independently of the risks in question.

In these kinds of cases, it sounds very attractive to say: "Let's focus on what we all agree on. Being gored, getting AIDS and teen pregnancy are worth preventing." But a public policy that succeeds at improving the outcomes we have consensus on can still be on balance harmful, as my (made up) bullfighting example shows.

Thursday, October 4, 2012

A dialog on rhetoric, autonomy and original sin

L: Rhetorical persuasion does not track truth in the way that good arguments do. The best way for us to collectively come to truth is well-reasoned arguments presented in a dry and rigorous way, avoiding rhetorical flourishes. Rhetoric makes weaker arguments appear stronger than they are and a practice of giving rhetorically powerful arguments can make stronger arguments appear weaker.

R: Rhetoric appeals to emotions and emotions are truth-tracking, though their reliability, except in the really virtuous individual, may not be high. So I don't believe that rhetorical persuasion does not track truth. But I will grant it for our conversation, L. Still, you're forgetting something crucial. People have an irrational bias against carefully listening to arguments that question their own basic assumptions. Rhetoric and other forms of indirect argumentation sneak in under the radar of one's biases and make it possible to convince people of truths that otherwise they would be immune to.

L: Let's have the conversation about the emotions on another day. I suspect that even if emotions are truth-tracking, in practice they are quite unreliable except in the very virtuous, and it is not the very virtuous that you are talking of convincing. I find your argument ethically objectionable. You are placing yourself intellectually over other people, taking them to have stupid biases, sneaking under their guard and riding roughshod over their autonomy.

R: That was rhetoric, not just argument!

L: Mea culpa. But you see the argumentative point, no?

R: I do, and I agree it is a real worry. But given that there is no other way of persuading not very rational humans, what else can we do?

L: But there are other ways of persuading them. We could use threats or brainwashing.

R: But that would be wrong!

L: This is precisely the point at issue. Threats or brainwashing would violate autonomy. You seemed to grant that rhetorical argument does so as well. So it should be wrong to convince by rhetorical argument just as much as by threats or brainwashing.

R: But it's good for someone to be persuaded of the truth when they have biases that keep them from truth.

L: I don't dispute that. But aren't you then just paternalistically saying that it's alright to violate people's autonomy for their own good?

R: I guess so. Maybe autonomy isn't an absolute value, always to be respected.

L: So what objection do you have to convincing people of the truth by threat or brainwashing?

R: Such convincing—granting for the sake of argument that it produces real belief—would violate autonomy too greatly. I am not saying that every encroachment on autonomy is justified, but only that the mild encroachment involved in couching one's good arguments in a rhetorically effective form is.

L: I could pursue the question whether you shouldn't by the same token say that for a great enough good you can encroach on autonomy greatly. But let me try a different line of thought. Wouldn't you agree that it would be an unfortunate thing to use means other than the strength of argument to convince someone of a falsehood?

R: Yes, though only because it is unfortunate to be convinced of a falsehood. In other words, it is no more unfortunate than being convinced of a falsehood by means of strong but ultimately unsound or misleading arguments.

L: I'll grant you that. But being convinced by means of argument tracks truth, though imperfectly. Being convinced rhetorically does not.

R: It does when I am convincing someone of a truth!

L: Do you always try to convince people of truths?

R: I see what you mean. I do always try to convince people of what I at the time take to be the truth—except in cases where I am straightforwardly and perhaps wrongfully deceitful, sinner that I am—but I have in the past been wrong, and there have been some times when what I tried to convince others of has been false.

L: Don't you think that some of the things you are now trying to convince others of will fall in the same boat, though of course you can't point out which they are, on pain of self-contradiction?

R: Yes. So?

L: Well, then, when you strive to convince someone by rhetorical means of a falsehood, you are more of a spreader of error than when you try to do so by means of dry arguments.

R: Because dry arguments are less effective?

L: No, because reasoning with dry arguments is more truth conducive. Thus, when you try to convince someone of a falsehood by means of a dry argument, it is more likely that you will fail for truth-related reasons—that they will see the falsehood of one of your premises or the invalidity of one of your inferences. Thus, unsound arguments will be more likely to fail to convince than sound arguments will be. But rhetoric can as easily convince of falsehood as of truth.

R: I know many people who will dispute the truth conduciveness of dry argument, but I am not one of them—I think our practices cannot be explained except by thinking there is such conduciveness there. But I could also say that rhetorical argument is truth conducive in a similar way. The truth when attractively shown forth is more appealing than a rhetorically dressed up falsehood.

L: Maybe. But we had agreed to take for granted in our discussion that rhetorical persuasion is not truth tracking.

R: Sorry. It's easy to forget yourself when you've granted a falsehood for the sake of discussion. Where were we?

L: I said that reasoning with dry arguments is more truth conducive, and hence runs less of a risk of persuading people of error.

R: Is it always wrong to take risks?

L: No. But the social practice of rhetorical presentation of arguments—or, worse, of rhetorical non-argumentative persuasion—is less likely to lead to society figuring out the truth on controversial questions.

R: Are you saying that we should engage in those intellectual practices which, when practiced by all, are more likely to lead to truth?

L: I am not sure I want to commit myself to this in all cases, but in this one, yes.

R: I actually think one can question your claim about social doxastic utility. Rhetorical persuasion leads to a greater number of changes of mind. A society that engages in practices of rhetorical persuasion is likely to have more in the way of individual belief change, as dry arguments do not in fact convince. But a society with more individual belief change might actually be more effective at coming to the truth, since embodying different points of view in the same person at different times can lead to a better understanding of the positions and ultimately a better rational decision between them. We could probably come up with some interesting computational social epistemology models here.

L: You really think this?

R: No. But it seems no less likely to be correct than your claim that dry argument is a better social practice truth-wise.

L: Still, maybe there is a wager to be run here. Should you engage in persuasive practices here that (a) by your own admission negatively impact the autonomy of your interlocutors and (b) are no more likely than not to lead to a better social epistemic state?

R: So we're back to autonomy?

L: Yes.

R: But as I said I see autonomy not as an absolute value. If I see that a person is seriously harming herself through her false beliefs, do I not have a responsibility to help her out—the Golden Rule and all that!—even if I need to get around her irrational defenses by rhetorical means?

L: But how do you know that you're not the irrational one, about to infect an unwary interlocutor?

R: Are you afraid of being infected by me?

L: I am not unwary. Seriously, aren't you taking a big risk in using rhetorical means of persuasion, in that such means make you potentially responsible for convincing someone, in a way that side-steps some of her autonomy, of a falsehood? If by argument you persuade someone, then she at least has more of a responsibility here. But if you change someone's mind by rhetoric—much as (though to a smaller degree) when you do so by threat or brainwashing—the responsibility for the error rests on you.

R: That is a scary prospect.

L: Indeed.

R: But sometimes one must do what is scary. Sometimes love of neighbor requires one to take on responsibilities, to take risks, to help one's neighbor out of an intellectual pit. Taking the risks can be rational and praiseworthy. And sometimes one can be rationally certain, too.

L: I am not sure about the certainty thing. But it seems that your position is now limited. That it is permissible to use rhetorical persuasion when sufficiently important goods of one's neighbor are at stake that the risk of error is small relative to these.

R: That may be right. Thus, it may be right to teach virtue or the Gospel by means that include rhetorical aspects, but it might be problematic to rhetorically propagate those aspects of science or philosophy that are not appropriately connected to virtue or the Gospel. Though even there I am not sure. For those things that aren't connected to virtue or the Gospel don't matter much, and error about them is not a great harm, so the risks may still be doable. But you have inclined me to think that one may need a special reason to engage in rhetoric.

L: Conditionally, of course, on our assumption that rhetoric is not truth-conducive in itself.

R: Ah, yes, I almost forgot that.

Friday, December 31, 2010

A stupid way to invest

Here's a fun little puzzle for introducing some issues in decision theory. You want to invest a sum of money that is very large for you (maybe it represents all your present savings, and you are unlikely to save that amount again), but not large enough to perceptibly affect the market. A reliable financial advisor suggests you invest, in a diversified way, in n different stocks, s_1, ..., s_n, putting x_i dollars in s_i. You think to yourself: "That's a lot of trouble. Here is a simpler solution that has the same expected monetary value, and is less work. I will choose a random number j between 1 and n, such that the probability of choosing j=i is proportional to x_i (i.e., P(j=i) = x_i/(x_1+...+x_n)). Then I will put all my money in s_j." It's easy to check that this method does have the same expected value as the diversified strategy. But it's obvious that this is a stupid way to invest. The puzzle is: Why is this stupid?
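
Here is a minimal simulation sketch of the two strategies (the number of stocks, the equal allocation, and the lognormal return model are assumptions made purely for illustration): the expected payoffs come out essentially the same, but the single-stock lottery has a far wider spread.

```python
# Sketch: diversified portfolio vs. randomized single-stock bet with the same expected value.
# The return model (i.i.d. lognormal per stock) and the equal allocation are illustrative
# assumptions, not part of the original puzzle.
import random
import statistics

random.seed(2)
n, savings = 10, 100_000
weights = [1 / n] * n                        # the x_i, normalized; equal for simplicity

def gross_return():
    return random.lognormvariate(0.0, 0.3)   # one stock's gross return over the period

diversified, lottery = [], []
for _ in range(20_000):
    returns = [gross_return() for _ in range(n)]
    diversified.append(savings * sum(w * r for w, r in zip(weights, returns)))
    j = random.choices(range(n), weights=weights)[0]   # pick s_j with probability x_j/(x_1+...+x_n)
    lottery.append(savings * returns[j])

for name, vals in (("diversified", diversified), ("single-stock lottery", lottery)):
    print(name, round(statistics.mean(vals)), round(statistics.stdev(vals)))
# The two means come out close to each other; the lottery's spread is several times larger.
```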

Well, one standard answer is this. This is stupid because utility is not proportional to dollar amount. If the sum of money is large for you, then the disutility of losing everything is greater than the utility of doubling your investment. If that doesn't satisfy, then the second standard answer is that this is an argument for why we ought to be risk averse.

Maybe these answers are good. I don't have an argument that they're not. But there is another thought that from time to time I wonder about. We're talking of what is for you a very large sum of money. Now, the justification for expected-utility maximization is that in the long run it pays. But here we are dealing with what is most likely a one-time decision. So maybe the fact that in the long run it pays to use the simpler randomized investment strategy is irrelevant. If you expected to make such investments often, the simpler strategy would, indeed, be the better one—and would eventually result in a diversified portfolio. But for a one-time decision, things may be quite different. If so, this is interesting—it endangers Pascal's Wager, for instance.

Wednesday, December 15, 2010

Risk reduction policies

The following policy pattern is common.  There is a risky behavior which a portion of a target population engages in.  There is no consensus on the benefits of the behavior to the agent, but there is a consensus on one or more risks to the agent.  Two examples:
  • Teen sex: Non-marital teen sex, where the risks are non-marital teen pregnancy and STIs.
  • Driving: Transportation in motor vehicles that are not mass transit, where the risks are death and serious injury.
In both cases, some of us think that the activity is beneficial when one brackets the risks, while others think the activity is harmful.  But we all agree about the harmfulness of non-marital teen pregnancy, STIs, death and serious injury.

In such cases, it is common for a "risk-reduction" policy to be promoted.  What I shall (stipulatively) mean by that is a policy whose primary aim is to decrease the risk of the behavior to the agent rather than to decrease the incidence of the behavior.  For instance: condoms and sexual education not centered on the promotion of abstinence in the case of teen sex; seat-belts and anti-lock brakes in the case of driving.  I shall assume that it is uncontroversial that the policy does render the behavior less risky.  

One might initially think--and some people indeed do think this--that it is obvious, a no-brainer, that decreasing the risks of the behavior brings benefits.  There are risk-reduction policies that nobody opposes.  For instance, nobody opposes the development of safer brakes for cars.  But other risk-reduction policies, such as the promotion of condoms to teens, are opposed.  And sometimes the opponents make the argument that the risk-reduction policy will promote the behavior in question, and hence it is not clear that the total social risk will decrease.  It is not uncommon for the supporters of the risk-reduction policy to think the policy's opponents "just don't care", are stupid, and/or are motivated by something other than concerns about the uncontroversial social risk (and indeed the last point is often the case).  For instance, when conservatives worry that the availability of contraception might increase teen pregnancy rates, they are thought to be crazy or dishonest.

I will show, however, that sometimes it makes perfect sense to oppose a risk-reduction policy on uncontroversial social-risk principles.  There are, in fact, cases where decreasing the risk involved in the behavior increases total social risk by increasing the incidence.  But there are also cases where decreasing the risk involved in the behavior decreases total social risk.  

On some rough but plausible assumptions, together with the assumption that the target population is decision-theoretic rational and knows the risks, there is a fairly simple rule.  In cases where a majority of the target population is currently engaging in the behavior, risk reduction policies do reduce total social risk.  But in cases where only a minority of the target population is currently engaging in the behavior, moderate reductions in the individual risk of the behavior increase total social risk, though of course great reductions in the individual risk of the behavior decrease total social risk (the limiting case is where one reduces the risk to zero).

Here is how we can see this.  Let r be the individual uncontroversial risk of the behavior.  Basically, r=ph, where p is the probability of the harm and h is the disutility of the harm (or a sum over several harms).  Then the total social risk, where one calculates only the harms to the agents themselves, is T(r)=Nr, where N is the number of agents engaging in the harmful behavior.  A risk reduction policy then decreases r, either by decreasing the probability p or by decreasing the harm h or both.  One might initially think that decreasing r will obviously decrease T(r), since T(r) is proportional to r.  But the problem is that N is also dependent on r: N=N(r).  Moreover, assuming the target population is decision-theoretic rational and assuming that the riskiness is not itself counted as a benefit (both assumptions are in general approximations), N(r) decreases as r increases, since fewer people will judge the behavior worthwhile the more risky it is.  Thus, T(r) is the product of two factors, N(r) and r, where the first factor decreases as r increases and the second factor increases as r increases.  

We can also say something about two boundary cases.  If r=0, then T(r)=0.  So reducing individual risk to zero is always a benefit with respect to total social risk.  Of course any given risk-reduction policy may also have some moral repercussions--but I am bracketing such considerations for the purposes of this analysis.  But here is another point.  Since presumably the perceived benefits of the risky behavior are finite, if we increase r to infinity, eventually the behavior will be so risky that it won't be worth it for anybody, and so N(r) will be zero for large r and hence T(r) will be zero for large r.  So, the total social risk is a function that is always non-negative (r and N(r) are always non-negative), and is zero at both ends.  Since for some values of r, T(r)>0, it follows that there must be ranges of values of r where T(r) decreases as r decreases and risk-reduction policies work, and other ranges of values of r where T(r) increases as r decreases and risk-reduction policies are counterproductive.

To say anything more precise, we need a model of the target population.  Here is my model.  The members of the population targeted by the proposed policy agree on the risks, but assign different expected benefits to the behavior, and these expected benefits do not depend on the risk.  Let b be the expected benefit that a particular member of the target population assigns to the activity.  We may suppose that b has a normal distribution with standard deviation s around some mean B.  Then a particular agent engages in the behavior if and only if her value of b exceeds r (I am neglecting the boundary case where b=r, since given a normal distribution of b, this has zero probability).  Thus, N(r) equals the number of agents in the population whose values of b exceed r.  Since the values of b are normally distributed with pre-set mean and standard deviation, we can actually calculate N(r).  It equals (N/2)erfc((r-B)/s), where erfc is the complementary error function, and N is the population size.  Thus, T(r)=(rN/2)erfc((r-B)/s).

Let's plug in some numbers and do a graph.  Suppose that the individual expected benefit assigned to the behavior has a mean of 1 and a standard deviation of 1.  In this case, 84% of the target population thinks that when one brackets the uncontroversial risk, the behavior has a benefit, while 16% think that even apart from the risk, the behavior is not worthwhile.  I expect this is not such a bad model of teen attitudes towards sex in a fairly secular society.  Then let's graph T(r) (on the y-axis it's normalized by dividing by the total population count N--so it's the per capita risk in the target population) versus r (on the x-axis).
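
Since the interactive graph itself is not reproduced here, the curve can be recomputed directly from the formula above. This is a minimal sketch (it takes the erfc expression at face value, with B = 1 and s = 1 as just specified):

```python
# Sketch: per-capita total social risk T(r)/N = (r/2)*erfc((r - B)/s) with B = 1, s = 1,
# following the formula exactly as written above.
from math import erfc

B, s = 1.0, 1.0
rs = [i / 1000 for i in range(5001)]                  # grid of r values from 0 to 5
per_capita = [(r / 2) * erfc((r - B) / s) for r in rs]

peak_r = rs[per_capita.index(max(per_capita))]
engaging_at_peak = 0.5 * erfc((peak_r - B) / s)       # fraction of the population engaging
print(round(peak_r, 2), round(engaging_at_peak, 2))   # roughly 0.94 and 0.53, i.e. the peak
                                                      # and the 53% figure discussed below

# The r = 2 example discussed below: per-capita risk is about 0.16, and to beat it on the
# left side of the peak, r has to drop to roughly 0.18 -- about an 11-fold reduction.
print(round(2 * 0.5 * erfc((2 - B) / s), 2))          # about 0.16
```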

We can see some things from the graph.  Recall that the average benefit assigned to the activity is 1.  Thus, when the individual risk is 1, half of the target population thinks the benefit exceeds the risk and hence engages in the activity.  The graph peaks at r=0.95.  At that point one can check from the formula for N(r) that 53% of the target population will be engaging in the risky activity.

We can see from the graph that when the individual risk is between 0 and 0.95, then decreasing the risk r always decreases the total social risk T(r).  In other words we get the heuristic that when a majority (53% or more for my above numbers) of the members of the population are engaging in the risky behavior, we do not have to worry about increased social risk from a risk-reduction policy, assuming that the target population does not overestimate the effectiveness of the risk-reduction policy (remember that I assumed that the actual risk rate is known).

In particular, in the general American adult population, where most people drive, risk-reduction policies like seat-belts and anti-lock brakes are good.  This fits with common sense.

On the other hand, when the individual risk is between 0.95 and infinity, so that fewer than 53% of the target population is engaging in the risky behavior, a small decrease in the individual risk will increase T(r) by moving one closer to the peak, and hence will be counterproductive.

However, a large enough decrease in the individual risk will still put one on the left side of the peak, and hence could be productive.  But the decrease may have to be quite large.  For instance, suppose that the current individual risk is r=2.  In that case, 16% of the target population is engaging in the behavior (since r=2 is one standard-deviation away from the mean benefit assignment).  The per-capita social risk is then 0.16.  For a risk-reduction policy to be effective, it would then have to reduce the individual risk so that it is far enough to the left of the peak that the per-capita social risk is below 0.16.  Looking at the graph, we can see that this would require moving r from 2 to 0.18 or below.  In other words, we would need a policy that decreases individual risks by a factor of 11.

Thus, we get a heuristic.  For risky behavior that no more than half of the target population engages in, incremental risk-reduction (i.e., a small decrease in risk) increases the total social risk.  For risky behavior that no more than about 16% of the target population engages in, only a risk-reduction method that reduces individual risk by an order of magnitude will be worthwhile.

For comparison, condoms do not offer an 11-fold decrease in pregnancy rates.  The typical condom pregnancy rate in the first year of use is about 15%;  the typical no-contraceptive pregnancy rate is about 85%.  So condoms reduce the individual pregnancy risks only by a factor of about 6.

This has some practical consequences in the teen sex case.  Of unmarried 15-year-old teens, only 13% have had sex.  This means that risk-reduction policies aimed at 15-year-olds are almost certainly going to be counterproductive in respect of reducing risks, unless we have some way of decreasing the risks by a factor of more than 10, which we probably do not.  In that population, the effective thing to do is to focus on decreasing the incidence of the risky behavior rather than decreasing the risks of the behavior.

In higher age groups, the results may be different.  But even there, a one-size-fits-all policy is not optimal.  The sexual activity rates differ from subpopulation to subpopulation.  The effectiveness with regard to the reduction of social risk depends on details about the target population.  This suggests that the implementation of risk-reduction measures might be best assigned to those who know the individuals in question best, such as parents.

In summary, given my model:
  • When a majority of the target population engages in the risky behavior, both incremental and significant risk-reduction policies reduce total social risk.
  • When a minority of the target population engages in the risky behavior, incremental risk-reduction policies are counterproductive, but sufficiently effective non-incremental risk-reduction policies can be effective.
  • When a small minority--less than about 16%--engages in the risky behavior, only a risk-reduction policy that reduces the individual risk by an order of magnitude is going to be effective;  more moderately successful risk-reduction policies are counterproductive.

Wednesday, January 14, 2009

Risks

Consider the following ordinary form of argument. Risk R1 is less than risk R0. Risk R0 is a reasonable risk to take. Therefore, risk R1 is as well. Typically, R0 is a risk that reasonable people routinely take, without worrying about it. Thus, we might be told that flying is safer than driving, and since (this premise is usually implicit) it is reasonable to risk driving, it is reasonable to risk flying.

This is a bad form of argument as it stands, for two reasons. The first is that it needs to be established that the benefits associated with taking risk R1 are no less than those of risk R0. The second is that even if the benefit claim is true, this only shows that it would be rational to take R1 instead of R0. It does not show that it would be rational to take R1 in addition to R0. (Thus, it would be reasonable to fly regularly instead of driving regularly. But given that one is driving regularly, it does not follow from the risk-comparison alone that it is reasonable also to fly.)

Here's an example to show the second failure. The fatality rate for Mount Everest climbs is apparently about 9%. Let us suppose that Kenya is a single woman with no family, who does not do any job for which she is essential (e.g., she is not an irreplaceable top cancer researcher). It might (I actually doubt it) be reasonable for Kenya to climb Mount Everest, for the sake of the various goods instantiated by the climb, despite the 9% risk of death. But if the above argument-form were sound, it would be likewise reasonable for her to additionally engage in another activity that carries an 8.9% risk of death and has similar benefits. But the argument could then be iterated. If there was some third activity that carried an 8.8% risk of death, it would be reasonable to additionally do that. Therefore, by repeated application of the argument form, we would conclude that if it is reasonable for Kenya to climb Mount Everest, it would be reasonable for her to climb Mount Everest and do A, B, C, D, E, F, G, H and I, each of which has a slightly lower risk of death than the previous. Let's say the risks are 9.0%, 8.9%, 8.8% and so on. Assuming independence (not quite right), her chance of surviving all ten activities is less than 47%. But it seems that it is only reasonable to engage in a series of actions that one is less likely to survive than not for the gravest of reasons (such as saving someone's life). The sorts of reasons involved in the climb of Mount Everest are not like that, and even having the benefits ten times over is not worth it.
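
A quick check of the compound survival figure (assuming independence, with death risks stepping down from 9.0% to 8.1% across the ten activities):

```python
# Quick check: probability of surviving all ten activities, assuming independence,
# with death risks stepping down from 9.0% to 8.1% in increments of 0.1%.
risks = [0.090 - 0.001 * k for k in range(10)]   # 0.090, 0.089, ..., 0.081

survival = 1.0
for p in risks:
    survival *= 1 - p

print(round(survival, 2))   # about 0.41, comfortably below 47%
```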

A related, but I think not identical, issue is that benefits need not be additive. The benefit of both A and B need not be the sum of the benefits of A and of B in isolation. It might be more, or it might be less.