
Wednesday, October 9, 2024

Proportionality and deterrence

There are many contexts where a necessary condition of the permissibility of a course of action is a kind of proportionality between the goods and bads resulting from the course of action. (If utilitarianism is true, then given a utilitarian understanding of the proportionality, it’s not only necessary but sufficient for permissibility.) Two examples:

  • The Principle of Double Effect says it is permissible to do things that are foreseen to have a basic evil as an effect, if that evil is not intended, and if proportionality between the evil effect and the good effects holds.

  • The conditions for entry into a just war typically include both a justice condition and a proportionality condition (sometimes split into two conditions, one about likely consequences of the war and the other about the probability of victory).

But here is an interesting and difficult kind of scenario. Before giving a general formulation, consider the example that made me think about this. Country A has a bellicose neighbor B. However, B’s regime, while bellicose, is not sufficiently evil that on a straightforward reading of proportionality it would be worthwhile for A to fight back if invaded. Sure, one would lose sovereignty by not fighting back, but B’s track record suggests that the individual citizens of A would maintain the freedoms that matter most (maybe this is what it would be like to be taken over by Alexander the Great or Napoleon—I don’t know enough history to say), while a war would obviously be very bloody. However, suppose that a policy of not fighting back would likely result in an instant invasion, while a policy of fighting back would have a high probability of resulting in peace for the foreseeable future. We can then imagine that the benefits of likely avoiding even a non-violent takeover by B outweigh the small risk that, despite A’s having a policy of armed resistance, B would still invade.

The general case is this: We have a policy that is likely to prevent an unhappy situation, but following through on the policy violates a straightforward reading of proportionality if the unhappy situation eventuates.
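Here is a toy expected-value sketch of that structure in code. The numbers are purely hypothetical (nothing in the scenario fixes them); they just show how a policy can be best ex ante while following through violates a straightforward reading of proportionality ex post.

```python
# Hypothetical numbers (illustration only); costs in arbitrary "badness" units.
P_INVASION_IF_NO_FIGHT_POLICY = 0.95  # B invades almost surely absent deterrence
P_INVASION_IF_FIGHT_POLICY = 0.05     # a credible fight-back policy deters B
COST_TAKEOVER = 100.0                 # non-violent loss of sovereignty
COST_WAR = 1000.0                     # a very bloody war: far worse than takeover

# Ex ante, the fight-back policy minimizes expected badness:
ev_no_fight = P_INVASION_IF_NO_FIGHT_POLICY * COST_TAKEOVER  # 95.0
ev_fight = P_INVASION_IF_FIGHT_POLICY * COST_WAR             # 50.0
assert ev_fight < ev_no_fight

# Ex post, if B invades anyway, following through means choosing COST_WAR over
# COST_TAKEOVER, which a straightforward proportionality reading forbids:
assert COST_WAR > COST_TAKEOVER
```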

One solution is to take into account the value of following through on the policy with respect to one’s credibility in the future. But in some cases this will be a doubtful justification. Consider a policy of fighting back against an invader—at least initially—even if there is no chance of victory. There are surely many cases of bellicose countries that could successfully take over a neighbor, but judge that the costs of doing so are too high given the expected resistance. But if the neighbor has such a policy, then in case the invasion nonetheless eventuates, whatever is done, sovereignty will be lost, and the policy will be irrelevant in the future. (One might have some speculation about the benefits for other countries of following through on the policy, but that’s very speculative.)

One line of thought on these kinds of cases is that we need to forgo such policies, despite their benefits. One can’t permissibly act on them, so one can’t have them, and that’s that. This is unsatisfying, but I think there is a serious chance that this is right.

One might think that the best of both worlds is to make it seem like one has the policy, but not in fact have it. A problem with this is that it might involve lying, and I think lying is wrong. But even aside from that, in some cases this may not be practicable. Imagine training an army to defend one’s country, and then having a secret plan, known only to a very small number of top commanders, that one will surrender at the first moment of an invasion. Can one really count on that surrender? The deterrent policy is more effective the fiercer and more patriotic the army, but those are precisely the factors likely to make the army fight on despite the surrender at the top.

Another move is this. Perhaps proportionality itself takes into account not just the straightforward computation of costs and benefits, but also the value of remaining steadfast in reasonably adopted policies. I find this somewhat attractive, but this approach has to have limits, and I don’t know where to draw them. Suppose one has invented a weapon which will kill every human being in enemy territory. Use of this weapon, with a Double Effect style intention of killing only the enemy soldiers, is clearly unjustified no matter what policies one might have, but a policy to use this weapon might be a nearly perfect protection against invasion. (Obviously this connects with the question of nuclear deterrence.) I suppose what one needs to say is that the importance of steadfastness in policies affects how proportionality evaluations go, but should not be decisive.

I find myself pulled between the strict view that we should not have policies acting on which would violate a straightforward reading of proportionality, and the view that we should abandon the straightforward reading of proportionality and take into account—to a degree that is difficult to weigh—the value of following through on policies.

Thursday, August 1, 2024

Double effect and causal remoteness

I think some people feel that more immediate effects count for more than more remote ones in moral choices, including in the context of the Principle of Double Effect. I used to think this is wrong, as long as the probabilities of effects are the same (typically more remote effects are more uncertain, but we can easily imagine cases where this is not so). But then I thought of two strange trolley cases.

In both cases, the trolley is heading for a track with Fluffy the cat asleep on it. The trolley can be redirected to a second track on which an innocent human is sleeping. Moreover, in a nearby hospital there are five people who will die if they do not receive a simple medical treatment. There is only one surgeon available.

But now we have two cases:

  1. All five people love Fluffy very much and have specified that they consent to life-saving treatment if and only if Fluffy is alive. The surgeon refuses to perform surgery that the patients have not consented to.

  2. The surgeon loves Fluffy and after hearing of the situation has informed you that they will perform surgery if and only if Fluffy is alive.

In both cases, I am rather uncomfortable with the idea of redirecting the trolley. But if we don’t take immediacy into account, both cases seem straightforward applications of Double Effect. The intention in both cases is to save five human lives by saving Fluffy, with the death of the person on the second track being an unintended side-effect. Proportionality between the good and the bad effects seems indisputable.

However, in both cases, redirecting the trolley leads much more directly to the death of the one person than to the saving of the five. The causal chain from redirection to life-saving in both cases is mediated by the surgeon’s choice to perform surgery. (In Case 1, the surgeon is reasonable and in Case 2, the surgeon is unreasonable.) So perhaps in considerations of proportionality, the more immediate but smaller bad effect (the death of the person on the side-track) outweighs the more remote but larger good effect (the saving of the five).

I can feel the pull of this. Here is a test. Suppose we make the death of the sixth innocent person equally indirect, by supposing instead that Rover the dog is on the second track, and is connected to someone’s survival in the way that Fluffy is connected to the survival of the five. In that case, it seems pretty plausible that you should redirect. (Though I am not completely certain, because I worry that in redirecting the trolley even in this case you are unduly cooperating with immoral people—the five people who care more about a cat than about their own human dignity, or the crazy surgeon.)

If this is right, how do we measure the remoteness of causal chains? Is it the number of independent free choices that have to be made, perhaps? That doesn’t seem quite right. Suppose that we have a trolley heading towards Alice who is tied to the track, and we can redirect the trolley towards Bob. Alice is a surgeon needed to save ten people. Bob is a surgeon needed to save one. However, Alice works in a hospital that has vastly more red tape, and hence for her to save the ten people, thirty times as many people need to sign off on the paperwork. But in both cases the probabilities of success (including the signing off on the paperwork) are the same. In this case, maybe we should ignore the red tape, and redirect?

So the measure of the remoteness of causal chains is going to have to be quite complex.

All this confirms my conviction that the proportionality condition in Double Effect is much more complex than it initially seems.

Thursday, July 25, 2024

Aggression and self-defense

Let’s assume that lethal self-defense is permissible. Such self-defense requires an aggressor. There is a variety of concepts of an aggressor for purposes of self-defense, depending on what constitutes aggression. Here are a few accounts:

  1. voluntarily, culpably and wrongfully threatening one’s life

  2. voluntarily and wrongfully threatening one’s life

  3. voluntarily threatening one’s life

  4. threatening, voluntarily or involuntarily, one’s life.

(I am bracketing the question of less serious threats, where health but not life is threatened.)

I want to focus on accounts of self-defense on which aggression is defined by (4), namely where there is no mens rea requirement at all on the threat. This leads to a very broad doctrine of lethal self-defense. I want to argue that it is too broad.

First note that it is obvious that a criminal is not permitted to use lethal force against a police officer who is legitimately using lethal force against them. This implies that even (3) is too lax an account of aggression for purposes of self-defense, and a fortiori (4) is too lax.

Second, I will argue against (4) more directly. Imagine that Alice and Bob are locked in a room together for a week. Alice has just been infected with a disease which would do her no harm but would kill Bob. If Alice dies within the next day, the disease will not yet have become contagious, and Bob’s life will be saved. Otherwise, Bob will die. By (4), Bob can deem Alice an aggressor simply by her being alive—she threatens his life. So on an account of self-defense where (4) defines aggression, Bob is permitted to engage in lethal self-defense against Alice.

My intuitions say that this is clearly wrong. But not everyone will see it this way, so let me push on. If Bob is permitted to kill Alice because aggression doesn’t have a mens rea requirement, Alice is also permitted to lethally fight back against Bob, despite the fact that Bob is acting permissibly in trying to kill her. (After all, Alice was also acting permissibly in breathing, and thereby staying alive and threatening Bob.) So the result of a broad view of self-defense against any kind of threat, voluntary or not, is situations where two people will permissibly engage in a fight to the death.

Now, it is counterintuitive to suppose that there could be a case where two people are both acting justly in a fight to the death, apart from cases of non-moral error (say, each thinks the other is an attacking bear).

Furthermore, the result of such a situation is that basically the stronger of the two gets to kill the weaker and survive. The effect is not literally might makes right, but is practically the same. This is an implausibly discriminatory setup.

Third, consider a more symmetric variant. Two people are trapped in a spaceship that has only air enough for one to survive until rescue. If (4) is the right account of aggression, then simply by breathing each is an aggressor against the other. This is already a little implausible. Two people in a room breathing is not what one normally thinks of as aggression. Let me back this intuition up a little more. Suppose that there is only one person trapped in a spaceship, and there is not enough air to survive until rescue. If in the case of two people each was engaging in aggression against the other simply by virtue of removing oxygen from air to the point where the other would die, in the case of one person in the spaceship, that person is engaging in aggression against themselves by removing oxygen from air to the point where they themselves will die. But that’s clearly false.

I don’t know exactly how to define aggression for purposes of self-defense, but I am confident that (4) is much too broad. I think the police officer and criminal case shows that (3) is too broad as well. I feel pulled towards both (1) and (2), and I find it difficult to resolve the choice between them.

Thursday, May 2, 2024

From fetal pain to the impermissibility of abortion

At some point in pregnancy it is widely acknowledged that fetuses start to feel pain. Estimates of this point vary from around seven to thirty weeks of gestation.

We cannot directly conclude from the fact that some fetus can feel pain that killing that fetus is impermissible. For it seems permissible, given good reason, to humanely kill a conscious non-human animal. But perhaps there is an indirect argument. I want to try out one.

It has been argued that if the fetus is the same individual as the adult person that the fetus would grow into, then it is wrong to kill the fetus for the same reason that it is wrong to kill the adult: the victim is the same, and no more deserving of death, while the harm of death is greater (the fetus is deprived of a greater chunk of life).

But if a fetus can feel pain, then this offers significant support for the hypothesis that the fetus is the same individual as the resultant adult. Imagine the fetus has a constant minor chronic pain, is carried to term, and grows into an adult, without ever any relief from the pain. The adult will then feel the pain. If the fetus is not the same individual as the adult, there are two possibilities at the time of adulthood:

  1. There are two beings feeling pain: the adult and the grown-up fetus.

  2. At some point the grown-up fetus had perished and was replaced by a new individual feeling pain.

Option (1) seems crazy: if I have a headache while sitting alone on the sofa, there is only one entity in pain on the sofa, namely me, rather than me and some grown-up fetus. Option (2) is also rather implausible. On our hypothesis we have the continuous presence of a brain state correlated with pain, and yet allegedly at some point the individual with the pain perishes and a new individual inherits the brain with the pain. That doesn’t seem right.

If we reject both (1) and (2), we have to conclude that the fetus in pain is the same individual as the adult that it grows up into. And thus we conclude that at least once fetuses are capable of pain, abortion is wrong.

This argument doesn’t say anything about what happens prior to the possibility of fetal pain. I think that is still the same individual, but that requires another argument.

Tuesday, April 30, 2024

Killing and consent

I think it’s wrong for us to kill innocent people. Some fellow deontologists, however, think this prohibition should be restricted to say that it’s wrong for us to kill nonconsenting innocent people. These thinkers hold that it is both permissible to consent to being killed and to kill those who have given such consent (except in special cases, such as when the victim has overriding unfulfilled duties to others).

I want to argue for a curious consequence of this restriction of the prohibition of murder while maintaining deontology.

By “sacrificing one’s life to save lives”, I will mean actions which save lives but have one’s own death as an unintended but foreseen side-effect. For instance, jumping in front of a train to push a child out of the way. Everyone agrees it’s typically praiseworthy, and hence permissible, to sacrifice your life to save an innocent life. Most people, however, will say that it is supererogatory to do so. It is brave to do it, but not cowardly to omit it.

But now consider cases where by sacrificing your life you can save a larger number of innocent lives, say a dozen. It is pretty plausible that it would be cowardly to refrain from the sacrifice, and I suspect it would be wrong to refrain except in special cases (such as when you have just figured out how to cure cancer). But I agree that the point is not completely clear to me. However, it is quite clear to me that it would be wrong to refuse to sacrifice your life to save a dozen people when that dozen includes one’s spouse and one’s children (again, with some very rare exceptions).

Now let’s assume the view that it is permissible to consent to being killed and permissible to kill the consenting. Consider a classic deontology case: a terrorist says that if you don’t kill Bob, a dozen other innocent people will be killed. Add that the dozen people include Bob’s spouse and children. If it’s permissible to kill the consenting, then if Bob were to consent, it would be permissible to kill him. But Bob expressly and clearly refuses consent, despite his believing that it would be permissible to consent.

Assuming that it is morally required to sacrifice your life to save a dozen innocent lives when these lives include your spouse and children, it is very difficult to deny that if it is permissible to consent to being killed, in a case like the above, Bob would be morally required to consent to being killed. Granted, the sacrifice case does not include consenting to one’s death, while the terrorist case does. But as long as we have granted that it is permissible to consent to one’s death, the difference does not seem significant. Thus Bob is morally required to consent to being killed, given our assumptions about consensual killing. Bob’s refusal of consent is thus morally wrong. And very badly so: it causes eleven more lives to be lost, including his very own spouse and children. His refusal is about as bad as mass murder!

It seems that Bob is far from innocent. On the contrary, he is guilty of refusing to save the lives of eleven people, including his spouse and children. But now it seems that the prohibition against killing the innocent does not apply to Bob, and hence it is permissible—and maybe even obligatory—to kill Bob. If so, then the deontological prohibition on killing the innocent, if restricted to the nonconsenting, has a giant loophole: when enough is at stake, a nonconsenting victim is no longer innocent! Now, maybe, it is only permissible to kill the guilty when one acts on behalf of a state (and when enough is at stake, which it is in this case). But it would still be very strange for a deontologist to think it permissible to kill Bob even should the state authorize it.

This is not a knockdown argument against the restriction of the prohibition of murder to nonconsenting victims. But it is some evidence against the restriction.

Thursday, January 11, 2024

A deontological asymmetry

Consider these two cases:

  1. You know that your freely killing one innocent person will lead to three innocent drowning people being saved.

  2. You know that saving three innocent drowning people will lead to your freely killing one innocent person.

It’s easy to imagine cases like (1). If compatibilism is true, it’s also pretty easy to imagine cases like (2)—we just suppose that your saving the innocent people produces a state of affairs where your psychology gradually changes in such a way that you kill one innocent person. If libertarianism and Molinism are true, we can also get (2): God can reveal to you the conditional of free will.

If libertarianism is true but Molinism is false, it’s harder to get (2), but we can still get it, or something very close to it. We can, for instance, imagine that if you rescue the three people, you will be kidnapped by someone who will offer increasingly difficult to resist temptations to kill an innocent person, and it can be very likely that one day you will give in.

Deontological ethics says that in (1) killing the innocent person is wrong.

Does it say that saving the three innocents is wrong in (2)? It might, but not obviously so. For the action is in itself good, and one might reasonably say that becoming a murderer is a consequence that is not disproportionate to saving the three lives. After all, imagine this variant:

  3. You know that saving three innocent drowning people will lead to a fourth person freely killing one innocent person.

Here it seems that it is at least permissible to save the three innocents. That someone will through a weird chain of events become a murderer if you save the three innocents does not make it wrong to save the three.

I am inclined to think that saving the three is permissible in (2). But if you disagree, change the three to thirty. Now it seems pretty clear to me that saving the drowning people is permissible in (2). But it is still wrong to kill an innocent person to save thirty.

Even on threshold deontology, it seems pretty plausible that the thresholds in (1) and (2) are different. If n is the smallest number such that it is permissible to save n drowning people, at the expense of a side-effect of your eventually killing one innocent, then it seems plausible that n is not big enough to make it permissible to kill one innocent to save n.

So, let’s suppose we have this asymmetry between (1) and (2), with the “three” replaced by some other number as needed (the same one in both statements), so that the action described in (1) is wrong but the one in (2) is permissible.

This then will be yet another counterexample to the project of consequentializing deontology: of finding a utility assignment that renders conclusions equivalent to those of deontology. For the consequences of (1) and (2) are the same, even if one assigns a very big disutility to killing innocents.
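To put the point semi-formally (the notation here is mine and only a sketch):

```latex
% A1 = the killing in (1); A2 = the saving in (2). Both yield the same total
% outcome O: three innocents saved, one innocent freely killed by you.
% For any utility assignment u over outcomes, however large the disutility
% attached to killings:
\[
  u(O(A_1)) = u(O(A_2))
  \quad\Longrightarrow\quad
  A_1 \text{ permissible} \;\leftrightarrow\; A_2 \text{ permissible},
\]
% whereas the asymmetry requires that A2 be permissible and A1 wrong.
```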

Thursday, July 20, 2023

Rachels on doing and allowing

Rachels famously gives us these two cases to argue that the doing–allowing distinction is morally vacuous:

  1. You stand to inherit money from a young cousin, so you push them into a tub so they drown.

  2. You stand to inherit money from a young cousin, and so when you see them drowning in a tub, you don’t pull them out.

The idea is that if there is a doing–allowing distinction, then (1) should be worse than (2), but they both seem equally wicked.

But it’s interesting to notice how things change if you change the reasons from profit to personal survival:

  3. A malefactor informs you that if you don’t push the young cousin into the tub so they drown, you will be shot dead.

  4. A malefactor informs you that if your currently drowning cousin survives, you will be shot dead.

It’s clear that it’s wrong to drown your cousin to save your life. But while it’s praiseworthy to rescue them at the expense of your life, unless you have a special obligation to them beyond cousinage, you don’t do wrong by failing to pull them out. And it seems that the relevant difference between (3) and (4) is precisely that between doing and allowing: you may not execute a drowning to save your life, but you may allow one.

Or consider this variant:

  5. A malefactor informs you that if you don’t push the young cousin into the tub so they drown, two other cousins will be shot dead.

  6. A malefactor informs you that if your currently drowning cousin survives, two other cousins will be shot dead.

I take it that pretty much every non-consequentialist will agree that in (5) it’s wrong to drown your cousin, but everyone (consequentialist or not) will also say that in (6) it’s wrong to rescue your cousin.

So there is very good reason to think there is a morally relevant doing–allowing distinction, and cases similar to Rachels’ show it. At this point it is tempting to diagnose our intuitions about Rachels’ original case as based on the fact that the benefit from your cousin’s death is not great enough to justify allowing the drowning—their death is disproportionately bad relative to the benefit gained—so we want to blame the agent who cares more about their financial good than about the life of their young cousin, and we don’t care whether they are actively or passively killing the cousin.

But things are more complicated. Consider this pair of cases:

  7. Your recently retired cousin has left all their money to famine relief, where it will save fifty lives, but if your cousin survives another ten years their retirement savings will be largely spent and won’t be enough to save any lives. So you push the cousin into the tub to drown them.

  8. Your recently retired cousin has left all their money to famine relief, where it will save fifty lives, but if your cousin survives another ten years their retirement savings will be largely spent and won’t be enough to save any lives. So when your cousin is drowning in the tub, you don’t rescue them.

Now it seems we have proportionality: your cousin’s death is not disproportionately bad given the benefit. Yet I have the strong intuition that it’s both wrong to drown them and to fail to save them. I can’t confidently put my finger on what is the relevant difference between (8), on the one hand, and (4) and (6), on the other hand.

But maybe it’s this. In (8), your rescue of your cousin isn’t a cause of the death of the people. The cause of their death is famine. It’s just that you have failed to prevent their death. On the other hand, in (4) and (6), if you rescue, you have caused your own death or the death of the two other cousins, admittedly by means of the malefactor’s wicked agency. In (8), rescuing blocks prevention of deaths; in (4) and (6), rescuing causes deaths. Blocking prevention is different from causing.

This is tricky, though. For drowning someone can be seen as blocking prevention of death. For their breathing prevents death and drowning blocks the breathing!

Maybe the difference lies between blocking a natural process of life-preservation (breathing, say) and blocking an artificial process of life-preservation (sending famine relief, say).

Or maybe I am mistaken about (4) and (6) being cases where rescue is not obligatory. Maybe in (4) and (6) rescue is obligatory, but it wouldn’t be if instead the malefactor told you that if you rescue, then the deadly consequences would follow. For maybe in (4) and (6), you are intending death, while in the modified cases, you are only intending non-rescue? I am somewhat sceptical.

There is thus a lot of hard stuff here. But I think there is still enough clarity to see that there is a difference between doing and allowing in some cases.

Friday, December 2, 2022

Moderately pacifist war

I’ve been wondering whether it is possible for a country to count as pacifist and yet wage a defensive war. I think the answer is positive, as long as one has a moderate pacifism that is opposed to lethal violence but not to all violence. I think that a prohibition of all violence is untenable. It seems obvious that if you see someone about to shoot an innocent person, and you can give the shooter a shove to make them miss, you presumptively should.

Here’s what could be done by a moderately pacifist country.

First, we have “officially” non-lethal weapons: tasers, gas, etc. Some of these might violate current international law, but it seems that a pacifist country could modify its commitment to some accords.

Second, “lethal” weapons can be used less than lethally. For instance, with modern medicine, abdominal gunshot wounds are only 10% fatal, yet they are no doubt very effective at stopping an attacker. While it may seem weird to imagine a pacifist shooting someone in the stomach, when the chance of survival is 90%, it does not seem unreasonable to say that the pacifist could be aiming to stop the attacker non-lethally. After all, tasers sometimes kill, too. They do so less than 0.25% of the time, but that’s a difference of degree rather than of principle.

Third, we might subdivide moderate pacifists based on whether they prohibit all violence that foreseeably leads to death or just violence that intentionally leads to death. If it is only intentionally lethal violence that is forbidden, then quite a bit of modern warfare can stand. If the enemy is attacking with tanks or planes, one can intentionally destroy the tank or plane as a weapon, while only foreseeing, without intending, the death of the crew. (I don’t know how far one can take this line without sophistry. Can one drop a bomb on an infantry unit intending to smash up their rifles without intending to kill the soldiers?) Similarly, one can bomb enemy weapons factories.

Whether such a limited way of waging war could be successful probably depends on the case. If one combined the non-lethal (or not intentionally lethal) means with technological and numerical superiority, it wouldn’t be surprising to me if one could win.

Thursday, December 1, 2022

Against a moderate pacifism

Imagine a moderate pacifist who rejects lethal self-defense, but allows non-lethal self-defense when appropriate, say by use of tasers.

Now, imagine that one person is attacking you and nine other innocents, with the intent of killing the ten of you, and you can stop them with a taser. Surely you should, and surely the moderate pacifist will say that this is an appropriate use case for the taser.

Very well. Now consider this on a national level. Suppose there are a million enemy soldiers ordered to commit genocide against ten million, and you have two ways to stop them:

  1. Tase the million soldiers.

  2. Kill the general.

If you can tase one person to stop the murder of ten, then (1) should be permissible if it’s the only option. But tasers occasionally kill people. We don’t know how often. Apparently it’s less than 1 in 400 uses. Suppose it’s 1 in 4000. Then option (1) results in 250 enemy deaths.
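As a quick check, the arithmetic (the 1-in-4000 rate is, again, just a supposition):

```python
soldiers = 1_000_000
taser_fatality = 1 / 4000          # supposed rate, below the ~1-in-400 bound
expected_deaths = soldiers * taser_fatality
print(expected_deaths)             # 250.0 non-intended deaths from option (1)
```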

So maybe our choice is between tasing a million, thereby non-intentionally killing 250 soldiers, and intentionally killing one general. It seems to me that (2) is morally preferable, even though our moderate pacifist has to allow (1) and forbid (2).

Note that a version of this argument goes through even if the moderate pacifist backs up and says that tasers are too lethal. For suppose instead of tasers we have drones that destroy the dominant hand of an enemy soldier while guaranteeing survival (with science fictional medical technology). It’s clearly right to release such a drone on a soldier who is about to kill ten innocents. But now compare:

  3. Destroy the dominant hand of a million soldiers.

  4. Kill the general.

I think (4) is still morally preferable to causing the kind of disruption to the lives of a million people that plan (3) would involve.

These may seem to be consequentialist arguments. I don't think so. I don't have the same intuitions if we replace the general by the general's innocent child in (2) and (4), even if killing the child were to stop the war (e.g., by making the general afraid that their other children would be murdered).

Monday, September 19, 2022

More on proportionality in Double Effect and prevention

In my previous post, I discuss cases where someone is doing an evil for the sake of preventing significantly worse evils—say, murdering a patient to save four others with the organs from the one—and note that a straightforward reading of the Principle of Double Effect’s proportionality condition seems to forbid one from stopping that evil. I offer the suggestion, due to a graduate student, that failure to stop the evil in such cases implies complicity in the evil.

I now think that complicity doesn’t solve the problem, because we can imagine cases where there is no relevant evildoer. Take a trolley problem where the trolley is coming to a fork and about to turn onto the left track and kill Alice. There is no one on the right track. So far this is straightforward and doesn’t involve Double Effect at all—you should obviously redirect the trolley. But now add that if Alice dies, four people will be saved with her organs, and if Alice lives, they will die.

Among the results of redirecting the trolley, now, are the deaths of the four who won’t be saved, and hence Double Effect does apply. To save one person at the expense of four is disproportionate, and so it seems that one violates Double Effect in saving the one. And in this case, a failure to save Alice would not involve any complicity in anyone else’s evildoing.

It is tempting to say that the deaths of the four are due to their medical condition and not the result of trolley redirection, and hence do not count for Double Effect proportionality purposes. But now imagine that the four people can be saved with synthetic organs, though only if the surgery happens very quickly. However, the only four surgeons in the region are all on an automated trolley, which is heading towards the hospital along the left track, is expected to kill Alice along the way, but will continue on until it stops at the hospital. If the trolley is redirected on the right path, it will go far away and not reach the hospital in time.

In this case, it does seem correct to say that Double Effect forbids one from redirecting the trolley—you should not stop the surgeons’ trolley even if a person is expected to die from a trolley accident along the way. (Perhaps you are unconvinced if the number of patients needing to be saved is only four. If so, increase the number.) But for Double Effect to have this consequence, the deaths of the patients in the hospital have to count as effects of your trolley redirection.

And if the deaths count in this case, they should count in the original case where Alice’s organs are needed. After all, in both cases the patients die of their medical condition because the trolley redirection has prevented the only possible way of saving them.

Here’s another tempting response. In the original version of the story, if one refrains from redirecting the trolley in light of the people needing Alice’s organs, one is intending that Alice die as a means to saving the four, and hence one is violating Double Effect. But this response would not save Double Effect: it would make Double Effect be in conflict with itself. For if my earlier argument that Double Effect prohibits redirecting the trolley stands, and this response does nothing to counter it, then Double Effect both prohibits redirecting and prohibits refraining from redirecting!

I think what we need is some careful way of computing proportionality in Double Effect. Here is a thought. Start by saying in both versions of the case that the deaths of the four patients are not the effects of the trolley redirection. This was very intuitive, but seemed to cause a problem in the delayed-surgeons version. However, there is a fairly natural way to reconstrue things. Take it that leaving the trolley to go along the left track results in the good of saving the four patients. So far we’ve only shifted whether we count the deaths of the four as an evil on the redirection side of the ledger or the saving of the four as a good on the non-redirection side. This makes no difference to the comparison. But now add one more move: don’t count goods that result from evils in the ledger at all. This second move doesn’t affect the delayed-surgeons case. For the good of saving lives in that case is not a result of Alice’s death, and the proportionality calculation is unaffected. In particular, in that case we still get the correct result that you should not redirect the trolley, since the events relevant to proportionality are the evil of Alice’s death and the good of saving four lives, and so preventing Alice’s death is disproportionate. But in the organ case, the good of saving lives is a result of Alice’s death. So in that case, Double Effect’s proportionality calculation does not include the lives saved, and hence, quite correctly, we conclude that you should redirect to save Alice’s life.
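Here is a toy rendering of that ledger rule in code (the representation and the function name are mine; this is just one way of making the proposal concrete, not a definitive formulation):

```python
# An effect is (lives, results_from_evil); negative lives = an evil.
# Rule: evils always count; goods that result from an evil are dropped.
def ledger(effects):
    return sum(v for v, from_evil in effects if v < 0 or not from_evil)

# Organ case: if the trolley stays left, Alice dies and the four are saved
# *through* her death, so that good is excluded from the ledger.
stay_organ = [(-1, False), (+4, True)]
redirect_organ = [(+1, False)]                        # Alice is saved
assert ledger(redirect_organ) > ledger(stay_organ)    # verdict: redirect

# Delayed-surgeons case: the four are saved by the surgeons arriving, not by
# Alice's death, so that good counts on the stay side.
stay_surgeons = [(-1, False), (+4, False)]
redirect_surgeons = [(+1, False)]
assert ledger(stay_surgeons) > ledger(redirect_surgeons)  # don't redirect
```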

Maybe. But I am not sure. Maybe my initial intuition is wrong, and one should not redirect the trolley in the organ case. What pulls me the other way is the hungry bear case here.

Wednesday, August 17, 2022

Murder without an intention of harm

I used to think that every murder is an intentional killing. But this is incorrect: beheading John the Baptist was murder even if the intention was solely to put his head on a platter rather than to kill him. Cases like that once made me think something like this: murder is an intentional injury that one expects to be lethal. (Cf. Ramraj 2000.)

But now I think there can be cases of murder where there is no intent to injure at all. Suppose that amoral Alice wants to learn what an exploding aircraft looks like. To that end, she launches an anti-aircraft missile at a civilian jetliner. She has the ordinary knowledge that the explosion will kill everyone on board, but in her total amorality she no more intends this than the ordinary person intends to contribute to wearing out shoes when going for a walk. Alice has committed murder, but without any intention to kill.

In terms of the Principle of Double Effect, Alice’s wrongdoing lies in the lack of proportionality between the foreseen gravely bad effect (mass slaughter) and the foreseen trivially good effect (satisfaction of desire for unimportant knowledge), rather than in a wrongful intention, at least if we bracket questions of positive law.

It is tempting to conclude that every immoral killing is a murder. But that’s not right, either. If Bob is engaged in a defensive just war and has been legitimately ordered not to kill any of the enemy before 7 pm no matter what (so as not to alert the enemy, say), and at 6 pm he kills an enemy invader in self-defense, then he does not commit murder, but he acts wrongly in disobeying an order.

It seems that for an immoral act to be a murder it needs to be wrong because of the lethality of the harm as such, rather than due to some incidental reason, such as the lethality of the harm as contrary to a valid order.

Thursday, March 3, 2022

The law enforcement model of war

When an army invades a country, the invaders break a massive number of that country’s laws. They kill, commit assault and battery with deadly weapons, recklessly endanger lives, and destroy property, and do all this as part of a concerted conspiracy. And they break all sorts of more minor laws, by carrying unlicensed weapons, trespassing on government property, littering, and presumably violating traffic laws all the time (I assume one can’t slow down the progress of a column of tanks by putting up a stop sign).

This means that there is an intermediate position between just war theory and complete non-violence. This intermediate position holds that law enforcement can appropriately make use of violent means, including lethal violence when this is proportionate, and that when one’s country suffers an unjust invasion, one may (and maybe often should) legitimately engage in appropriately violent law enforcement activities against the enemy. Call this the law enforcement model of war.

I do not advocate the law enforcement model, but it is interesting to think just how it would differ from the two more standard options.

The difference from complete non-violence is clear: on the law enforcement model, violent action in defense of the country’s laws is permitted whenever proportionate. Thus, it would be permissible to destroy a tank because it is a part of a conspiracy to commit murder and destruction of property, but not because it is simply refusing to stop at a stop sign. (And discretion being the better part of valor, issuing traffic tickets in the latter case is probably not a good idea.)

The differences from just war theory are also significant, though more nuanced. While in most cases where traditional just war theory permits a defensive war, the law enforcement model would also permit defensive violence, there would be significant limitations on offensive wars, due to the fact that law enforcement is bound by significant limitations of jurisdiction. While on traditional just war theory, the fact that Elbonia is violently persecuting a Kneebonian ethnic minority on Elbonian soil might be a sufficient just cause for a war, on the law enforcement model, execrable as such persecution is, it is likely to be outside of the proper jurisdiction of Kneebonian state law enforcement. Indeed, there may have to be significant limits to extraterritorial defensive operations, though some version of the doctrine of “hot pursuit” may be helpful here.

Interestingly, the law enforcement model in one respect seems to point to greater violence. The presumption in law enforcement is that criminals are not only stopped but also punished. This suggests that on a law enforcement model, we would have a presumption in favor of putting all captured invading soldiers on trial. However, even in ordinary law enforcement, punishment is only a presumption, and it can be waived for the sake of significant public goods. In the special case of defending against invaders, having a general waiver, with exceptions tailored to mitigate the worst of evils (say, attacks on defenseless populations), in order to encourage the enemy to surrender would be prudent.

In regard to the waiver of criminal penalties, it is interesting to note one difference between how we feel about war and about ordinary crime. In the case of ordinary private crime, we do not feel that it is qualitatively worse when the criminal murders a civilian rather than a police officer—indeed, we tend to feel that there is something particularly bad about murdering a police officer. In the case of war, however, we do feel that it is much worse to kill civilians. On the law enforcement model, this difference in attitudes does not seem right. But the difference can still be defended within the law enforcement model. In typical cases, soldiers fighting an unjust war are subject to incessant propaganda that they are fighting for justice. There is thus a significant probability that they are not culpable for killing enemy soldiers, because they are rationally convinced that justice requires enemy soldiers to be stopped with lethal force. But it is much harder to come to a rational conviction that justice requires enemy civilians to be stopped with lethal force. Therefore, even if in an unjust war the intrinsic wrong of killing soldiers is just as great as that of killing civilians, there is a significant difference in likely culpability.

As I said, I do not endorse the law enforcement model. But I think it is an interesting model. And I think it presents a significant challenge to those pacifists who think that law enforcement violence is sometimes justified but that violence in war never is.

Tuesday, November 16, 2021

Subjective guilt and war

One of the well-known challenges in accounting for killing in a just war is the thought that even soldiers fighting on a side without justice think they have justice on their side, hence are subjectively innocent, and thus it seems wrong to kill them.

But I wonder if there isn’t an opposite problem. As is well-known, human beings have a very strong visceral opposition to killing. Even those who kill with justice on their side are apt to feel guilty, and it wouldn’t be surprising if often they not only feel guilty but judge themselves to have done wrong. Thus, it could well be that soldiers who kill on both sides of a war have a tendency to be subjectively guilty, even if one of the sides is waging a just war.

Or perhaps things work out this way: Soldiers who kill tend to be subjectively guilty unless they are waging a clearly just war. If so, then those who are on a side without justice are indeed apt to be subjectively guilty, since rarely does a side without justice appear manifestly just. And those who are on a side with justice may very well also be subjectively guilty, unless the war is one of those where justice is manifest (as was the case for the Allies in World War II).

I doubt that things work out all that neatly.

In any case, the above considerations do show that a side with justice has very strong moral reason to make that justice as manifest as possible to the soldiers. And when that is not possible, those in charge should be persons of such evident integrity that it is easy to trust their judgment.

Friday, April 23, 2021

More on doing and allowing

Let’s suppose disease X, if medically unchecked, will kill 4.00% of the population, and there is one and only one intervention available: a costless vaccine that is 100% effective at preventing X but that kills 3.99% of those who take it. (This is, of course, a very different situation from the one we are in regarding COVID-19, where we have extremely safe vaccines.) Moreover, there is no correlation between those who would be killed by X and those who would be killed by the vaccine.

Assuming there are no other relevant consequences (e.g., people’s loss of faith in vaccines leading to lower vaccine uptake in other cases), a utilitarian calculation says that the vaccine should be used: instead of 316.0 million people dying, 315.2 million people would die, so 800,000 fewer people would die. That’s an enormous benefit.
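For concreteness, the arithmetic, assuming a world population of about 7.9 billion (the figure on which the numbers above come out):

```python
population = 7.9e9                      # assumed world population
deaths_disease = 0.0400 * population    # 316.0 million
deaths_vaccine = 0.0399 * population    # 315.2 million
print((deaths_disease - deaths_vaccine) / 1e6)  # ~0.79 million, i.e. ~800,000
```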

But it’s not completely clear that this costless vaccine should be promoted. For the 315.2 million who would die from the vaccine would be killed by us (i.e., us humans). There is at least a case to be made that allowing 316.0 million deaths is preferable to causing 315.2 million. The Principle of Double Effect may justify the vaccination because the deaths are not intentional—they are neither ends nor means—but still one might think that there is a doing/allowing distinction that favors allowing the deaths.

I am not confident what to say in the above case. But suppose the numbers are even closer. Suppose that we have extremely precise predictions and they show that the hypothetical costless vaccine would kill exactly one less person than would be killed by X. In that case, I do feel a strong pull to thinking this vaccine should not be marketed. On the other hand, if the numbers are further apart, it becomes clearer to me that the vaccine is worth it. If the vaccine kills 2% of the population while X kills 4%, the vaccine seems worthwhile (assuming no other relevant consequences). In that case, wanting to keep our hands clean by refusing to vaccinate would result in 158 million more people dying. (That said, I doubt our medical establishment would allow a vaccine that kills 2% of the population even if the vaccine would result in 158 million fewer people dying. I think our medical establishment is excessively risk averse and disvalues medically-caused deaths above deaths from disease to a degree that is morally unjustified.)

From a first-person view, though, I lose my intuition that if the vaccine only kills one fewer person than the disease, then the vaccine should not be administered. Suppose I am biking and my bike is coasting down a smooth hill. I can let the bike continue to coast to the bottom of the hill, or I can turn off into a side path that has just appeared. Suddenly I acquire the following information: by the main path there will be a tiger that has a 4% chance of eating any cyclist passing by, while by the side path there will be a different tiger that has “only” a 3.99999999% chance of eating a cyclist. Clearly, I should turn to the side path, notwithstanding the fact that if the tiger on the side path eats me, it will have eaten me because of my free choice to turn, while if the tiger on the main path eats me, that’s just due to my bike’s inertia. Similarly, then, if the vaccine is truly costless (i.e., no inconvenience, no pain, etc.), and it decreases my chance of death from 4% to 3.99999999% (that’s roughly what a one-person difference worldwide translates to), I should go for it.
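The one-person risk conversion, on the same assumed population:

```python
population = 7.9e9                 # same assumed world population as above
risk_disease = 0.04
risk_vaccine = risk_disease - 1 / population   # one fewer death worldwide
print(risk_vaccine * 100)          # 3.99999998734...%, roughly 3.99999999%
```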

So, in the case where the vaccine kills only one fewer person than the disease would have killed, from a first-person view, I get the intuition that I should get the vaccine. From a third-person view, I get the intuition that the vaccine shouldn’t be promoted. Perhaps the two intuitions can be made to fit together: perhaps the costless vaccine that kills only one fewer person should not be promoted, but the facts should be made public and the vaccine should be made freely available (since it is costless) to anyone who asks for it.

This suggests an interesting distinction between first-person and third-person decision-making. The doing/allowing distinction, which favors evils not of our causing over evils of our causing even when the latter are non-intentional, seems more compelling in third-person cases. And perhaps one can transform third-person cases into something more like first-person ones through unencouraged informed consent.

(Of course, in practice, nothing is costless. And in a case where there is such a slight difference in danger as 4% vs. 3.99999999%, the costs are going to be the decisive factor. Even in my tiger case, if we construe it realistically, the effort and risk of making a turn on a hill will override the probabilistic benefits of facing the slightly less hungry tiger.)

Monday, March 22, 2021

Doing, refraining and Double Effect

The Principle of Double Effect seems to imply that either there are real dilemmas—cases where an action is both forbidden and required—even for agents who have always been virtuous and well-informed, or else there is a morally significant distinction between doing and refraining.

Here is the argument. Consider two cases. In both cases, you know that teenage and now innocent Adolf will kill tens of millions of innocents unless he dies now.

  1. Adolf is drowning. You can throw him a life-preserver.

  2. Adolf is on top of a cliff. You can give him a push.

Double Effect prohibits throwing Adolf a life-preserver. For Double Effect says that an action that has good and bad foreseen consequences is only permissible when the bad effects are proportionate to the good effects. But the deaths of tens of millions of innocents are disproportionate to the life of one innocent teenager.

Now, I take it that in case 2, it is wrong to push Adolf over the precipice. Double Effect certainly agrees: pushing him over the precipice is intentionally doing an evil as a means to a good.

If there is no morally significant distinction between doing and refraining, then it seems that refusal to throw a life-preserver in the drowning case is just like pushing in the cliff case: both are done in order that Adolf might die before he kills tens of millions. If in the cliff case we are forbidden from pushing, then in the drowning case we are forbidden from not throwing the life-preserver. But at the same time, Double Effect forbids throwing the life-preserver. So we must throw and not throw. Thus, the drowning case becomes a real dilemma—and it remains one even if the agent has always been virtuous and well-informed.

I find it very plausible that there are no moral dilemmas for agents who have always been virtuous and well-informed. (Vicious agents might face dilemmas due to accepting incompatible commitments. And agents with mistaken conscience might be in dilemmas unawares, because their duties to conscience might conflict with “objective” duties.) I also think the Principle of Double Effect is basically correct.

This seems to push me to accept a morally significant distinction between action and abstention: it is not permissible to push teenage Adolf off the cliff, but it is permissible—and required—not to throw a life-preserver to him when he is drowning.

But perhaps there is a distinction to be drawn between the two cases that is other than a simple doing/refraining distinction. In the cliff case, presumably one’s purpose in pushing Adolf is that he should die. If he survives, one has failed. But in the drowning case, it is not so clear that one’s purpose in not throwing the life-preserver is that Adolf should drown. Rather, the purpose in not throwing the life-preserver is to refrain from violating Double Effect. Suppose that Adolf survives despite the lack of a life-preserver. Then one has still been successful: one has refrained from violating Double Effect.

Nonetheless, this is still basically a doing/refraining distinction, just a more subtle one. Double Effect requires one to refrain from disproportionate actions—ones whose foreseen evil effects are disproportionate to their foreseen good effects. But Double Effect does not require one to refrain from disproportionate refrainings. For if Double Effect were to require one to refrain from disproportionate refrainings, then in the cliff case, it would require one to refrain from refraining from pushing—i.e., it would require one to push. And it would require one not to push, thereby implying a real dilemma. But in the cliff case, classical Double Effect straightforwardly says not to push. (Things are a little different in threshold deontology, but given threshold deontology we can modify the case to reduce the number of deaths of innocents resulting from Adolf’s survival and the point should still go through.)

In fact, this last point shows that embracing real dilemmas probably will not help a friend of Double Effect avoid a doing/refraining distinction. For even if there are real dilemmas, the cliff case is not one of them: pushing is straightforwardly impermissible.

It is tempting to conclude from this that Double Effect only applies to doings and not refrainings. But that might miss something of importance, too. Double Effect gives necessary conditions for the permissibility of a doing that has foreseen evil effects and an intended good effect:

  3. the evil is not a means to the intended good

  4. the action is intrinsically neutral or good

  5. the evil is not disproportionate to the intended good.

The argument above shows that (5) is not a necessary condition for the permissibility of a refraining. It seems that all refrainings are intrinsically neutral. So, (4) may be vacuous for refrainings. But it is still possible that (3) is true both for doings and refrainings. Thus, while it is permissible to refrain from throwing the life-preserver, perhaps one’s aim in refraining should not be the death of Adolf, but rather the avoidance of doing something disproportionate. And even if (5) is not a necessary condition for the permissibility of a refraining, there may be some weaker proportionality condition on refrainings. Indeed, that has to be right, since it’s wrong to refrain from pulling out a drowning child simply to save one’s clothes, as Singer has pointed out. I don’t know how to formulate the proportionality constraint correctly in the refraining case.

We thus have two Double Effect positions available on doing and refraining. One position says that Double Effect puts constraints on doings but not on refrainings. The subtler position says that Double Effect puts more constraints on doings.

Thursday, February 18, 2021

Moral risk

Say that an action is deontologically doubtful (DD) provided that the probability of the action being forbidden by the correct deontology is significant but less than 1/2.

There are cases where we clearly should not risk performing a DD action. A clear example is when you’re hunting and you see a shape that has a 40% chance of being human: you should not shoot. But notice that in this case, deontology need play no role: expected-utility reasoning tells you that you shouldn’t shoot.

There are, on the other hand, cases where you should take a significant risk of performing a DD action.

Beast Case: The shape in the distance has a 30% chance of being human and a 70% chance of being a beast that is going to devour a dozen people in your village if not shot by you right now. In that case, it seems it might well be permissible to shoot.

This suggests this principle:

  1. If a DD action has significantly higher expected utility than refraining from the action, it is permissible to perform it.

But this is false. I will assume here the standard deontological claim that it is wrong to shoot one innocent to save two.

Villain Case: You are hunting and you see a dark shape in the woods. The shape has a 40% chance of being an innocent human and a 60% chance of being a log. A villain who is with you has just instructed a minion to go and check in a minute on the identity of the shape. If the shape turns out to be a human, the minion is to murder two innocents. You can’t kill the villain or the minion, as they have bulletproof jackets.

The expected utility of shooting is significantly higher than that of refraining. If you shoot, the expected lives lost are (0.4)(1) = 0.4, and if you don’t shoot the expected lives lost are (0.4)(2) = 0.8. So shooting has an expected utility that’s 0.4 lives better than not shooting. But it is also clear, assuming the deontological claim that it is wrong to kill one to save two, that it is wrong to shoot in this case.

What differentiates the Villain Case from the Beast Case is that in the Villain Case, the difference in expected utilities comes precisely from the scenario where the shape is human. Intuition suggests we should tweak (1) to evaluate expected utilities in a way that ignores the good effects of deontologically forbidden things. This tweak does not affect the Beast Case, but it does affect the Villain Case, where the difference in utilities came precisely from counting the life-saving benefits of killing the human.
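Here is a minimal numeric sketch—mine, not the post’s—of how the tweak plays out. The probabilities come from the two cases above; the function names and the way the tweak is operationalized (refusing to credit benefits that flow from the forbidden killing) are illustrative assumptions:

```python
# Hedged sketch: expected lives lost in the two cases, raw and with the
# tweak that ignores good effects flowing from a deontologically
# forbidden act. Probabilities are from the post; framing is assumed.

def beast_case():
    p_human, p_beast = 0.3, 0.7
    shoot = p_human * 1        # risk: the shape is an innocent human
    hold_fire = p_beast * 12   # risk: the beast devours a dozen villagers
    # The tweak changes nothing here: in the human scenario, shooting
    # has no good effects to discount, so tweaked values equal raw ones.
    return shoot, hold_fire

def villain_case():
    p_human = 0.4
    shoot_raw = p_human * 1    # the innocent dies and the two are spared
    hold_fire = p_human * 2    # the minion murders two innocents
    # The tweak: sparing the two is a good effect of the forbidden
    # killing, so it is no longer credited to shooting.
    shoot_tweaked = p_human * (1 + 2)
    return shoot_raw, hold_fire, shoot_tweaked

print(beast_case())    # (0.3, 8.4): shooting still wins after the tweak
print(villain_case())  # (0.4, 0.8, 1.2): shooting loses once tweaked
```

On this way of running the numbers, the tweak leaves the Beast Case verdict intact while reversing the Villain Case verdict, which is just what the intuition above demands.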

I don’t know how to precisely formulate the tweaked version of (1), and I don’t know if it is sufficiently strong to cover all cases.


Monday, January 25, 2021

Killing and letting die

  1. It is murder to disconnect, without consent and in order to inherit from them, a patient who can survive only with a ventilator.

  2. Every murder is a killing.

  3. So, it is a killing to disconnect, without consent and in order to inherit from them, a patient who can survive only with a ventilator.

  4. Whether an act is a killing does not depend on consent or intentions.

  5. So, it is a killing to disconnect a patient who can survive only with a ventilator.

Of course, whether such a disconnection is permissible or not is a further question, since not every killing is wrong (e.g., an accidental killing need not be wrong).

Thursday, May 21, 2020

More on Double Effect and statistical reasoning

Consider this case:

  1. You are fighting a just war. There are 1000 people facing you and you have very good, but fallible, reason to think of each that they are an unjust aggressor that you are permitted to kill. At the same time, on statistical grounds you know one of the thousand is innocent. You kill the thousand for standard military reasons.

This is justifiable, assuming nothing defeats the standard military reasons.
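As a numerical aside—the 99.9% reliability figure below is my assumption, not the post’s—here is a minimal sketch of how very good per-person evidence can coexist with the statistical expectation that someone in the group is innocent:

```python
# Illustrative only: per-person evidence vs. group-level statistics.
# The 0.999 reliability figure is an assumed stand-in for "very good,
# but fallible, reason".
p_aggressor = 0.999   # assumed chance, per person, of being an aggressor
n = 1000

expected_innocents = n * (1 - p_aggressor)   # = 1.0
p_some_innocent = 1 - p_aggressor ** n       # ≈ 0.63

print(expected_innocents, round(p_some_innocent, 2))
```

The post idealizes this to knowing that one of the thousand is innocent; on the independence assumption above one only gets an expectation of one innocent, but that is enough for the point.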

  2. You are fighting a just war. There are 1000 people facing you and you have very good, but fallible, reason to think of each that they are an unjust aggressor that you are permitted to kill. At the same time, on statistical grounds you know one of the thousand is innocent. Moreover, the enemy is superstitious and thinks the number 1000 is especially significant, so that if you kill 1000, they will instantly surrender.

Now this case is tricky. At first, it seems like it’s an easier case than (1). After all, you have two separate reasons: the usual military reasons for killing unjust aggressors and the fact that if you kill them all, the enemy will instantly surrender. But it’s trickier than that. The problem is that if you simply kill the thousand for the standard military reasons, then you can intend to kill each one qua aggressor—for you have good reason to think of each that they are an aggressor, even though you know you are mistaken about one of them. But if you act on the enemy’s superstition, you are intending to kill each one simpliciter, not just qua aggressor, for all 1000 need to be dead for the plan to be fulfilled. In particular, the one who is innocent needs to be dead, too, in order for your plan to be fulfilled. But when you acted on the standard military reasons, you didn’t need the innocent one to be dead—as that one posed no military threat.

So, in case (2) you cannot legitimately act on the enemy’s superstition and reason: “I will kill these 1000 in order that there be 1000 dead, which will trigger surrender.” For then the success of your action plan depends on the death of the innocents among the 1000, and not just on the death of the guilty. (I am not worried here about the moral problem of exploiting the enemy’s superstition. If you are, you can modify the case.)

That doesn’t mean you can’t take that superstition into account in a way. For instance, while military motives might be primary, you might have a defeater for these motives, such as that the mission is really dangerous. But the fact that the mission would end the war could defeat the reasons coming from the danger. This would be a Kamm-style triple effect case. (A more difficult question: could the fact that the enemy will surrender, thereby saving much bloodshed, defeat the reason against the action coming from the death of the innocent? I suspect not, but it’s a tough question.)

The above case pushes me to the idea that killing is one of those acts that can be permissibly done only for certain kinds of reasons.

Wednesday, January 15, 2020

Five views on killing the innocent

Here is a spectrum of views on killing the innocent to prevent worse evils:

  1. Consequentialism: It is permissible to kill the innocent to prevent worse evils.

  2. Threshold Deontology: It is permissible to kill the innocent to prevent much worse evils.

  3. Standard Double Effect: It is permissible to perform an action foreseen to result in the death of an innocent to prevent worse evils when the death of the innocent is not intended.

  4. Loose Carefulness: It is permissible to perform an action to prevent worse evils only when it is not likely that the action will result in the death of an innocent.

  5. Strict Carefulness: It is permissible to perform an action to prevent worse evils only when there is no chance that the action will result in the death of an innocent.

In this post, my interest is in 4-5. But note that adherents of 2 may still be interested in 3-5, because they still need to have something to say about the cases which fall below the threshold of “much worse evils”.

Clearly, Strict Carefulness is too strict. On Strict Carefulness, it is wrong to drive anywhere, and in particular it is wrong to send a fire truck to rescue people from a burning building. In fact, we perform actions all the time that have a small chance of resulting in the death of an innocent.

I want to argue that if we reject Strict Carefulness, we should reject Loose Carefulness as well. The reason is that one gets a violation of Loose Carefulness by accumulating violations of Strict Carefulness. For instance, if one in a billion people vaccinated against some deadly disease dies, then an adherent of Strict Carefulness will refuse to vaccinate anybody against that disease. That’s excessive carefulness. We should vaccinate (assuming the disease is sufficiently nasty and the vaccination is sufficiently effective, etc.). Nor does it cease to be permissible to vaccinate as the number of people vaccinated becomes sufficiently high that it becomes likely that someone will die. Once it is granted that I may vaccinate one, clearly I may vaccinate a billion. Now, strictly speaking, that is not a counterexample to Loose Carefulness. For although those billion vaccinations are likely to result in the death of an innocent, they are not a single action that is likely to result in the death of an innocent. But that is easily fixed. Just imagine that I run a mega-clinic staffed with robotic nurses. I press a single button and a billion patients are vaccinated. Surely if it would be permissible for me to vaccinate them all one-by-one, it is permissible for me to dispatch the army of robotic nurses (assuming that they are just as effective, that they are not unduly scary to children, that all the same precautions are taken, etc.).
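Here is a quick numeric sketch of the accumulation point, using the post’s hypothetical one-in-a-billion figure (the code and the intermediate group sizes are mine):

```python
import math

# The post's hypothetical: the vaccine kills one in a billion.
risk = 1e-9

for n in (1, 10**6, 10**9, 10**10):
    # P(at least one death among n) = 1 - (1 - risk)^n, computed stably
    p = -math.expm1(n * math.log1p(-risk))
    print(f"n = {n:>14,}: P(at least one death) ≈ {p:.3g}")

# n = 1:     1e-09   (already enough for Strict Carefulness to forbid)
# n = 10^9:  ~0.632  (a death among the vaccinated is now more likely
#                     than not)
```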

In other words, the adherent of Loose Carefulness who rejects (as everyone clearly should) Strict Carefulness has to make a dubious permissibility distinction between performing a single cumulative high-risk action and performing lots of separate low-risk actions.

Perhaps, though, someone could modify Loose Carefulness and replace “the death of an innocent” with “the death of a specific innocent”. In vaccinating a billion people, it seems there isn’t a specific innocent who is likely to die. I don’t think this modification is tenable. There are two different stories (and various combinations) one could tell about how the hypothetical vaccine kills one in a billion people. First, it could be that the vaccine interacts with the body of each in such a way that each individually has a one in a billion chance of death. Second, it could be that there is an undetectable condition (perhaps genetic) that occurs in one out of a billion people, such that someone with that condition is nearly certain to die upon being vaccinated. I think it makes little moral difference which story is true. But on the second story, there is a specific innocent who will die when one vaccinates the billion people. Or consider Parfit’s story of releasing a lethal molecule into New York City which will kill one random person. That’s not significantly different from releasing a molecule into New York City that will kill one specific person.
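For what it’s worth, a minimal sketch—with assumed numbers—of why the two stories come to the same thing in expectation:

```python
n = 10**9          # people vaccinated (assumed for illustration)
base_rate = 1e-9   # the post's one-in-a-billion figure

# Story 1: each person independently runs a one-in-a-billion risk.
deaths_story1 = n * base_rate                            # = 1.0

# Story 2: one person in a billion carries an undetectable condition
# and is (nearly) certain to die if vaccinated.
p_condition, p_death_if_condition = 1e-9, 1.0
deaths_story2 = n * p_condition * p_death_if_condition   # = 1.0

print(deaths_story1, deaths_story2)
```

The expected deaths match; the stories differ only over whether there is, in advance, a fact about which specific person bears the risk.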

So, neither Loose nor Strict Carefulness is right. That means that the choice we face is between Consequentialism, Threshold Deontology, and Standard Double Effect, unless some other good options are found.