Wednesday, September 3, 2025

Nuclear deterrence, part II: False threats

In my previous post, I considered the argument against nuclear deterrence that says it’s wrong to gain a disposition to do something wrong, and that a disposition to engage in nuclear retaliation is a disposition to do something wrong. I concluded that the argument is weaker than it seems.

Here I want to think about another argument against nuclear deterrence, under somewhat different assumptions. In the previous post, the assumption was that the leader is disposed to retaliate but expects not to have to (because the deterrence is expected to work). But what about a case where the leader is not disposed to retaliate, but credibly threatens retaliation, while planning not to carry out the threat should the enemy attack?

It would be wrong, I take it, for the leader to promise to retaliate in such a case: that would be making a false promise. But threatening is not promising. Suppose that a clearly unarmed thief has grabbed your phone and is about to run. (Their bathing suit makes it clear they are unarmed.) You pick up a realistic fake pistol, point it at them, and yell: “Drop the phone!” This does not seem clearly morally wrong. And it doesn’t seem to necessarily become morally wrong (absent positive law against it) when the pistol is real, as long as you have no intention of firing and no risk of doing so (it is, of course, morally wrong to use lethal force merely to recover your property, though it apparently can be legal in Texas). The threat is a deception but not a lie. For, first, note that you’re not even trying to get the thief to believe you will shoot them, just to scare them (fear requires a lot less than belief). Second, if the thief keeps on running and you don’t fire, the thief would not be right to feel betrayed by your words.

So, perhaps, it is permissible to threaten to do something that you don’t intend to do.

Still, there is a problem. For it seems that in threatening to do something wrong, you are intentionally gaining a bad reputation, by making it appear that you are a wicked person who would shoot an unarmed thief or a wicked leader who would retaliate with an all-out nuclear strike. And maybe you have a duty not to intentionally gain a bad reputation.

Maybe you do have such a duty. But it is not clear to me that the leader who threatens nuclear retaliation or the person who pulls the fake or real pistol on the unarmed thief is intentionally gaining a bad reputation. For the action to work, it just has to create sufficient fear that the threat will be carried out, and that fear does not require the target to believe that the threatener would carry out the threat: a moderate epistemic probability might suffice.

Nuclear deterrence, part I: Dispositions to do something wrong

I take it for granted that all-out nuclear retaliation is morally wrong. Is it wrong (for a leader, say) to gain a disposition to engage in all-out nuclear retaliation conditionally on the enemy performing a first strike if it is morally certain that having that disposition will lead the enemy not to perform a first strike, and hence the disposition will not be actualized?

I used to think the answer was “Yes”, because we shouldn’t come to be disposed to do something wrong in some circumstance.

But I now think it’s a bit more complicated. Suppose you are stuck in a maze full of lethal dangers, with all sorts of things that require split-second decisions. You have headphones that connect you to someone you are morally certain is a benevolent expert. If you blindly follow the expert’s directions—“Now, quickly, fire your gun to the left, and then grab the rope and swing over the precipice”—you will survive. But if you think about the directions, chances are you won’t move fast enough. You can instill in yourself a disposition to blindly do whatever the expert says, and then escape. And this seems the right thing to do, even a duty if you owe it to your family to escape.

Notice, however, that such a disposition is a disposition to do something wrong in some circumstance. Once you are in blind-following mode, if the expert says “Shoot the innocent person to the right”, you will do so. But you are morally certain the expert is benevolent and hence won’t tell you anything like that. Thus it can be morally permissible to gain a disposition which disposes you to do things that are wrong under circumstances that you are morally certain will not come up.

One might think, though, that there is a difference between this and my nuclear deterrence case. In the nuclear deterrence case, the leader specifically acquires a disposition to do something that is wrong, namely to engage in all-out retaliation, and this disposition is always wrong to actualize. In the maze case, you gain a general disposition to obey the expert, and normally that disposition is not wrong to actualize.

But this overstates what is true of the nuclear deterrence case. There are some conditions under which all-out retaliation is permissible, such as when 99.9% of one’s nuclear arsenal has been destroyed and the remainder is only aimed at legitimate military targets, or maybe when all the enemy civilians are in highly effective nuclear shelters and retaliation is the only way to prevent a follow-up strike from the enemy. Moreover, it may understate what is permissible in the expert case. You may need to instill in yourself the specific willingness to do what at the moment seems wrong, because sometimes the expert may tell you things that will seem wrong—e.g., to swing your sword at what looks like a small child (but in fact is a killer robot). I am not completely sure it is permissible to have an attitude of trust in the expert that goes that far, but I could be convinced of it.

I was assuming, contrary to fact in typical cases, that there is moral certainty that the nuclear deterrence will be effective and there will be no enemy first strike. Absent that assumption, the question is rather less clear. Suppose there is a 10% chance the expert is not so benevolent. Is it permissible to instill a disposition to blindly follow their orders? I am not sure.

Wednesday, October 9, 2024

Proportionality and deterrence

There are many contexts where a necessary condition of the permissibility of a course of action is a kind of proportionality between the goods and bads resulting from the course of action. (If utilitarianism is true, then given a utilitarian understanding of the proportionality, it’s not only necessary but sufficient for permissibility.) Two examples:

  • The Principle of Double Effect says it is permissible to do things that are foreseen to have a basic evil as an effect, if that evil is not intended, and if proportionality between the evil effect and the good effects holds.

  • The conditions for entry into a just war typically include both a justice condition and a proportionality condition (sometimes split into two conditions, one about likely consequences of the war and the other about the probability of victory).

But here is an interesting and difficult kind of scenario. Before giving a general formulation, consider the example that made me think about this. Country A has a bellicose neighbor B. However, B’s regime, while bellicose, is not sufficiently evil that on a straightforward reading of proportionality it would be worthwhile for A to fight back if invaded. Sure, one would lose sovereignty by not fighting back, but B’s track record suggests that the individual citizens of A would maintain the freedoms that matter most (maybe this is what it would be like to be taken over by Alexander the Great or Napoleon; I don’t know enough history to say), while a war would obviously be very bloody. However, suppose that a policy of not fighting back would likely result in an instant invasion, while a policy of fighting back would have a high probability of resulting in peace for the foreseeable future. We can then imagine that the benefits of likely avoiding even a non-violent takeover by B outweigh the small risk that, despite A’s having a policy of armed resistance, B would still invade.
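To see how the numbers could work out, here is a minimal expected-disvalue sketch; the symbols and figures are purely illustrative assumptions, not drawn from any real case. Let p be the (small) probability that B invades despite A’s resistance policy, let T be the disvalue of a takeover, and let W be the additional disvalue of fighting a hopeless defensive war. A no-resistance policy makes invasion near-certain, for an expected disvalue of roughly T, while the resistance policy’s expected disvalue is p(W + T). So, ex ante,

\[
p\,(W + T) < T \quad\Longleftrightarrow\quad p < \frac{T}{W + T},
\]

and with, say, T = 100 and W = 900 in arbitrary units, the resistance policy is better ex ante whenever p < 0.1. Yet ex post, once an invasion has happened, following through costs W + T rather than T, which is precisely the violation of a straightforward reading of proportionality that the general case below describes.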

The general case is this: We have a policy that is likely to prevent an unhappy situation, but following through on the policy violates a straightforward reading of proportionality if the unhappy situation eventuates.

One solution is to take into account the value of following through on the policy with respect to one’s credibility in the future. But in some cases this will be a doubtful justification. Consider a policy of fighting back against an invader, at least initially, even if there is no chance of victory. There are surely many cases of bellicose countries that could successfully take over a neighbor, but judge that the costs of doing so are too high given the expected resistance. But if the neighbor has such a policy, then in case the invasion nonetheless eventuates, whatever is done, sovereignty will be lost, and the policy will be irrelevant in the future. (One might speculate about the benefits to other countries of one’s following through on the policy, but that is very speculative.)

One line of thought on these kinds of cases is that we need to forego such policies, despite their benefits. One can’t permissibly act on them, so one can’t have them, and that’s that. This is unsatisfying, but I think there is a serious chance that this is right.

One might think that the best of both worlds is to make it seem like one has the policy without in fact having it. A problem with this is that it might involve lying, and I think lying is wrong. But even aside from that, in some cases this may not be practicable. Imagine training an army to defend one’s country, and then having a secret plan, known only to a very small number of top commanders, that one will surrender at the first moment of an invasion. Can one really count on that surrender? The deterrent policy is more effective the fiercer and more patriotic the army, but those are precisely the factors likely to make the army fight on despite the surrender at the top.

Another move is this. Perhaps proportionality itself takes into account not just the straightforward computation of costs and benefits, but also the value of remaining steadfast in reasonably adopted policies. I find this somewhat attractive, but the approach has to have limits, and I don’t know where to draw them. Suppose one has invented a weapon that will kill every human being in enemy territory. Use of this weapon, with a Double Effect style intention of killing only the enemy soldiers, is clearly unjustified no matter what policies one might have, but a policy to use this weapon might be a nearly perfect protection against invasion. (Obviously this connects with the question of nuclear deterrence.) I suppose what one needs to say is that the importance of steadfastness in policies affects how proportionality evaluations go, but should not be decisive.

I find myself pulled between the strict view that we should not have policies such that acting on them would violate a straightforward reading of proportionality, and the view that we should abandon the straightforward reading of proportionality and take into account, to a degree that is difficult to weigh, the value of following through on policies.