Yeah, I know it’s Thursday — GET OFF MY CASE.

My colleague at the Monterey Institute, Ward Wilson, has a new book out entitled Five Myths About Nuclear Weapons. Ward and I disagree about some things, but I am always up for an argument about the epistemology of nuclear deterrence and manual approaches to elephant exclusion. (I hear snapping fingers has a strong empirical record.)

So, I’ve invited Ward to submit a weekly post looking at a number of case studies.  And because he is a W.W., I figured I would post them on Wednesday.  UNLESS I DON’T GET AROUND TO IT.  GET OFF MY CASE.

If it helps him sell a few books, all the better.  Here’s the first one, after the jump.

Doubts about Nuclear Deterrence, Part I

by Ward Wilson 

I’ve been writing a book that the publisher talked me into naming Five Myths about Nuclear Weapons. “Five” because book titles with numbers sell. “Myths” because people love conspiracy, secrets, and mysterious knowledge. (“Click here to learn one old-fashioned trick to lose thirty pounds!”) I wanted to call it Some Pretty Reasonable Doubts Based on Historical Reinterpretations but apparently the marketing department said no. Go figure.

As part of the research for the book I went back and reexamined a bunch of the Cold War crises that involved nuclear weapons. I wanted to double-check what everyone basically knows: nuclear deterrence works really well because nuclear weapons are really scary. You can count on people to be more sensible if nuclear weapons are in the mix, even in a crisis when emotions are running high. I didn’t expect to find anything; I just wanted to look. What I found was pretty disturbing. It seems as if nuclear deterrence failed regularly during Cold War crises. None of those failures led to catastrophic nuclear war (obviously), but there were a number of places where deterrence theory would have predicted that leaders would back off, and instead they took risky, aggressive actions that made the crisis worse. What I found seems to be a pretty serious blow to the idea that nuclear deterrence works reliably and robustly.

Nuclear deterrence is different from ordinary deterrence. Ordinary deterrence is, say, keeping little Jimmy from taking cookies by warning him he’ll get spanked. Or beheading adulterers. Or even using conventional military forces to send warnings about what the consequences of aggressive action will be. All of these would be ordinary deterrence. Nuclear deterrence is using the threat of nuclear retaliation to warn someone not to take aggressive action. The assumption has always been that ordinary deterrence works some of the time. Maybe–I don’t know–sixty percent. (I made that up. There’s debate about this. Most people obey the law, for example, not because they’re deterred but because they’ve developed the habit of obeying the law. How large and effective a role deterrence actually plays is uncertain. But there’s general agreement that ordinary deterrence works some of the time.)

Nuclear deterrence, on the other hand, is assumed to work much closer to 100 percent of the time. Because the consequences of nuclear war would be mind-bogglingly horrible, people assume that nuclear deterrence is much, much more effective than ordinary deterrence. Nearly or even absolutely perfect. Which is a really good thing. But what if nuclear deterrence is only about as effective as ordinary deterrence? What if nuclear deterrence fails thirty or forty percent of the time? Human beings are not terribly rational creatures. It would make sense if people failed to be afraid–even though any reasonable person would be–about forty percent of the time. Since any failure of nuclear deterrence runs the risk of escalating to catastrophic nuclear war, it’s pretty serious news if there are obvious failures of nuclear deterrence in the historical record.
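To make the stakes concrete, here is a back-of-the-envelope illustration (mine, using the hypothetical forty percent figure above and the simplifying assumption that crises are independent trials). If deterrence fails with probability p in any given crisis, then across n crises the chance of at least one failure is

\[
P(\text{at least one failure}) = 1 - (1 - p)^{n}.
\]

With p = 0.4 and n = 5 serious crises, that works out to 1 − (0.6)^5 ≈ 0.92, better than nine chances in ten of at least one failure somewhere along the way. The independence assumption is crude, but it shows why even a modest per-crisis failure rate would be alarming.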

The bad news? That’s what I found.

“But Ward,” I hear you saying, “there’s forty years of IR analysis and historical accounts that say you’re wrong.” Which is true. Most writers on this subject paint a pretty rosy picture of nuclear deterrence. In almost all of the serious, scholarly writing about these crises, nuclear deterrence (reassuringly) works almost every time. But this sort of group consensus can be wrong. Listen to what Ludwig Wittgenstein and Richard Ned Lebow have to say about this. Wittgenstein (the famously difficult-to-follow Austrian philosopher) pointed out that our feelings often get in the way of clear understanding:

What makes a subject hard to understand—if it’s something significant and important—is not that before you can understand it you need to be specially trained in abstruse matters, but the contrast between understanding the subject and what most people want to see.

Lebow makes much the same point, writing about the effect of a strong theory on observational objectivity:

Philosophers of science have observed that scientists tend to fit data into existing frameworks even if the framework does not do justice to the facts. Investigators deduce evidence in support of theory. Theory, once accepted, determines to which facts they pay attention. According to Thomas Kuhn, the several fields of science are each dominated by a “paradigm,” an accepted body of related concepts that establishes the framework for research. The paradigm determines the phenomena that are important and what kinds of explanations “make sense.” It also dictates the kinds of facts and explanations that are to be ignored because they are outside the paradigm or not relevant to the problem the paradigm has defined as important. Paradigms condition investigators to reject evidence that contradicts their expectations, to misunderstand it, to twist its meaning to make it consistent, to explain it away, to deny it, or simply to ignore it.

In our case we have both a strong theory (nuclear deterrence) and really strong emotions (fear of nuclear war) pushing us to focus only on the successes of nuclear deterrence. Most people really hope that nuclear deterrence is one hundred percent effective. As a result, objectivity was in pretty short supply during most of the Cold War.

What I found in going back over the evidence is that good news about nuclear deterrence is repeated (and even exaggerated) in the literature. Bad news–potential failures–is largely ignored. You may disagree that the events I’m going to review are nuclear deterrence failures. History is always open to interpretation. But I don’t think there’s any doubt that these potential failures have been largely overlooked in the literature. Not completely, but enough so that they hardly stand as well-known and oft-discussed landmarks in the debate.

Think about it this way: when a plane crashes and maybe a couple of hundred people die, the NTSB (here in the United States) does a painstaking review of what went wrong. Months or even years of effort and millions of dollars are spent on reconstructing exactly what happened. Sometimes they even reconstruct the plane from all the scattered little bits that fell out of the sky. One of the reasons flying is so safe is this tenacious attention to failure. The opposite appears to be true in nuclear weapons scholarship. What appear (to me) to be clear failures of nuclear deterrence have been consistently ignored. Shouldn’t potential failures of nuclear deterrence (where millions of lives are at stake) receive at least as much careful, cautious, objective attention as we give airplane crashes?

Jeffrey Lewis has kindly invited me to write a series of posts about failures of nuclear deterrence during various Cold War crises (and one non-Cold War crisis). I’m going to be writing about the Cuban Missile Crisis, the Middle East war of 1973, the Falkland Islands War, and the Gulf War, and then wind up with some general thoughts about nuclear deterrence. Each of these events points to remarkable failures of nuclear deterrence. Taken together they raise the question of whether nuclear deterrence is reliable and robust.