Cleansing thermonuclear fire

Just what would it take to turn the world into one big fusion reaction, wiping it clean of life and making it a barren rock? Asking for a friend.

Graphic from the 1946 film, “One World or None,” produced by the National Committee on Atomic Information, advocating for the importance of the international control of atomic energy.

One might wonder whether that kind of question presented itself while I was reading the news these days, and one would be entirely correct. But the reason people usually ask this question is in reference to the story that scientists at Los Alamos thought there was a non-zero chance that the Trinity test, the first test of the atomic bomb, might ignite the atmosphere.

The basic concept is a simple one: if you heat up very light atoms (like hydrogen) to very high temperatures, they’ll bounce around like mad, and the chances that they’ll collide into each other and undergo nuclear fusion become much greater. If that happens, they’ll release more energy. What if the initial burst of an atomic bomb started fusion reactions in the air around it, say between the atoms of oxygen or nitrogen, and those fusion reactions generated enough energy to start more reactions, and so on, across the whole atmosphere?

It’s hard to say exactly how seriously this was taken. It is clear that at one point, Arthur Compton worried about it, and that likewise, several scientists developed convincing arguments to the effect that this couldn’t happen. James Conant, upon feeling the searing heat of the Trinity test, briefly reflected that perhaps this rumored possibility had, indeed, come to pass:

Then came a burst of white light that seemed to fill the sky and appeared to last for seconds. I had expected a relatively quick and bright flash. The enormity of the light and its length quite stunned me. My instantaneous reaction was that something had gone wrong and that the thermal nuclear [sic] transformation of the atmosphere, once discussed as a possibility and jokingly referred to a few minutes earlier, had actually taken place.

Which does at least tell us that some of those at the test were still joking about it, even in the last few minutes. Fermi reportedly took bets on whether the bomb would destroy just New Mexico or the entire world, but it was understood as a joke.1

The introduction of the Konopinski, Marvin, and Teller paper of 1946. Filed under: “SCIENCE!”

In the fall of 1946, Emil Konopinski, Cloyd Marvin, and Edward Teller (who else?) wrote up a paper explaining why no detonation on Earth was likely to start an uncontrolled fusion reaction in the atmosphere. It is not clear to me whether this was the same logic they used prior to the Trinity detonation, but it is probably of a similar character to it.2 In short, there was only one fusion reaction based on the constituents of the air that had any probability at all (the nitrogen-nitrogen reaction), and the scientists were able to show that it was not very likely to take place or to spread. Even if one assumed the reaction was much easier to start than anyone thought it was likely to be, it wasn’t going to be sustained: the reaction would cool (through a variety of physical mechanisms) faster than it would spread.
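
To get a rough feel for why the N+N reaction is so hard to start, here is a minimal back-of-envelope sketch in Python (my own illustration, not a calculation from the paper) of the Coulomb barrier that two nitrogen nuclei must overcome before they can fuse:

```python
# Rough illustration (not from LA-602): the Coulomb barrier for N+N fusion.
# Two nitrogen-14 nuclei must approach within a few femtometers, against
# their mutual electrostatic repulsion, before the nuclear force can act.

k = 8.988e9       # Coulomb constant, N m^2 / C^2
e = 1.602e-19     # elementary charge, C
Z = 7             # atomic number of nitrogen
r = 5.0e-15       # assumed "touching" distance for two N-14 nuclei, m

barrier_joules = k * (Z * e) ** 2 / r
print(f"Coulomb barrier: ~{barrier_joules / 1.602e-13:.0f} MeV")  # ~14 MeV

# Temperature at which the *average* thermal energy (3/2 kT) would reach
# that barrier -- a deliberate overestimate, since tunneling and the fast
# tail of the thermal distribution let fusion begin well below it:
kB = 1.381e-23    # Boltzmann constant, J/K
print(f"Naive ignition temperature: ~{barrier_joules / (1.5 * kB):.0e} K")
```

Even granting that real fusion starts at temperatures far below that naive figure, the tens of millions of degrees produced by a fission explosion fall well short, and the cooling mechanisms scale up faster than the reaction rate.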

This is all a standard part of Manhattan Project lore. But I suspect that many who have read about this before have not actually read the Konopinski-Marvin-Teller paper to its end, where they end on a less sure-of-themselves note:

There remains the remote possibility that some other less simple mode of burning may maintain itself in the atmosphere.

Even if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.

One may conclude that the arguments of this paper make it unreasonable to expect that the N+N reaction could propagate. An unlimited propagation is even less likely. But the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable.

That’s not quite as secure as one might want, given that these scientists were in fact working on developing weapons many times more powerful than the Trinity device.3

The relevant section of the Manhattan District History (cited below) interestingly links the research into the “Super” hydrogen bomb with the research into whether the atmosphere might be incinerated, which makes sense, though it would be interesting to know just how closely connected these questions were.

There is an interesting section in the recently-declassified Manhattan District History that covers the ignition of the atmosphere question. They repeat essentially the Konopinski-Marvin-Teller results, and then conclude:

The impossibility of igniting the atmosphere was thus assured by science and common sense. The essential factors in these calculations, the Coulomb forces of the nucleus, are among the best understood phenomena of modern physics. The philosophic possibility of destroying the earth, inherent in the theoretical convertibility of mass into energy, remains. The thermonuclear reaction, which is the only way now known by which such a catastrophe could occur, is apparently ruled out. The general stability of matter in the observable universe argues against it. Further knowledge of the nature of the great stellar explosions, novae and supernovae, will throw light on these questions. In the almost complete absence of real knowledge, it is generally believed that the tremendous energy of these explosions is of gravitational rather than nuclear origin.4

Which again is at once reassuring and not reassuring. The footing on which this knowledge was based was… pretty good? But like good scientists they were willing, at least in secret reports, to acknowledge that there might indeed be ways for the Earth to be destroyed through nuclear testing that they hadn’t considered. Intellectually honest, but also terrifying.

The ever relevant XKCD.

This issue came up again prior to the Operation Crossroads nuclear tests in early 1946, which were to include at least one underwater shot. None other than Nobel Prize-winning physicist Percy Bridgman worried that detonating an atomic bomb under water might ignite a fusion reaction in the water. Bridgman admitted his own ignorance of nuclear physics (his specialty was high-pressure physics), but warned that:

Even the best human intellect has not imagination enough to envisage what might happen when we push far into new territory. … To an outsider the logic of the argument which would justify running even the slightest risk of that colossal catastrophe seems very weak.5

Bridgman’s worries weren’t really that the world would be destroyed. He worried more that if the scientists appeared to be cavalier about these things, and it was later made public that their argument for the safety of the tests was based on flimsy evidence, it would lead to a strong public backlash: “There might be a reaction against science in general which would result in the suppression of all scientific freedom and the destruction of science itself.” Bridgman’s views were strong enough that they were forwarded to General Groves, but it isn’t clear whether they led to any significant changes (though I wonder if they were the impetus for the writing-up of the Konopinski-Marvin-Teller paper; the timing sort of works out, but I don’t know).

There isn’t a lot of evidence that this issue worried the scientists too much going forward. They had other things on their minds, like building thermonuclear weapons, and it quickly became clear that starting a large fusion reaction with a fission bomb is hard. Which is, in its own way, an answer to the original question: if starting a runaway fusion reaction on purpose is hard, and requires very specific sorts of arrangements and considerations to get working even on a (relatively) small scale, then starting one in the entire atmosphere is probably going to be impossible.

Operation Fishbowl, Shot Checkmate (1962) — a low-yield weapon, but something about its perfect symmetry and the path of the rocket that put it into the air invokes the idea of a planet turning into a star for me. Source: Los Alamos National Laboratory.

Great — cross that one off the list of possibilities. But it wouldn’t really be science unless they also, eventually, re-framed the question: what conditions would be required if we wanted to turn the entire planet into a thermonuclear bomb? In 1975, a radiation physicist at the University of Chicago, H.C. Dudley, published an article in the Bulletin of the Atomic Scientists warning of the “ultimate catastrophe” of setting the atmosphere on fire. This drew several rebuttals and a lot of scorn, including one in the pages of the Bulletin by Hans Bethe, who had previously addressed this question in the Bulletin in 1946. Interestingly, though, Dudley’s main desire — that somebody re-run these calculations with a modern computer simulation — did seem to generate research along these lines at Lawrence Livermore National Laboratory.6

In 1979, Livermore scientists Thomas A. Weaver and Lowell Wood (the latter appropriately a well-known Edward Teller protégé) published a paper on “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” which is a jargony way of asking the question in the title of this post. Here’s the abstract:

General conditions for the initiation of a nuclear-detonation wave in an atmosphere having plane symmetry (e.g., a thin, layered fluid envelope on a planet or star) are developed. Two classes of such a detonation are identified: those in which the temperature of the plasma is similar to that of the electromagnetic radiation permeating it, and those in which the temperature of the plasma is substantially greater. Necessary conditions are developed for the propagation of such detonation waves to an arbitrarily great distance. The contribution of fusion chain reactions to these processes is assessed. Through these considerations, it is shown that neither the atmosphere nor the oceans of the Earth can be made to undergo a propagating nuclear detonation under any circumstances.7

Now if you just read the abstract, you might think it was merely another version (with fancier calculations) of the Konopinski-Marvin-Teller paper. And they do rule out conclusively that N+N reactions would ever be energetic enough to be self-propagating. But it is so much more! Because unlike Konopinski-Marvin-Teller, it actually focuses on those “necessary conditions”: what would have to be different, if you did want to have a self-propagating reaction?

The answer they found: if the Earth’s oceans had twenty times more deuterium than they actually contain, they could be ignited by a 20 million megaton bomb (which is to say, a bomb with a yield equivalent to 20 teratons of TNT, or a bomb some 200,000 times more powerful than the Tsar Bomba’s full 100-megaton design yield). Even if we assume that such a weapon had a fantastically efficient yield-to-weight ratio like 50 kt/kg, that’s still a weapon that would weigh around 400,000 metric tons, something like 15,000 times the weight of the Tsar Bomba itself.8
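
As a sanity check on those magnitudes, here is the arithmetic (my own back-of-envelope, using the 2 x 10^7 Mt figure from the Weaver-Wood paper and the hypothetical 50 kt/kg ratio):

```python
# Sanity-checking the ocean-igniting bomb numbers (my arithmetic).
yield_mt = 2.0e7                # required yield per Weaver & Wood, megatons
yield_kt = yield_mt * 1.0e3     # megatons -> kilotons

tsar_bomba_mt = 100.0           # Tsar Bomba full design yield, megatons
print(f"{yield_mt / tsar_bomba_mt:,.0f}x the Tsar Bomba")   # 200,000x

ratio_kt_per_kg = 50.0          # hypothetical "fantastic" yield-to-weight
mass_tons = yield_kt / ratio_kt_per_kg / 1.0e3
print(f"{mass_tons:,.0f} metric tons")                      # 400,000 t

print(f"{mass_tons / 27:,.0f}x the ~27-ton Tsar Bomba")     # ~14,815x
```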

So there you have it — it can be done! You just have to radically change the composition of the oceans, and then build a nuclear weapon many orders of magnitude more powerful than the gigaton bombs dreamed of by Edward Teller, and then, maybe, you’ll get the cleansing thermonuclear fire experience.

Which is to say, this won’t be how the world ends. But don’t worry, there are plenty of other plausible options for human self-extinction out there. They just probably won’t be as quick.


I am in the process of finishing my book manuscript, which is the real work of the summer, so most other writing, including blogging, is taking a back seat for a few months while I focus on that. The irreverent title of this post is taken from a recurring theme in the Twitter feed of anthropology grad student Martin “Lick the Bomb” Pfeiffer, whose work you should check out if you haven’t already.

(The Impossibility of) Lighting Atmospheric Fire,” does a very good job of reviewing some of the wartime discussions and scientific issues.

  • Emil Konopinski, Cloyd Marvin, and Edward Teller, “Ignition of the Atmosphere with Nuclear Bombs,” (14 August 1946), LA-602, Los Alamos National Laboratory. Konopinski and Teller also apparently wrote an unpublished report on the subject in 1943. I have only seen reference to it, as report LA-001 (suspiciously similar to the LA-1 that is the Los Alamos Primer), but have not seen it.
  • Teller, in October 1945, wrote the following to Enrico Fermi about the possibility of a “Super” detonating the atmosphere, as part of what was essentially a “Frequently Asked Questions” about the H-bomb: “Careful considerations and calculations have shown that there is not the remotest probability of such an event [ignition of the atmosphere]. The concentration of energy encountered in the super bomb is not higher than that of the atomic bomb. To my mind the dangers were greater when the first atomic bomb was tested, because our conclusions were based at that time on longer extrapolations from known facts. The danger of the super bomb does not lie in physical nature but in human behavior.” What I find most fascinating about this is his comment about Trinity, though Teller’s rhetorical point is an obvious one (overstate the Trinity uncertainty after the fact in order to emphasize his certainty in the present). Edward Teller to Enrico Fermi (31 October 1945), Harrison-Bundy Files Relating to the Development of the Atomic Bomb, 1942-1946, microfilm publication M1108 (Washington, D.C.: National Archives and Records Administration, 1980), Roll 6, Target 5, Folder 76, “Interim Committee — Scientific Panel.”
  • Manhattan District History, Book 8, Volume 2 (“Los Alamos – Technical”), paragraph 1.50.
  • Percy W. Bridgman to Hans Bethe, forwarded by Norris Bradbury to Leslie Groves via TWX (13 March 1946), copy in Nuclear Testing Archive, Las Vegas, NV, document NV0128609.
  • H.C. Dudley, “The Ultimate Catastrophe,” Bulletin of the Atomic Scientists (November 1975), 21; Hans Bethe, “Can Air or Water Be Exploded?,” Bulletin of the Atomic Scientists 1, no. 7 (15 March 1946), 2; Hans Bethe, “Ultimate Catastrophe?,” Bulletin of the Atomic Scientists 32, no. 6 (1976), 36-37; Frank von Hippel, “Taxes Credulity (Letter to the Editor),” Bulletin of the Atomic Scientists (January 1976), 2.
  • Thomas A. Weaver and Lowell Wood, “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” Physical Review A 20, no. 1 (1 July 1979), 316-328. DOI: https://doi.org/10.1103/PhysRevA.20.316.
  • Specifically, they conclude it would take a 2 x 10^7 Mt energy release, which they call “fantastic,” to ignite an ocean with a 1:300 (instead of the actual 1:6,000) concentration of deuterium. As an aside, though, the collision event that created the Chicxulub Crater (and killed the dinosaurs, etc.) is believed to have released around 5 x 10^23 J, which works out to about 120 million megatons of TNT. So that’s not a totally unreasonable energy release for the Earth to encounter over the course of its existence — just not from nuclear weapons.
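
As a quick check of that last conversion (my arithmetic, using the standard 4.184 x 10^15 J per megaton of TNT):

```python
# Converting the Chicxulub impact energy into megatons of TNT equivalent.
impact_joules = 5.0e23
joules_per_megaton = 4.184e15
print(f"{impact_joules / joules_per_megaton:.2e} Mt")  # ~1.20e+08, i.e. ~120 million Mt
```
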
A brief history of the nuclear triad

    Summers for me are paradoxically the time I can get work done, and the time in which I feel I have the most work. I’m not teaching, which in theory means I have much more unstructured time. The consequence, though, is that I have about a million projects I am trying to get done in what is still a limited amount of time, and I’m also trying to see family, friends, and get a little rest. I sort of took June off from blogging (which I felt was my due after the amount of exposure I got in April and May), but I have several posts “in the hopper,” and several other things coming out soon. Yesterday I gave a talk at the US Department of State as part of their Timbie Forum (what used to be called their Generation Prague conference). I was tasked with providing the historical background on the US nuclear “triad,” as part of a panel discussion of the future of the triad. This is subject-matter I’ve taught before, so I felt pretty comfortable with it, but I thought I would return to a few of my favorite sources and refresh my understanding. This post is something of a write-up of my notes — more than I could say in a 20-minute talk.

    There is a lot of buzzing about lately about the future of the United States’ “nuclear triad.” The triad is the strategic reliance on three specific delivery “platforms” for deterrence: manned-bombers (the B-2 and the B-52), long-range intercontinental ballistic missiles (ICBMs; specifically the Minuteman III), and submarine-launched ballistic missiles (SLBMs; specifically the Trident II missile carried by Ohio class submarines). Do we need all three “legs” of the triad? I don’t know — that’s a question for another day, and depends on how you balance the specific benefits and risks of each “leg” with the costs of maintaining or upgrading them. But as we think about the future of the US arsenal, looking at how the triad situation came about, and how people started talking about it as a “triad,” offers some interesting food for thought.

    The modern nuclear triad. Source: Nuclear Posture Review, 2010.

    The stated logic of the triad has long been as follows: 1) bombers are flexible in terms of their armaments and deployments (and have non-nuclear roles); 2) ICBM forces are kept far from the enemy, are highly accurate, and thus make a first-strike attack require a huge amount of “investment” to contemplate; 3) SLBM forces are, for the near term, capable of being kept completely hidden from attack, and thus are a guaranteed “second strike” capability. The combination of these three factors, the logic goes, keeps anyone from thinking they could get away with a nuclear attack.

    That’s the rationale. It’s not the history of it, though. Like so many things, the history is rather wooly, full of stops-and-starts, and a spaghetti graph of different organizations, initiatives, committees, industrial contractors, and ideas. I have tried to summarize a lot of material below — with an idea to pointing out how each “leg” of the triad got (or did not get, depending on when) the support it needed to become a reality. I only take these histories up through about 1960, after which each of the three “legs” were deployed (and to try and go much further would result in an even-longer post).

    LEG 1: MANNED BOMBERS

    The United States’ first approach to the “delivery” question was manned, long-range bombers. Starting with the B-29, which delivered the first atomic bombs, and some 80 million pounds of incendiaries, over Japanese cities during World War II, the US was deeply committed to the use of aircraft as the means of getting the weapons from “here” to “there.” Arguably, this commitment was a bit overextended. Bureaucratic and human factors led to what might be called a US obsession with the bomber. The officers who rose through the ranks of the US Army Air Forces, and the newly-created (in 1947) US Air Force, were primarily bomber men. They came out of a culture that saw pilots as the ultimate embodiment of military prowess. There were some exceptions, but they were rare.

    The B-29’s power was more than military — it became a symbol of a new form of warfare for the generals of the newly-constituted US Air Force. Source.

    In their defense, the US had two major advantages over the Soviet Union with respect to bombers. The first is that the US had a lot more experience building them: the B-29 “Superfortress” was an impressive piece of machinery, capable of flying further, faster, and with a higher load of armaments than anything else in the world at the time, and it was just the beginning.

    The second was geography. The B-29 had a lot of range, but it wasn’t intercontinental. With a range of some 3,250 miles, it could go pretty far: from the Marianas to anywhere in Japan and back, for example. But it couldn’t fly a bomb-load to Moscow from the United States (not even from Alaska, which was only in range of the eastern half of Russia). This might not look like an advantage, but consider that this same isolation made it very hard for the Soviet Union to use bombers to threaten the United States in the near-term, and that the US had something that the USSR did not: lots of friends near its enemy’s borders.

    As early as late August 1945, the United States military planners were contemplating how they could use friendly airfields — some already under US control, some not — to put a ring around the Soviet Union, and to knock it out of commission if need be. In practice, it took several years for this to happen. Deployments of non-nuclear components of nuclear weapons abroad waited until 1948, during the Berlin Blockade, and the early stages of the Korean War.

    US nuclear bomber deployments, 1945-1958. One of my favorite slides that I use when teaching — it shows what “containment” comes to mean, and amply demonstrates the geopolitics of Cold War bomber bases. Shadings indicate allies/blocs circa 1958.

    In 1951, President Truman authorized small numbers of nuclear weapons (with fissile cores) to be deployed to Guam. But starting in 1954, American nuclear weapons began to be dispersed all-around the Soviet perimeter: French Morocco, Okinawa, and the United Kingdom in 1954; West Germany in 1955; Iwo Jima, Italy, and the Philippines in 1957; and France, Greenland, Spain, South Korea, Taiwan, and Tunisia in 1958. This was “containment” made real, all the more so as the USSR had no similar options in the Western Hemisphere until the Cuban Revolution. (And as my students always remark, this map puts the Cuban Missile Crisis into perspective.)1

    And if the B-29 had been impressive, later bombers were even more so. The B-36 held even more promise. Its development had started during World War II, and its ability to extend the United States’ nuclear reach was anticipated as early as 1945. It didn’t end up being deployed until 1948, but it added over 700 miles to the range of US strategic forces, and could carry some 50,000 lbs more fuel and armament. The B-52 bomber, still in service, was ready for service by 1955; it extended the range of bombers by another several hundred miles, and increased the maximum flight speed by more than 200 miles per hour.2

    Plane | First flight | Introduced in service | Combat range (mi) | Maximum speed (mph) | Service ceiling (ft) | Bomb weight (lbs)
    ----- | ------------ | --------------------- | ----------------- | ------------------- | -------------------- | -----------------
    B-17  | 1935         | 1938                  | 2,000             | 287                 | 35,600               | 4,500
    B-29  | 1942         | 1944                  | 3,250             | 357                 | 31,850               | 20,000
    B-36  | 1946         | 1948                  | 3,985             | 435                 | 43,000               | 72,000
    B-52  | 1952         | 1955                  | 4,480             | 650                 | 50,000               | 70,000
    B-2   | 1989         | 1997                  | 6,000             | 630                 | 50,000               | 40,000

    So you can see, in a sense, why the US Air Force was so focused on bombers. They worked, they held uniquely American advantages, and you could see how incremental improvement would make them fly faster, farther, and with more weight than before. But there were more than just technical considerations in mind: fascination with the bomber was also cultural. It was also about the implied role of skill and value of control in a human-driven weapon, and it was also about the idea of “brave men” who fly into the face of danger. The bomber pilot was still a “warrior” in the traditional sense, even if his steed was a complicated metal tube flying several miles above the Earth.

    LEG 2: LAND-BASED INTERCONTINENTAL BALLISTIC MISSILES (ICBMs)

    But it wasn’t just that the USAF was pro-bomber. They were distinctly anti-missile for a long time. Why? The late Thomas Hughes, in his history of Project Atlas, attributes a distinct “conservative momentum, or inertia” to the USAF’s approach to missiles. Long-range missiles would be disruptive to the hierarchy: engineers and scientists would be on top, with no role for pilots in sight. Officers would, in a sense, become de-skilled. And perhaps there was just something not very sporting about lobbing nukes at another country from the other side of the Earth.3

    But, to be fair, it wasn’t just the Air Force generals. The scientists of the mid-1940s were not enthusiastic, either. Vannevar Bush told Congress in 1945 that:

    There has been a great deal said about a 3,000 mile high-angle rocket. In my opinion such a thing is impossible and will be impossible for many years. The people who have been writing these things that annoy me have been talking about a 3,000 mile high-angle rocket shot from one continent to another carrying an atomic bomb, and so directed as to be a precise weapon which would land on a certain target such as this city. I say technically I don’t think anybody in the world knows how to do such a thing, and I feel confident it will not be done for a very long time to come.

    Small amounts of money had been doled out to long-range rocket research as early as 1946. The Germans, of course, had done a lot of pioneering work on medium-range missiles, and their experts were duly acquired and re-purposed as part of Operation Paperclip. The Air Force had some interest in missiles, though initially the ones they were more enthusiastic about were what we would call cruise missiles today: planes without pilots. Long-range ballistic missiles were very low on the priority list. As late as 1949 the National Security Council gave ballistic missiles no research priority going forward — bombers got all of it.

    Soviet testing of an R-1 (V-2 derivative) rocket at Kapustin Yar. Soviet rocket tests were detected by American radars — and spurred US interest in rockets. Source.

    Real interest in ballistic missiles did not begin until 1950, when intelligence reports gave indication of Soviet interest in the area. Even then, the US Air Force was slow to move — they wanted big results with small investment. And the thing is, rocket science is (still) “rocket science”: it’s very hard, all the more so when it’s never been really done before.

    As for the Soviets: while the Soviet Union did not entirely forego research into bombers, the same geographic factors as before encouraged them to look into long-range rockets much earlier than the United States. For the USSR to threaten the USA with bombers would require developing very long-range bombers (because they lacked the ability to put bases on the US perimeter), and contending with the possibility of US early-warning systems and interceptor aircraft. If they could “skip” that phase of things, and jump right to ICBMs, all the better for them. Consequently, Stalin had made missile development a top priority as early as 1946.

    It wasn’t until the development of the hydrogen bomb that things started to really change in the United States. With yields in the megaton range, suddenly it didn’t seem to matter as much if you couldn’t get the accuracy that high. You can miss by a lot with a megaton and still destroy a given target. Two American scientists played a big role here in shifting the Air Force’s attitude: Edward Teller and John von Neumann. Both were hawks, both were H-bomb aficionados, and both commanded immense respect from the top Air Force brass. (Unlike, say, J. Robert Oppenheimer, who was pushing instead for tactical weapons that could be wielded by the — gasp — Army.)

    Ivy Mike, November 1952. Accuracy becomes less of a problem.

    Teller and von Neumann told the Air Force science board that the time had come to start thinking about long-range missiles — that in the near term, you could fit 1-2 megatons of explosive power into a 1-ton warhead. This was still pretty ambitious. The US had only just tested its first warhead prototype, Ivy Mike, which was an 80-ton experiment. They had some other designs on the books, but even the smaller weapons tested as part of Operation Castle in 1954 were multi-ton. But it was now very imaginable that further warhead progress would make up that difference. (And, indeed, by 1958 the W49 warhead managed to squeeze 1.44 Mt of blast power into under 1 ton of weight — a yield-to-weight ratio of 1.9 kt/kg.)
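
    As a quick check of that last ratio (the W49’s weight here is my approximation; it is usually described as weighing roughly 750 kg):

    ```python
    # Rough yield-to-weight check for the W49 (the mass is an assumption).
    yield_kt = 1440.0     # 1.44 Mt expressed in kilotons
    mass_kg = 750.0       # approximate W49 warhead weight
    print(f"{yield_kt / mass_kg:.1f} kt/kg")   # ~1.9 kt/kg
    ```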

    The USAF set up an advisory board, headed by von Neumann, with Teller, Hans Bethe, Norris Bradbury, and Herbert York on it. The von Neumann committee concluded that long-range missile development needed to be given higher priority in 1953. Finally, the Department of Defense initiated a full-scale ICBM program — Project Atlas — in 1954.

    Even this apparent breakthrough of bureaucratic inertia took some time to really get under way. You can’t just call up a new weapons system from nothing by sheer will alone. As Hughes explains, there were severe doubts about how one might organize such a work. The first instinct of the military was to just order it up the way they would order up a new plane model. But the amount of revolutionary work was too great, and the scientists and advisors running the effort really feared that if you went to a big airplane company like Convair and said, “make me a rocket,” the odds that they’d actually be able to make it work were low. They also didn’t want to assign it to some new laboratory run by the government, which they felt would be unlikely to be able to handle the large-scale production issues. Instead, they sought a different approach: contract out individual “systems” of the missile (guidance, fuel, etc.), and have an overall contractor manage all of the systems. This took some serious effort to get the DOD and Air Force to accept, but in the end they went with it.

    Launch sequence of an Atlas-D ICBM, 1960. Source.

    Even then things were pretty slow until mid-1954, when Congressional prodding (after they were told that there were serious indications the Soviets were ahead in this area) finally resulted in Atlas being given total overriding defense priority. Even then the people in charge of it had to find ways to shortcut around the massive bureaucracy that had grown up around the USAF and DOD contracting policies. In Hughes’ telling of Atlas, it is kind of amazing that it got done as rapidly as it did — it seems that there were near-endless internal obstacles to get past. The main problem, one Air Force historian opined, was not technical: “The hurdle which had to be annihilated in correcting this misunderstanding was not a sound barrier, or a thermal barrier, but rather a mental barrier, which is really the only type that man is ever confronted with anyway.” According to one estimate, the various long-term cultural foot-dragging about ballistic missiles in the United States delayed the country from acquiring the technology for six years. Which puts Sputnik into perspective.

    The US would start several different ballistic missile programs in the 1950s:

    Rocket family | Design started | Role | Military patron | Prime industrial contractor | Warhead yield
    ------------- | -------------- | ---- | --------------- | --------------------------- | -------------
    Redstone      | 1950           | IRBM | US Army         | Chrysler                    | 0.5-3.5 Mt
    Atlas         | 1953           | ICBM | USAF            | Convair                     | 1.44 Mt
    Thor          | 1954           | IRBM | USAF            | Douglas                     | 1.4 Mt
    Titan         | 1955           | ICBM | USAF            | Glenn Martin                | 3.75 Mt
    Polaris       | 1956           | SLBM | USN             | Lockheed                    | 0.6 Mt
    Minuteman     | 1957           | ICBM | USAF            | Boeing                      | 1.2 Mt

    As you can see, there’s some redundancy there. It was deliberate: Titan, for example, was a backup to Atlas in case it didn’t work out. There’s also some interesting stuff going on with regards to other services (Army, Navy) not wanting to be “left out.” More on that in a moment. Minuteman, notably, was based on solid fuel, not liquid, giving it different strategic characteristics, and was a late addition. The Thor and Redstone projects were for intermediate-range ballistic missiles (IRBMs), not ICBMs — they were missiles you’d have to station closer to the enemy than the continental United States (e.g., the famous Jupiter missiles kept in Turkey).

    The redundancy was a hedge: the goal was to pick the top two of the programs and cancel the rest. Instead, Sputnik happened. In the resulting political environment, Eisenhower felt he had to put into production and deployment all six of them — even though some were demonstrably not as technically sound as others (Thor and Polaris, in their first incarnations, were fraught with major technical problems). This feeling that he was pushed by the times (and by Congress, and the services, and so on) towards an increasingly foolish level of weapons production is part of what is reflected in Eisenhower’s famous 1961 warning about the powerful force of the “military-industrial complex.”4

    LEG 3: SUBMARINE-LAUNCHED BALLISTIC MISSILES (SLBMs)

    Polaris is a special and interesting case, because it’s the only one in that list that is legitimately a different form of delivery. Shooting a ballistic missile is hard enough; shooting one from a submarine platform was understandably more so. Today the rationale of the SLBM seems rather obvious: submarines have great mobility, can remain hidden underwater even at time of launch, and in principle seem practically “invulnerable” — the ultimate “second strike” guarantee. At the time they were proposed, though, they were anything but an obvious approach: the technical capabilities just weren’t there. As already discussed at length, even ICBMs were seen with a jaundiced eye by the Air Force in the 1950s. Putting what was essentially an ICBM on a boat wasn’t going to be something the Air Force was going to get behind. Graham Spinardi’s From Polaris to Trident is an excellent, balanced discussion of the technical and social forces that led to the SLBM becoming a key leg of the “triad.”5

    The USS Tunny launches a cruise missile (Regulus) circa 1956. Source.

    The Navy had in fact been interested in missile technology since the end of World War II, getting involved in the exploitation of German V-2 technology by launching one from an aircraft carrier in 1947. But they were also shy of spending huge funds on untested, unproven technology. Like the Air Force, they were initially more interested in cruise than ballistic missiles. Pilotless aircraft didn’t seem too different from piloted aircraft, and the idea of carrying highly-volatile liquid fueled missiles made Navy captains squirm. The Regulus missile (research started in 1948, and fielded in 1955) was the sort of thing they were willing to look at: a nuclear-armed cruise missile that could be launched from a boat, with a range of 575 miles. They were also very interested in specifically-naval weapons, like nuclear-tipped torpedoes and depth charges.

    What changed? As with the USAF, 1954 proved a pivotal year, after the development of the H-bomb, the von Neumann committee’s recommendations, and fears of Soviet work combined with a few other technical changes (e.g., improvements in solid-fueled missiles, which reduced the fear of onboard explosions and fires). The same committees that ended up accelerating American ICBM work similarly ended up promoting Naval SLBM work as well, as the few SLBM advocates in the Navy were able to use them to make a run-around of the traditional authority. At one point, a top admiral cancelled the entire program, but only after another part of the Navy had sent around solicitations to aerospace companies and laboratories for comment, and the comments proved enthusiastic-enough that they cancelled the cancellation.

    As with the ICBM, there was continued opposition from top brass about developing this new weapon. The technological risks were high: it would take a lot of money and effort to see if it worked, and if it didn’t, you couldn’t get that investment back. What drove them to finally push for it was a perception of being left out. The Eisenhower administration decided in 1955 that only four major ballistic missile programs would be funded: Atlas, Titan, Thor, and Redstone. The Navy would have to partner with either the USAF or US Army if it wanted any part of that pie. The USAF had no need of it (and rejected an idea for a ship-based Thor missile), but the Army was willing to play ball. The initial plan was to develop a ship-based Jupiter missile (part of the Redstone missile family); the original schedule was to have one that could be fielded by 1965.

    But the Navy quickly grew dissatisfied with Jupiter’s adaptability to sea. It would have to be shrunk dramatically to fit onto a submarine, and the liquid fuel raised huge safety concerns. They quickly started modifying the requirements, producing a smaller, solid-fueled intermediate-range missile. They were able to convince the Army that this was a “back-up” to the original Jupiter program, so it would technically not look like a new ballistic missile program. Even so, it was an awkward fit: even the modified Jupiters were too large and bulky for the Navy’s plans.

    What led to an entirely new direction was a fortuitous meeting between a top naval scientist and Edward Teller (who else?), at a conference on anti-submarine warfare in the summer of 1956. At the conference, Teller suggested that trends in warhead technology meant that by the early 1960s the United States would be able to field megaton-range weapons inside a physics package that could fit into small, ship-based missiles. Other weapons scientists regarded this as possibly dangerous over-hyping and over-selling of the technology, but the Navy was convinced that they could probably get within the right neighborhood of yield-to-weight ratios. By the fall of 1956, the Navy had approved a plan to create their own ballistic missile with an entirely different envelope and guidance system than Jupiter, and so Polaris was born.

    Artist’s conception of a Polaris missile launch. Source.

    The first generation of Polaris (A-1) didn’t quite meet the goals articulated in 1956, but it got close. Instead of a megaton, it was 600 kilotons. Instead of 1,500 mile range, it was 1,200. These differences matter, strategically: there was really only one place it could be (off the coast of Norway) if it wanted to hit any of the big Soviet cities. And entirely separately, the first generation of Polaris warheads were, to put it mildly, a flop. They used an awful lot of fissile material, and there were fears of criticality accidents in the event of an accidental detonation. No problem, said the weapons designers: they’d put a neutron-absorbing strip of cadmium tape in the core of the warhead, so that if the high explosives were ever to detonate, no chain reaction would be possible. Right before any intended use, a motor would withdraw the tape. Sounds good, right? Except in 1963, it was discovered that the tape corroded while inside the cores. It was estimated that 75% of the warheads would not have detonated: the mechanism would have snapped the tape, which would then have been stuck inside the warhead. As Eric Schlosser notes in Command and Control, a Navy officer concluded that they had “almost zero confidence that the warhead would work as intended.” They all had to be replaced.6

    The first generation of Polaris missiles, fielded in 1960, were inaccurate and short-ranged (separate from the fact that the warheads wouldn’t have worked). This relegated them to a funny strategic position. They could only be used as a counter-value second strike: they didn’t have the accuracy necessary to destroy hardened targets, and many of those were more centrally-located in the USSR.

    WHEN AND WHY DO WE TALK ABOUT A TRIAD?

    The “triad” was fielded starting in the 1960s. But there was little discussion of it as a “triad” per se: it was a collection of different weapon systems. Indeed, deciding that the US strategic forces were really concentrated into just three forces is a bit of an arbitrary notion, especially during the Cold War but even today. Where do foreign-based IRBMs fit into the “triad” concept? What about strategic weapons that can be carried on planes smaller than heavy bombers? What about the deterrence roles of tactical weapons, the nuclear artillery shells, torpedoes, and the itty-bitty bombs? And, importantly, what about the cruise missiles, which have developed into weapons that can be deployed from multiple platforms?

    Relative word frequency for “nuclear triad” as measured across the Google Books corpus. Source.

    It’s become a bit cliché in history circles to pull up Google Ngrams whenever we want to talk about a concept, the professorial equivalent of the undergraduate’s introductory paragraph quoting from the dictionary. But it’s a useful tool for thinking about when various concepts “took hold” and their relative “currency” over time. What is interesting in the above graph is that the “triad” language seems to surface primarily in the 1970s, gets huge boosts in the late Cold War, and then slowly dips after the end of the Cold War, into the 21st century.
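
    For those who want to poke at the same data, here is a sketch of how one might pull the frequency series behind a chart like this. It relies on Google’s unofficial Ngram JSON endpoint, which is an assumption on my part: it is not a documented API, and the parameters (especially the corpus id) may change without notice.

    ```python
    # Hypothetical sketch: fetching the "nuclear triad" relative-frequency
    # series from Google's *unofficial* Ngram JSON endpoint (not a supported
    # API; it may change or disappear).
    import json
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "content": "nuclear triad",
        "year_start": 1950,
        "year_end": 2008,
        "corpus": 26,      # assumed id for an English corpus
        "smoothing": 3,
    })
    url = "https://books.google.com/ngrams/json?" + params

    with urllib.request.urlopen(url) as resp:
        results = json.load(resp)

    if results:
        series = results[0]["timeseries"]   # one value per year
        for year, freq in zip(range(1950, 2009), series):
            print(year, freq)
    ```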

    Which is to say: the language of the “triad” comes well after the various weapon systems have been deployed. It is not the “logic” of why they made the weapons systems in the first place, but a retrospective understanding of their strategic roles. Which is no scandal: it can take time to see the value of various technologies, to understand how they affect things like strategic stability.

    But what’s the context of this talk about the triad? If you go into the Google Books entries that power the graph, they are language along the lines of: “we rely on the triad,” “we need the triad,” “we are kept safe by the triad,” and so on. This sort of assertive language is a defense: you don’t need to sing the praises of your weapons unless someone is doubting their utility. The invocation of the “triad” as a unitary strategic concept seems to have come about when people started to wonder whether we actually needed three major delivery systems for strategic weapons.

    A strange elaboration of the triad notion from the Defense Logistics Agency, in which the “new triad” includes the “old triad” squished into one “leg,” with the other “legs” being even less tangible notions joined by a web of command and control. At this point, I’d argue it might be worth ditching the triad metaphor. Source.

    When you give something abstract a name, you aid in the process of reification, making it seem tangible, real, un-abstract. The notion of the “triad” is a concept, a unifying logic of three different technologies, one that asserts quite explicitly that you need all three of them. This isn’t to say that this is done in bad faith, but it’s a rhetorical move nonetheless. What I find interesting about the “triad” concept — and what it leaves out — is that it is ostensibly focused on technologies and strategies, but it seems non-coincidentally to be primarily concerning itself with infrastructure. The triad technologies each require heavy investments in bases, in personnel, in jobs. They aren’t weapons so much as they are organizations that maintain weapons. Which is probably why you have to defend them: they are expensive.

    I don’t personally take a strong stance on whether we need to have ICBMs and bombers and SLBMs — there are very intricate arguments about how these function with regards to the strategic logic of deterrence, whether they provide the value relative to their costs and risks, and so on, that I’m not that interested in getting into the weeds over. But the history interests me for a lot of reasons: it is about how we mobilize concepts (imposing a “self-evident” rationality well after the fact), and it is also about how something that in retrospect seems so obvious to many (the development of missiles, etc.) can seem so un-obvious at the time.

    Notes
    1. The list of these deployments comes from the appendices in History of the Custody and Deployment of Nuclear Weapons, July 1945 through September 1977 (8MB PDF here), prepared by the Office of the Assistant to the Secretary of Defense (Atomic Energy), in February 1978, and Robert S. Norris, William Arkin, and William Burr, “Where They Were,” Bulletin of the Atomic Scientists (November/December 1999), 27-35, with a follow-up post on the National Security Archive’s website.
    2. All of the quantitative data on these bombers was taken from their Wikipedia pages. In places where there were ranges, I tried to pick the most representative/likely numbers. I am not an airplane buff, but I am aware this is the sort of thing that gets debated endlessly on the Internet!
    3. Thomas Hughes, Rescuing Prometheus: Four Monumental Projects That Changed the Modern World (New York: Pantheon Books, 1998), chapter 3, “Managing a Military Industrial Complex: Atlas,” 69-139.
    4. Eric Schlosser’s Command and Control has an excellent discussion of the politics of developing the early missile forces.
    5. Graham Spinardi, From Polaris to Trident: The Development of US Fleet Ballistic Missile Technology (Cambridge University Press, 1994).
    6. Spinardi, as an aside, gives a nice account of how they eventually achieved the desired yield-to-weight ratio in the W-47: the big “innovation” was to just use high-enriched uranium as the casing of the secondary, instead of unenriched uranium. As he notes, this was the kind of thing that was obvious in retrospect, but wasn’t obvious at the time — it required a different mindset (one much more willing to “expend” fissile material!) than the weapons designers of the early 1950s were used to.