Cleansing thermonuclear fire

What exactly would it take to turn the world into one big fusion reaction, wiping it clean of life and making it a barren rock? Asking for a friend.

Graphic from the 1946 film, “One World Or None,” produced by the National Committee on Atomic Information, advocating for the importance of the international control of atomic power.

One might wonder whether that kind of question presented itself while I was reading the news headlines these days, and one would be entirely correct. But the reason people typically ask this question is in reference to the story that scientists at Los Alamos thought there was a non-zero chance that the Trinity test might ignite the atmosphere during the first test of the atomic bomb.

The basic concept is a simple one: if you heat up very light atoms (like hydrogen) to very high temperatures, they’ll zip around like mad, and the chances that they’ll collide into one another and undergo nuclear fusion become much greater. If that happens, they’ll release even more energy. What if the initial blast of an atomic bomb started fusion reactions in the air around it, say between the atoms of oxygen or nitrogen, and those fusion reactions generated enough energy to start further reactions, and so on, across the whole atmosphere?

It’s difficult to say exactly how seriously this was taken. It is clear that at one point Arthur Compton worried about it, and that, likewise, several scientists developed persuasive reasoning to the effect that it couldn’t happen. James Conant, upon experiencing the searing heat of the Trinity test, briefly reflected that perhaps this rumored thing had, indeed, come to pass:

Then came a burst of white light that seemed to fill the sky and appeared to last for seconds. I had expected a relatively quick and bright flash. The enormity of the light and its length quite stunned me. My instantaneous reaction was that something had gone wrong and that the thermal nuclear [sic] transformation of the atmosphere, once discussed as a possibility and jokingly referred to a few minutes earlier, had actually taken place.

Which does at least tell us that some of those at the test were still joking about it, even up to the last few minutes. Fermi reportedly took wagers on whether the bomb would destroy just New Mexico or the entire world, but it was understood as a joke.1

The introduction of the Konopinski, Marvin, and Teller paper of 1946. Filed under: “SCIENCE!“

In the fall of 1946, Emil Konopinski, Cloyd Marvin, and Edward Teller (who else?) wrote up a paper describing why no detonation on Earth was likely to start an uncontrolled fusion reaction in the atmosphere. It is not clear to me whether this is exactly the logic they used prior to the Trinity detonation, but it is probably of a similar character to it.2 In short, there was only one fusion reaction based on the constituents of the air that had any probability at all (the nitrogen-nitrogen reaction), and the scientists were able to show that it was not very likely to occur or spread. Even if one assumes that the reaction was much easier to start than anyone thought it was likely to be, it wasn’t likely to be sustained. The reaction would cool (through a variety of physical mechanisms) faster than it could spread.

That is all a standard part of Manhattan Project lore. But I suspect many who have read about this before have not actually read the Konopinski-Marvin-Teller paper to its end, where they conclude on a less sure-of-themselves note:

There remains the remote possibility that some other less simple mode of burning may maintain itself in the atmosphere.

Even if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.

One may conclude that the arguments of this paper make it unreasonable to expect that the N+N reaction could propagate. An unlimited propagation is even less likely. But the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.

That’s not quite as secure as one might want, considering that these scientists were in fact working on developing weapons many times more powerful than the Trinity device.3

The relevant part of the Manhattan District History (cited below) interestingly links the work on the “Super” hydrogen bomb with the research into whether the atmosphere might be incinerated, which makes sense, though it would be interesting to know just how closely connected these questions were.

There is an interesting section in the recently-declassified Manhattan District History that covers the ignition of the atmosphere issue. They repeat essentially the Konopinski-Marvin-Teller results, and then conclude:

The impossibility of igniting the atmosphere was thus assured by science and common sense. The essential factors in these calculations, the Coulomb forces of the nucleus, are among the best understood phenomena of modern physics. The philosophic possibility of destroying the earth, from the theoretical convertibility of mass into energy, remains. The thermonuclear reaction, which is the only way now understood in which such a catastrophe could take place, is evidently eliminated. The general stability of matter in the observable universe argues against it. Further knowledge of the nature of the great stellar explosions, novae and supernovae, will throw light on these questions. In the almost complete absence of genuine knowledge, it is generally believed that the tremendous energy of these explosions is of gravitational rather than nuclear origin.4

Which again is at once reassuring and not reassuring. The footing on which this knowledge was based was… pretty good? But like good scientists, they were willing, at least in secret reports, to acknowledge that there might actually be ways for the earth to be destroyed through nuclear testing that they hadn’t considered. Intellectually honest, but also terrifying.

The ever-relevant XKCD.

This issue came up again prior to the Operation Crossroads nuclear tests in early 1946, which were to include at least one underwater shot. None other than Nobel Prize-winning physicist Percy Bridgman worried that detonating an atomic bomb underwater might ignite a fusion reaction in the water. Bridgman admitted his own ignorance of nuclear physics (his specialty was high-pressure physics), but warned that:

Even the best human intellect has not imagination enough to envisage what might happen when we push far into new territory. … To an outsider the tactics of the argument which would justify running even the slightest risk of that colossal catastrophe seem exceedingly weak.5

Bridgman’s worries weren’t really that the world would be destroyed. He worried more that if the scientists appeared to be cavalier about these things, and it was later made public that their argument for the safety of the tests was based on flimsy evidence, it would lead to a strong public backlash: “There might be a reaction against science in general which would result in the suppression of all scientific freedom and the destruction of science itself.” Bridgman’s views were strong enough that they were forwarded to General Groves, but it isn’t clear whether they led to any significant changes (though I wonder if they were the impetus for the writing-up of the Konopinski-Marvin-Teller paper; the timing sort of works out, but I don’t know).

There isn’t a lot of evidence that this issue worried the scientists very much going forward. They had other things on their minds, like building thermonuclear weapons, and it quickly became clear that starting a big fusion reaction with a fission bomb is hard. Which is, in its own way, an answer to the original question: if starting a runaway fusion reaction on purpose is hard, and requires very particular kinds of arrangements and conditions to get working even on a (relatively) small scale, then starting one in the entire atmosphere is probably going to be impossible.

Operation Fishbowl, Shot Checkmate (1962) — a low-yield weapon, but something about its perfect symmetry and the trail of the rocket that put it into the air evokes, for me, the notion of a planet turning into a star. Source: Los Alamos National Laboratory.

Great — cross that one off the list of possibilities. But it wouldn’t really be science unless they also, eventually, re-framed the question: what conditions would be required if we wanted to turn the whole earth into a thermonuclear bomb? In 1975, a radiation physicist at the University of Chicago, H.C. Dudley, published an article in the Bulletin of the Atomic Scientists warning of the “ultimate catastrophe” of setting the atmosphere on fire. This drew several rebuttals and a lot of scorn, including one in the pages of the Bulletin by Hans Bethe, who had previously addressed this question in the Bulletin in 1946. Interestingly, though, Dudley’s main desire — that somebody re-run these calculations on a modern computer simulation — did seem to generate research along these lines at Lawrence Livermore National Laboratory.6

In 1979, Livermore scientists Thomas A. Weaver and Lowell Wood (the latter appropriately a well-known Edward Teller protégé) published a paper on “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” which is a jargony way of asking the question in the title of this post. Here’s the abstract:

The general conditions for the initiation of a nuclear-detonation wave in an atmosphere having plane symmetry (e.g., a thin, layered fluid envelope on a planet or star) are developed. Two classes of such a detonation are identified: those in which the temperature of the plasma is similar to that of the electromagnetic radiation permeating it, and those in which the temperature of the plasma is substantially greater. Necessary conditions are developed for the propagation of such detonation waves over an arbitrarily great distance. The contribution of fusion chain reactions to these processes is assessed. Through these considerations, it is shown that neither the atmosphere nor the oceans of the Earth can be made to undergo propagating nuclear detonation under any circumstances.7

Now if you just read the abstract, you might think it was merely another version (with fancier calculations) of the Konopinski-Marvin-Teller paper. And they do conclusively rule out that N+N reactions would ever be energetic enough to be self-propagating. But it is much more! Because unlike Konopinski-Marvin-Teller, it actually focuses on those “necessary conditions”: what would have to be different, if you did want to have a self-propagating reaction?

The answer they found: if the Earth’s oceans had twenty times more deuterium than they actually contain, they could be ignited by a 20-million-megaton bomb (which is to say, a bomb with a yield equivalent to 20 teratons of TNT, or roughly 200,000 times the Tsar Bomba’s full design yield). Even if we assumed that such a weapon had a fantastically efficient yield-to-weight ratio like 50 kt/kg, that’s still a weapon that would weigh around 400,000 metric tons — roughly the combined displacement of four of the largest aircraft carriers ever built.8
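Since these orders of magnitude are easy to scramble, here is the arithmetic spelled out as a quick Python sketch. Everything in it beyond the 2 x 10^7 Mt figure (the Tsar Bomba design yield, the assumed yield-to-weight ratio, the TNT conversion constant) is an input you can quibble with:

```python
# Back-of-the-envelope arithmetic for the Weaver-Wood ignition threshold.
YIELD_MT = 2e7                 # 2 x 10^7 megatons (Weaver & Wood, 1979)
TSAR_BOMBA_DESIGN_MT = 100     # full design yield of the Tsar Bomba

yield_tons = YIELD_MT * 1e6                        # 2e13 tons = 20 teratons of TNT
tsar_multiple = YIELD_MT / TSAR_BOMBA_DESIGN_MT    # ~200,000x

# Assumed ratio: 50 kt of yield per kg of weapon mass, far beyond the
# ~5-6 kt/kg of the most efficient real thermonuclear weapons.
RATIO_KT_PER_KG = 50
mass_tonnes = (YIELD_MT * 1e3) / RATIO_KT_PER_KG / 1e3  # Mt -> kt -> kg -> tonnes

# Footnote 8's comparison point: the Chicxulub impact, ~5 x 10^23 joules.
chicxulub_mt = 5e23 / 4.184e15                     # joules per megaton of TNT

print(f"{yield_tons / 1e12:.0f} teratons of TNT")
print(f"{tsar_multiple:,.0f} x the Tsar Bomba's design yield")
print(f"weapon mass ~ {mass_tonnes:,.0f} metric tons")
print(f"Chicxulub ~ {chicxulub_mt / 1e6:.0f} million megatons")
```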

So there you have it — it can be done! You just have to completely change the composition of the oceans, and have a nuclear weapon many orders of magnitude more powerful than the gigaton bombs dreamed of by Edward Teller, and then, maybe, you’ll get the cleansing thermonuclear fire experience.

Which is to say, this won’t be how the world ends. But don’t worry, there are plenty of other plausible options for human self-extinction out there. They just probably won’t be as quick.


I am in the process of finishing my book manuscript, which is the real work of this summer, so most other writing, including blogging, is going to take a back seat for a few months while I focus on that. The irreverent title of this post is taken from a recurring theme in the Twitter feed of anthropology grad student Martin “Lick the Bomb” Pfeiffer, whose work you should check out if you haven’t already.

Notes

1. … “(The Impossibility of) Lighting Atmospheric Fire” does a very good job of reviewing some of the wartime discussions and scientific issues.

2. Emil Konopinski, Cloyd Marvin, and Edward Teller, “Ignition of the Atmosphere with Nuclear Bombs” (14 August 1946), LA-602, Los Alamos National Laboratory. Konopinski and Teller also apparently wrote an unpublished report on the subject in 1943. I have only seen references to it, as report LA-001 (suspiciously similar to the LA-1 that is the Los Alamos Primer), but have not seen the report itself.
3. Teller, in October 1945, wrote the following to Enrico Fermi about the possibility of a “Super” detonating the atmosphere, as part of what was essentially a “Frequently Asked Questions” about the H-bomb: “Careful considerations and calculations have shown that there is not the remotest possibility of such an event [ignition of the atmosphere]. The concentration of energy encountered in the super bomb is not higher than that of the atomic bomb. To my mind the dangers were greater when the first atomic bomb was tested, because our conclusions were based at that time on longer extrapolations from known facts. The danger of the super bomb does not lie in physical nature but in human behavior.” What I find most fascinating about this is his comment about Trinity, though Teller’s rhetorical point is an obvious one (overstate the Trinity uncertainty after the fact in order to emphasize his certainty in the present). Edward Teller to Enrico Fermi (31 October 1945), Harrison-Bundy Files Relating to the Development of the Atomic Bomb, 1942-1946, microfilm publication M1108 (Washington, D.C.: National Archives and Records Administration, 1980), Roll 6, Target 5, Folder 76, “Interim Committee — Scientific Panel.”
4. Manhattan District History, Book 8, Volume 2 (“Los Alamos – Technical”), paragraph 1.50.
5. Percy W. Bridgman to Hans Bethe, forwarded by Norris Bradbury to Leslie Groves via TWX (13 March 1946), copy in Nuclear Testing Archive, Las Vegas, NV, document NV0128609.
6. H.C. Dudley, “The Ultimate Catastrophe,” Bulletin of the Atomic Scientists (November 1975), 21; Hans Bethe, “Can Air or Water Be Exploded?,” Bulletin of the Atomic Scientists 1, no. 7 (15 March 1946), 2; Hans Bethe, “Ultimate Catastrophe?,” Bulletin of the Atomic Scientists 32, no. 6 (1976), 36-37; Frank von Hippel, “Taxes Credulity (Letter to the Editor),” Bulletin of the Atomic Scientists (January 1976), 2.
7. Thomas A. Weaver and Lowell Wood, “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” Physical Review A 20, no. 1 (1 July 1979), 316-328. DOI: https://doi.org/10.1103/PhysRevA.20.316.
8. Specifically, they conclude it would take a 2 x 10^7 Mt energy release, which they call “fantastic,” to ignite an ocean with a 1:300 (instead of the actual 1:6,000) concentration of deuterium. As an aside, though, the collision event that created the Chicxulub Crater (and killed the dinosaurs, etc.) is believed to have released around 5 x 10^23 J, which works out to about 120 million megatons of TNT. So that’s not a completely unreasonable energy release for the earth to encounter over the course of its existence — just not from nuclear weapons.
Mapping the US nuclear war plan for 1956

    A few months back, the National Security Archive made national headlines when they released a 1956 US target list they had obtained under the Freedom of Information Act. The target list outlined over a thousand Strategic Air Command nuclear targets in the Soviet Union, Eastern Bloc, the People’s Republic of China, and North Korea. The Archive had posted a small graphic of the ones in Eastern Europe, but hadn’t digitized the full list. Several weeks ago, the people at the Future of Life Institute did just this, digitizing the complete dataset — no small task, given that these were spread over several hundred, non-OCR-able pages of smudgy, 60-year-old government documents.1


    A sampling of the 1956 target list obtained by the National Security Archive. The digits encode latitude and longitude points, among other bits of information.

    I recently attended a conference that the FLI put on regarding nuclear war. FLI was co-founded by the MIT physicist Max Tegmark and his wife Meia (among a few others), both of whom I was glad I got to spend some time with, as they are interesting, intelligent people with interesting histories. They are interested in promoting work that decreases existential threats to the human race, which they see as possibly including things like nuclear war and nuclear winter, but also unhampered artificial intelligence, climate change, and the possible negative futures of biotechnology. These are all, of course, controversial topics (not always controversial among the same groups of people, to be sure). They’re an interesting group, and they are stirring up some interesting discussions, which I think is an unambiguously positive thing even if you don’t agree that all of these things are equally realistic threats, or threats on the same level.2


    The FLI’s digitized version of the target list. Click the image to view their interactive version.

    The target list, mapped out as the FLI did above, is already pretty impressive. While I was at the conference, I got the idea that it wouldn’t be that hard to reconfigure a few parts of the NUKEMAP code to allow me to import huge numbers of target lists in the right format. NUKEMAP already supports the targeting of multiple nukes (the feature is a little cryptic — you create a detonation, then click “launch multiple,” then move the cursor and can then create another one, and repeat as necessary), but it didn’t have any automatic way of importing a large number of settings. Once I had done that, I then thought, what would it look like if I used realistic weather data to determine the fallout patterns from surface bursts? It only took a little bit of further work to write a script that can poll OpenWeatherMap‘s public API and grab information about real-time wind speed and direction information about any given set of coordinates.3 This renders quite an impressive image, though to do this for some 1,154 targets requires a lot of RAM (about 1.5 GB) and a fast computer. So it’s not something one wants to necessarily do all the time.

    I have captured the results as a series of interactive screenshots, to better save you (and your web browser) the trouble of trying to render these yourself. You can see how changing the yield dramatically changes the fallout (assuming surface bursts, of course). The interactive viewer is available by clicking the image below, or this link.4


    Screenshot of my interactive viewer for the nuclear war plan. Click to view.

    I also sampled weather data from a few days in a row, to see what differences it made from a practical standpoint. It is remarkable how much wind speed and direction can vary from day to day. In some of these “simulations,” Copenhagen, Denmark, avoids fallout. In others, it does not. Under some weather conditions (and yield selections), northern Japan gets some fallout from an attack on the Soviet-controlled Kuril Islands; in others, it does not. The NUKEMAP’s fallout estimator is, of course, a very simplified model, but even with that you can get a sense of how much difference a shift in the winds can make.

    Having done that, I started to wonder: what would the casualties of such an attack look like? I don’t have population density data of the relevant areas from 1956 that has sufficient granularity to be used with my normal NUKEMAP casualty estimating script, but I figured that even the present-day population figures would be interesting. If you try to query the casualty database with over a thousand targets it just says “no,” so I wrote another script that would query it target-by-target and tally the results.
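    The tally script itself is nothing fancy; the only wrinkle is that it has to go target-by-target. In sketch form (the single-target query below is a hypothetical stand-in, since the casualty backend is not a documented public API; only the looping and summing is the point):

```python
# Sum single-detonation casualty estimates across the whole target list.
import csv

def estimate_casualties(lat, lon, yield_kt, airburst):
    """Hypothetical stand-in for a single-detonation casualty query."""
    return (0, 0)  # placeholder: (injuries, fatalities)

injuries = fatalities = 0
with open("targets_1956.csv") as f:  # placeholder file/column names
    for row in csv.DictReader(f):
        inj, fat = estimate_casualties(float(row["lat"]), float(row["lon"]),
                                       yield_kt=1000, airburst=True)
        injuries += inj    # note: people in overlapping blast areas are
        fatalities += fat  # counted once per detonation, not once overall

print(f"{injuries:,} injuries, {fatalities:,} fatalities")
```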

    The results were a bit staggering. I mean, I assumed it would be a large number. But they are really large numbers. Some of this is because the casualty script is double-counting “victims” when they are inside the relevant blast areas of multiple detonations. At the moment, there’s no easy way around that (even for a small number of detonations, keeping track of who is already “dead” would require a lot of time and processing power, and to do it on the scale of a thousand is just not possible with the way it is set up currently).
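    To see why avoiding that is expensive, consider what de-duplication would actually require: some gridded representation of population in which each cell can only be counted once. A purely hypothetical toy version (nothing like NUKEMAP's actual code):

```python
# Toy illustration of de-duplicated casualty counting on a population grid.
# Each cell is "spent" at most once, which means tracking every cell ever
# touched -- across ~1,154 detonations on a fine global grid, that is a
# lot of bookkeeping compared to summing per-detonation ring populations.
from math import hypot

grid = {(x, y): 100 for x in range(200) for y in range(200)}  # toy population
detonations = [(50, 50), (52, 51), (120, 80)]                 # toy ground zeros
RADIUS = 10                                                   # toy lethal radius

spent, fatalities = set(), 0
for gx, gy in detonations:
    for (x, y), pop in grid.items():              # full scan per detonation
        if hypot(x - gx, y - gy) <= RADIUS and (x, y) not in spent:
            spent.add((x, y))                     # count each cell only once
            fatalities += pop

print(f"{fatalities:,} fatalities, with no double-counting")
```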


    An example of an area where a lot of “double-counting” is taking place — St. Petersburg. The circles show various pressure rings for 1 Mt weapons, which are used by NUKEMAP to calculate casualties. Maybe just a little overkill…

    On the other hand, the casualty estimate does not take into account fallout-related casualties, or the long-term casualties caused by the destruction of so much infrastructure. The target list also doesn’t tell us how many targets were, in fact, targeted redundantly with multiple weapons — the idea that it might have been “one nuke, one target” is definitely an incorrect one. Even before World War II had completely ended, US planners for nuclear war against the Soviet Union understood that not every bomb would make it to a target, and so planned for multiple weapons to be targeted on each. So “double-killing” those people in some of these locations is probably not so wrong. It likely isn’t all that crazy to think of these numbers as back-of-the-envelope estimates for what would result if you waged this kind of attack today (which is not to imply that the US would necessarily do such a thing). But I don’t want anyone to think I am implying any kind of real certainty here. I would, in fact, be dubious of anyone, at any time, implying a lot of certainty about these kinds of things, because we (fortunately) lack very much first-hand experience with this kind of “data,” outside of the results at Hiroshima and Nagasaki, which were in many ways particular to their time and place.

    Casualty figures, of course, require making assumptions about the size of the nuclear weapons used, as well as the fuzing settings (airbursts generate far less downwind fallout in comparison to surface bursts, but they can greatly increase the casualties for people in civilian structures). For 1956, there would have been a “mix” of yields and types of weapons. We don’t have data on that to my knowledge. As a simplifying assumption, I just ran the casualty calculation with a number of yields, and with both surface burst and airbursts (optimized to increase the range of the 5 psi blast area) options. For the sake of space and avoiding the appearance of false precision, I have rounded them to their nearest million below:

    (all figures in millions)

                   surface burst           airburst
               injuries  fatalities   injuries  fatalities
    10 Mt        259        239         517        304
    5 Mt         210        171         412        230
    1 Mt         120         70         239        111
    500 kt        89         46         185         77
    100 kt        39         16          94         30
    50 kt         25         10          66         19

    At first I thought some of these numbers just seemed fantastical. Russia today only has a population of 140 million or so. How could we get up to numbers so high? Some of this is, again, because of double-counting, especially with the very big bombs — a 10 Mt bomb on Moscow kills 5.5 million people and injures 4 million, by NUKEMAP’s estimate, which combined is about 70% of the 13 million people in the area of the 1 psi blast radius of such a weapon. (If that seems high, remember that a 10 Mt bomb’s effects go well outside the city of Moscow itself — the greater Moscow metro region is about 16 million people total.) Since a large number of nukes were targeted around Moscow, that’s a lot of double counting, especially when you use such high-yield weapons.

    So the very-big numbers I would take with a very hefty grain of salt. NUKEMAP’s casualty estimator really isn’t meant for guessing multiple, overlapping damage areas. At best, it attempts to give back-of-the-envelope estimates for single detonations. Separately, the US arsenal at the time was around 10,000 megatons worth of destructive power. So they obviously couldn’t have been (and wouldn’t have been) all multi-megaton monsters. But, all the same, I don’t think it’s at all improbable that the multi-megaton monsters that were in the arsenal would have been targeted at heavily populated regions, like Moscow. Especially given the fact that, again, there would have been multiple nukes aimed at each target.

    I also thought it would be interesting to take the casualties and break them apart by region. Here’s where I found some really startling results, using a 1 Megaton (1,000 kiloton) airburst as my “model” detonation, again in millions:

                           injuries   fatalities
    Soviet Union              111         55
    Warsaw Pact                23         10
    China + North Korea       104         46
    Total                     239        111

    To make this point more clearly: 820 of the 1,154 targets were inside the Soviet Union proper. They are responsible for 48% of the casualties in the above scenario. Non-Soviet countries in the Warsaw Pact (Eastern Europe, more or less) were responsible for “only” 188 of the targets, and 9% of the casualties. China and North Korea had only 146 of the targets, but were accountable for 43% of the casualties. Which is to say, each “detonation” in the USSR produced on average around 203,000 casualties, each one in Eastern Europe around 176,000, and each one in Asia over 1 million. That’s kind of bananas.
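    For anyone who wants to check that arithmetic, here it is spelled out (a sketch using the rounded regional figures from the table above, in millions, and the target counts from the list; the rounding is why the per-target averages come out very slightly different from the prose):

```python
# Per-region casualty shares and per-target averages (1 Mt airbursts).
regions = {
    "Soviet Union":        (820, 111, 55),   # targets, injuries, fatalities
    "Warsaw Pact":         (188, 23, 10),
    "China + North Korea": (146, 104, 46),
}
total = sum(inj + fat for _, inj, fat in regions.values())  # ~350 million
for name, (targets, inj, fat) in regions.items():
    casualties = inj + fat
    print(f"{name}: {100 * casualties / total:.0f}% of casualties, "
          f"~{casualties * 1e6 / targets:,.0f} casualties per target")
```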

    Now, these use modern (2011) population density figures, not those of 1956. But it’s still a pretty striking result. Why would this be? Partially because the Asian targets seem to be primarily in large cities. Many of the Soviet targets, by contrast, are of pretty isolated areas — remote military airfields in some cases — that only kill a few hundred people. It would make for a very interesting study to really get into the “weeds” of this target plan, and to sort out — systematically — what exactly was being targeted in each location, as best as we can. If we did that, we’d possibly be able to guess at whether an airburst or a surface burst was called for, and potentially even be able to judge target priorities, though the “bomb-as-you-go” method of attack used in the 1950s probably means that even low-priority targets would get nuked early on if they were on a path to a higher-priority one.


    Total megatonnage of the US nuclear stockpile — nearly 10 gigatons by 1956, climbing to a peak of over 20 gigatons in 1959. Source: US Department of Energy

    What does this exercise tell us? Two things, in my mind. One, this 1956 target list is pretty nuts, especially given the high-yield characteristics of the US nuclear stockpile in 1956. This strikes me as going a bit beyond mere deterrence, the consequence of letting military planners have just a little bit too much freedom in determining what absolutely had to have a nuclear weapon placed on it.

    The second is to reiterate how amazing it is that this got declassified in the first place. When I had heard about it originally, I was pretty surprised. The US government usually considered target information to be pretty classified, even when it is kind of obvious (we target Russian nuclear missile silos? You don’t say…). The reason, of course, is that if you can go very closely over a target list, you can “debug” the mind of the nuclear strategist who made it — what they thought was important, what they knew, and what they would do about their knowledge. Though times have changed a lot since 1956, a lot of those assumptions are probably still at least partially valid today, so they tend to keep that sort of thing under wraps. These NUKEMAP “experiments” are quick and cheap approaches to making sense of this new information, and as the creator of the NUKEMAP, let me say that I think “quick and cheap” is meant as a compliment. To analyze something quickly and cheaply is to spark new ideas quickly and cheaply, and you can always subject your new ideas to more careful analytical scrutiny once you’ve had them. I hope that someone in the future will give this target data some real careful attention, because I have no doubt that it still contains many insights and surprises.

    Notes
    1. Because there has been some confusion about what this list is, I want to clarify a bit here. It is a “Weapons Requirements Study,” which is to say, it’s the way in which the US Air Force Strategic Air Command said, “here are all the things we might want to nuke, if we could.” The might and if we could parts are important, because they are what makes this difference from an actual war plan, which is to say, “what we would actually do in the event of a nuclear war.” The might means that not necessarily all of these targets would have been nuked in any given war situation, but indicates the sorts of things that they considered to be valid targets. The if we could means that this would require more weapons than they could afford to use at the time. In 1956, the US stockpile contained “only” 3,692 warheads. This target list is meant to imply that it needed to be bigger, that is, that by 1959 they would want more weapons to be produced. So by 1959 they had 12,298 weapons — more than three times as many. Why so many weapons for the same number of targets? Because, as noted in the post below, the idea of one-nuke, one-target isn’t how they planned it. Anyway, the long and short of it is, this isn’t exactly the same thing as a war plan, much less for 1956. It may over-count, but it also probably under-counts (because it ignores tactical use, targets of opportunity, the overkill that would occur when targets were multiple-targeted, etc.). But it does give you a flavor of the war planning that was going on, and is probably closer to that than any other document that has been released for this time. As for how that would affect what would have happened in 1956, it’s hard to say, but this is in line with many of the other things we know about nuclear war planning at that time, so I think it is a fair illustration.
    2. I think my students were probably the most happy that FLI digitized all of this target data because if they hadn’t, I was going to force my undergrads who take my data visualization course to do it in the name of a practical example of what “crowdsourcing” can mean.
    3. In some cases, OpenWeatherMap did not have information about some of the coordinates. In such cases, the script averaged the missing point from several surrounding points, weighting them by distance. The results it gives in doing this seem plausible enough. For each time I ran it, there were only about two or three missing pieces of data.
    4. For those who want to look at the dataset themselves, the CSV file that the visualization uses is available here.