Secrecy, verification, and purposeful ignorance

The history of nuclear secrecy is an interesting topic for a lot of reasons, but one of the wonkier ones is that it inverts the typical studies traditionally done in the history of science. The history of science is usually a study of how knowledge is made and then circulates; a history of secrecy is about how knowledge is made and then is not circulated. Or, at least, its non-circulation is attempted, with varying degrees of success. These kinds of studies are still not the “norm” among historians of science, but in recent years they have become more common, both because historians have come to understand that secrecy is often used by scientists for various “legitimate” reasons (e.g., preserving priority), and because the study of deliberately-created ignorance has become a major theme as well (e.g., Robert Proctor coined the term agnotology to describe the deliberate actions of the tobacco industry to foster ignorance and uncertainty regarding the link between lung cancer and cigarettes).

The USS Nautilus with a nice blob of redaction. No reactor for you! From a 1951 hearing of the Joint Committee on Atomic Energy — apparently the reactor design is still secret even today?

What I find particularly interesting about secrecy, as a scholar, is that it is like a sap or a glue that starts to stick to everything once you introduce a little bit of it. Try to add a little secrecy to your system and pretty soon more secrecy is necessary — it spreads. I’ve remarked on this some time back, in the context of Los Alamos designating all spheres as a priori classified: once you start down the rabbit-hole, it becomes easier and easier for the secrecy system to become more entrenched, even if your intentions are completely pure (and, of course, more so if they are not).

In this vein, I’ve for a while been struck by the work of some friends of mine in the area of arms control known as “zero-knowledge proofs” (and the name alone is an attention-grabber). A zero-knowledge proof is a concept derived from cryptography (e.g., one computer proves to another that it knows a secret, but doesn’t give the secret away in the process), but as applied to nuclear weapons, it goes roughly as follows: Imagine a hypothetical future where the United States and Russia have agreed to have very low numbers of nuclear warheads, say in the hundreds rather than the current thousands. They want to mutually verify that each other’s stockpiles are as they say they are. So they send over inspectors to count each other’s warheads.

Already this involves some hypotheticals, but the real wrench is this: the US doesn’t want to give its nuclear design secrets away to the Russian inspectors. And the Russians don’t want to give theirs to the US inspectors. So how can either side verify that what it is looking at are actually warheads, and not, say, steel cans made to look like warheads, if it can’t take them apart?

Let’s imagine you had a long line of purported warheads, like the W80, shown here. How can you prove there’s an actual nuke in each can, without knowing or learning what’s in the can? The remarkable W80s-in-a-bunker image is from a blog post by Hans Kristensen at Federation of American Scientists.

Now you might ask why people would fake having warheads (because that would make their total number of warheads seem higher than it was, not lower), and the answer is usually about verifying warheads put into a queue for dismantlement. So your inspector would show up to a site and see a bunch of barrels and would be told, “all of these are nuclear warheads we are getting rid of.” So if those are not actually warheads then you are being fooled about how many nukes they still have.

You might know how much a nuclear weapon ought to weigh, so you could weigh the cans. You might take some radiation readings to figure out whether they are giving off more or less what you would expect from a warhead. But remember that your inspector doesn’t actually know the configuration inside the can: they aren’t allowed to know how much plutonium or uranium is in the device, what shape it is in, or what configuration it is in. So this puts limitations both on what you’re allowed to know beforehand and on what you’re allowed to measure.

Now, amusingly, I had written all of the above a few weeks ago, with a plan to publish this issue as its own blog post, when one of the groups came out with a new paper and I was asked whether I would write about it for The New Yorker’s science/tech blog, Elements. So you can go read the final result, to learn about some of the people (Alexander Glaser, Sébastien Philippe, and R. Scott Kemp) who are doing work on this: “The Virtues of Nuclear Ignorance.” It was a fun article to write, in part because I have known two of the people for several years (Glaser and Kemp) and they are curious, intelligent people doing really unusual work at the intersection of technology and policy.

I won’t re-describe their various methods of doing it here; read the article. If you want to read their original papers (I have simplified their protocols a bit in my description), you can read them here: the original Princeton group paper (2014), the MIT paper from earlier this year (2016), and the most recent paper from the Princeton group with Philippe’s experiment (2016).

In the article, I use a pine tree analogy to explain the zero-knowledge proof. Kemp provided that. There are other “primers” on zero-knowledge proofs on the web, but most of them are, like many cryptographic proofs, not exactly intuitive, everyday scenarios. One of the ones I considered using in the article was a famous one regarding a game of Where’s Waldo:

Imagine that you and I are looking at a page in one book of Where’s Waldo. After several minutes, you become frustrated and declare that Waldo can’t possibly be on the page. “Oh, but he is,” I respond. “I can prove it to you, but I don’t want to take away the fun of you finding him for yourself.” So I get a large piece of paper and cut out a tiny hole in exactly the shape of Waldo. While you are looking away, I position it so that it obscures the page but reveals the striped wanderer through the hole. That is the essence of a zero-knowledge proof — I prove I’m not bluffing without revealing anything new to you.

I found Waldo in the Battle of Troy. How can I prove it without giving his location away? A digital version of the described “proof”: I found his little head and cut it out with Photoshop. In principle, you now know I really found him, without knowing where he is… but might that face be from a different Waldo page? (Image from Where’s Waldo)

A true zero-knowledge proof, though, would also rule out the possibility of faking a positive result, and this is where the Waldo example fails: I might not know where Waldo is on the page we are mutually looking at, but while you are not looking, I could set up the Waldo-mask on another page where I do know he is hiding. Worse yet, I could carry with me a tiny Waldo printed on a tiny piece of paper, just for this purpose. This might sound silly, but if there were stakes attached to my identification of Waldo, cheating would become expected. In the cryptologic jargon, any actual proof needs to be both “complete” (a true claim can always be proven) and “sound” (a false claim cannot be passed off as true). Waldo doesn’t satisfy both.
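For the cryptographically curious, the flavor of an interactive proof that is both complete and sound can be sketched in a few lines of code. The sketch below is a toy Schnorr identification protocol, a standard cryptographic example (not anything from the warhead-verification papers), with small illustrative parameters that are nowhere near secure:

```python
# Toy Schnorr identification protocol: the prover convinces the verifier
# that it knows the secret x behind y = g^x mod p, without revealing x.
# The parameters are illustrative small numbers, not secure ones.
import random

p = 2039   # safe prime: p = 2q + 1
q = 1019   # prime order of the subgroup generated by g
g = 4      # generator of the order-q subgroup (a quadratic residue mod p)

x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public key, shared with the verifier

def prove_round(x):
    """One round of the protocol: commitment, challenge, response."""
    r = random.randrange(1, q)
    t = pow(g, r, p)              # prover's commitment
    c = random.randrange(0, q)    # verifier's random challenge
    s = (r + c * x) % q           # prover's response
    return t, c, s

def verify(y, t, c, s):
    # Accepts iff g^s == t * y^c (mod p). Only someone who knows x can
    # answer a random challenge c consistently with the commitment t.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, c, s = prove_round(x)
print(verify(y, t, c, s))   # True: proof accepted, x never disclosed
```

The transcript (t, c, s) reveals nothing about x itself, yet a prover who does not know x can only pass by guessing the challenge in advance, which is the "soundness" the Waldo trick lacks.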

Nuclear weapons issues have been particularly fraught with verification problems. The first attempt to rein in nuclear proliferation, the United States’ Baruch Plan of 1946, failed in the United Nations in part because it was clear that any meaningful plan to prevent the Soviet Union from developing nuclear weapons would involve a freedom of movement and inspection that was fundamentally incompatible with Stalinist society. The Soviet counter-proposal, the Gromyko Plan, was essentially a verification-free system, not much more than a pledge not to build nukes, and was subsequently rejected by the United States.

The Nuclear Non-Proliferation Treaty has binding force, in part, because of the inspection systems set up by the International Atomic Energy Agency, who physically monitor civilian nuclear facilities in signatory nations to make sure that sensitive materials are not being illegally diverted to military use. Even this regime has been controversial: much of the issues regarding Iran revolve around the limits of inspection, as the Iranians argue that many of the facilities the IAEA would like to inspect are militarily secret, though non-nuclear, and thus off-limits.

From the Nature Communications paper, showing (at top) the principle of what a 2D example would look like (with Glaser’s faux Space Invader). The complement is the “preload” setting mentioned in my New Yorker article, so that, when combined with the new reading, it ought to result in a virtually null reading. At bottom, the setup of the proof-of-concept version, with seven detectors.
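The arithmetic behind the “preload” trick is simple enough to sketch. In this toy version, all of the numbers are invented for illustration (the real experiment uses superheated-droplet detectors and neutron radiography, not simple count arithmetic): the host preloads each detector with the complement of the expected transmission image, so a genuine item produces a flat, information-free reading, while a fake stands out.

```python
# Toy sketch of the "preload" idea: the inspected party preloads each
# detector with the complement of the expected transmission image, so a
# genuine warhead yields a flat ("null") reading that reveals nothing.
# All numbers here are invented for illustration.

MAX = 100  # arbitrary full-scale count for each detector

# Expected neutron transmission through a genuine warhead (secret,
# known only to the host), one value per detector:
template = [12, 55, 55, 80, 55, 55, 12]

# Host preloads each detector with the complement of the template:
preload = [MAX - t for t in template]

def inspect(item_transmission):
    """Add the measured transmission to the preloaded counts."""
    return [p + m for p, m in zip(preload, item_transmission)]

genuine = template                      # a real warhead matches the template
fake    = [12, 55, 55, 20, 55, 55, 12]  # a hollowed-out item transmits differently

print(inspect(genuine))  # flat reading: reveals nothing about the design
print(inspect(fake))     # deviation from flat flags the fake
```

The inspectors only ever see the summed counts: flat means genuine, non-flat means something is wrong, and in neither case does the reading expose the secret template itself.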

One historical example of the importance of verification comes from the Biological Weapons Convention of 1972. It contained no verification measures at all: the USA and USSR just pledged not to develop biological weapons (and the Soviets denied having a program at all, a flat-out lie). The United States had already unilaterally destroyed its offensive weapons prior to signing the treaty, though the Soviets long expressed doubt that all possible facilities had been eliminated. The US lack of interest in verification was partially because it suspected that the Soviets would object to any measures to monitor their work within their territory, but also because US intelligence agencies didn’t really fear a Soviet biological attack.

Privately, President Nixon referred to the BWC as a “jackass treaty… that doesn’t mean anything.” And as he put it to an aide: “If somebody uses germs on us, we’ll nuke ‘em.”1

But immediately after signing the treaty, the Soviet Union launched a massive expansion of its secret biological weapons work. Over the years, it applied the newest genetic-engineering techniques to the effort of making whole new varieties of pathogens. Years later, after all of this had come to light and the Cold War had ended, researchers asked the former Soviet biologists why the USSR had violated the treaty. Some said they had gotten indications from intelligence officers that the US was probably doing the same thing: if it weren’t, what was the point of a treaty without verification?

A bad verification regime, however, can also produce false positives, which can be just as dangerous. Consider Iraq, where the United States set up a context in which it was very hard for the Iraqi government to prove that it was not developing weapons of mass destruction. It was easy to imagine ways in which they might be cheating, and this, among other factors, drove the push for the disastrous Iraq War.

In between these extremes are the more political considerations: the possibility of cheating on treaties invites criticism and strife. It gives ammunition to those who would oppose treaties and diplomacy in general. Questions about verification have plagued American political discourse about the US-Iranian nuclear deal, including the false notion that Iran would be allowed to inspect itself. If one could eliminate any technical bases for objections, it has been argued, then at least those who opposed such things on principle would not be able to find refuge in them.

The setup from Kemp, et al. The TAI is the Treaty Accountable Item, i.e. the warhead you are testing.

This is where the zero-knowledge protocols could come in. What’s interesting to me, as someone who studies secrecy, is that if the problem of weapon design secrecy were removed, this whole system would be unnecessary. It is, on some level, a contortion: an elaborate work-around to avoid sharing, or learning, any classified information. Do American scientists really think the Russians have any warhead secrets that we don’t know, or vice versa? It’s possible. A stronger argument for continued secrecy is that there are ways that an enemy’s weapons could be rendered ineffective if their exact compositions were known (neutrons, in the right quantity, can “kill” a warhead, causing its plutonium to heat and expand, and causing its chemical high-explosives to degrade; if you knew exactly what level of neutrons would kill a nuke, it would play into strategies of trying to defend against a nuclear attack).

And, of course, that hypothetical future would include actors other than the United States and Russia: the other nuclear powers of the world are less likely to want to share nuclear warhead schematics with each other, and an ideal system could be used by non-nuclear states involved in inspections as well. But even if everyone did share their secrets, such verification systems might still be useful, because they would eliminate the need for trust altogether, and trust is never perfect.

A little postscript on the article: I want to make sure to thank Alex Glaser, Sébastien Philippe, and R. Scott Kemp for devoting a lot of their weekends to making sure I actually understood the underlying science of their work to write about it. Milton Leitenberg gave me a lot of valuable feedback on the Biological Weapons Convention, and even though none of that made it into the final article, it was extremely useful. Areg Danagoulian, a colleague of Kemp’s at MIT who has been working on their system (and who first proposed using nuclear resonance fluorescence as a means of approaching this question), didn’t make it into the article, but anyone seriously interested in these protocols should check out his work as well. And of course the editor I work with at The New Yorker, Anthony Lydgate, should really get more credit than he does for these articles, and on this one in particular managed to take the unwieldy 5,000-word draft I sent him and chop it down to 2,000 words very elegantly. And, lastly, something amusing — I noticed that Princeton Plasma Physics Laboratory released a film of Sébastien talking about the experiment. Next to him is something heavily pixellated out… what could it be? It looks an awful lot like a copy of Unmaking the Bomb, a book created by Glaser and other Princeton faculty (and I made the cover)…

Notes
  1. On the “jackass treaty,” see Milton Leitenberg and Raymond Zilinskas, The Soviet Biological Weapons Program: A History (Harvard University Press, 2012), quoted on 537. On “we’ll nuke ’em,” the aide was William Safire. For his account, see William Safire, “Iraq’s Tons of Germs,” New York Times (13 April 1995).

Silhouettes of the bomb

You might think of the explosive part of a nuclear weapon as the “weapon” or “bomb,” but in the technical literature it has its own amusingly euphemistic name: the “physics package.” This is the part of the bomb where the “physics” happens — which is to say, where the atoms undergo fission and/or fusion and release energy measured in tons of TNT equivalent.

Drawing a line between that part of the weapon and the rest of it is, of course, a little arbitrary. External fuzes and bomb fins are not usually considered part of the physics package (the fuzes are part of the “arming, fuzing, and firing” system, in today’s parlance), but they’re of course crucial to the operation of the weapon. We don’t usually consider the warhead and the rocket propellant to be exactly the same thing, but they both have to work if the weapon is going to work. I suspect there are many situations where the line between the “physics package” and the rest of the weapon is a little blurry. But, in general, the distinction seems to be useful for the weapons designers, because it lets them compartmentalize out concerns or responsibilities with regards to use and upkeep.

Physics package silhouettes of some of the early nuclear weapon variants. The Little Boy (Mk-1) and Fat Man (Mk-3) are based on the work of John Coster-Mullen. All silhouette portraits are by me — some are a little impressionistic. None are to any kind of consistent scale.

The shape of nuclear weapons was from the beginning one of the most secret aspects about them. The casing shapes of the Little Boy and Fat Man bombs were not declassified until 1960. This was only partially because of concerns about actual weapons secrets — by the 1950s, the fact that Little Boy was a gun-type weapon and Fat Man was an implosion weapon, and their rough sizes and weights, were well-known. They appear to have been kept secret for so long partly because the US didn’t want to draw too much attention to the bombing of the cities, and partly because it didn’t want to annoy or alienate the Japanese.

But these shapes can be quite suggestive. The shapes and sizes put limits on what might be going on inside the weapon, and how it might be arranged. If one could have seen, in the 1940s, the casings of Fat Man and Little Boy, one could pretty easily conjecture about their function. Little Boy definitely has the appearance of a gun-type weapon (long and relatively thin), whereas Fat Man clearly has something else going on with it. If all you knew was that one bomb was much larger and physically rounder than the other, you could probably, if you were a clever weapons scientist, deduce that implosion was probably going on. Especially if you were able to see under the ballistic casing itself, with all of those conspicuously-placed wires.

In recent years we have become rather accustomed to seeing pictures of retired weapons systems and their physics packages. Most of them are quite boring, variations on a few themes. You have the long barrels that look like gun-type designs. You have the spheres or spheres-with-flat-ends that look like improved implosion weapons. And then you have the bullet-shaped sphere-attached-to-a-cylinder that seems indicative of the Teller-Ulam design for thermonuclear weapons.

Silhouettes of compact thermonuclear warheads. Are the round ends fission components, or spherical fusion components? Things the nuke-nerds ponder.

There are a few strange things in this category, that suggest other designs. (And, of course, we don’t have to rely on just shapes here — we have other documentation that tells us about how these might work.) There is a whole class of tactical fission weapons that seem shaped like narrow cylinders, but aren’t gun-type weapons. These are assumed to be some form of “linear implosion,” which somewhat bridges the gap between implosion and gun-type designs.

All of this came to mind recently for two reasons. One was the North Korean photos that went around a few weeks ago of Kim Jong-un and what appears to be some kind of component of a ballistic case for a miniaturized nuclear warhead. I don’t think the photos tell us very much, even if we assume they are not completely faked (and with North Korea, you never know). If the weapon casing is legit, it looks like a fairly compact implosion weapon without a secondary stage (this doesn’t mean it can’t have some thermonuclear component, but it puts limits on how energetic it can probably be). Which is kind of interesting in and of itself, especially since it’s not every day that you get to see even putative physics packages of new nuclear nations.

Stockpile milestones chart from Pantex’s website. Lots of interesting little shapes.

The other reason it came to mind is a chart I ran across on Pantex’s website. Pantex was more or less a nuclear-weapons assembly factory during the Cold War, and is now a disassembly factory. The chart is a variation on one that has been used within the weapons labs for a few years now, as my friend and fellow nuclear wonk Stephen Schwartz pointed out on Twitter, and shows the basic outlines of various nuclear weapons systems through the years. (Here is a more up-to-date one from a 2015 NNSA presentation, but the image has more compression and is thus a bit harder to see.)

For gravity bombs, they tend to show the shape of the ballistic cases. For missile warheads, and more exotic weapons (like the “Special Atomic Demolition Munitions,” basically nuclear land mines — is the “Special” designation really necessary?), they often show the physics package. And some of these physics packages are pretty weird-looking.

Some of the weirder and more suggestive shapes in the chart. The W30 is a nuclear land mine; the W52 is a compact thermonuclear warhead; the W54 is the warhead for the Davy Crockett system; and the W66 is a low-yield thermonuclear warhead used on the Sprint missile system.

A few that jump out as especially odd:

  • In the Pantex version (but not the others), the W59 is peculiar in that it has an incorrectly-filled circle at the bottom of it. Is the fill error meaningful, or just a mistake? Can one read too much into a few blurred pixels? I wonder if this is an artifact of the vectorization process that went into making these graphics, and a little more of an indication of the positioning of things than was intended.
  • The W52 has a strange appearance. It’s not clear to me what’s going on there.
  • The silhouette of the W30 is a curious one (“worst Tetris piece ever” quipped someone on Twitter), though it is of an “Atomic Demolition Munition” and likely just shows some of the peripheral equipment to the warhead.
  • The extreme distance between the spherical end (primary?) and the cylindrical end (secondary?) of the W50 is pretty interesting.
  • The W66 warhead is really strange — a sphere with two cylinders coming out of it. Could it be a “double-gun,” a gun-type weapon that decreases the distance necessary to travel by launching two projectiles at once? Probably not, given that it was supposed to have been thermonuclear, but it was an unusual warhead (very low-yield thermonuclear) so who knows what the geometry is.

There are also a number of warheads whose physics packages have never been shown, so far as I know. The W76, W87, and W88, for example, are primarily shown as re-entry vehicles (the “dunce caps of the nuclear age,” as I seem to recall reading somewhere). The W76 has two interesting representations floating around: one that gives no real sense of the size or shape of the physics package but indicates its top and bottom extremities relative to other hardware in the warhead, and another that portrays a very thin physics package that I doubt is actually representative (because if they had a lot of extra space, I think they’d have used it).1

Some of the more simple shapes — triangles, rectangles, and squares, oh my!

What I find interesting about these secret shapes is that on the one hand, it’s somewhat easy to understand, I suppose, the reluctance to declassify them. What’s the overriding public interest in knowing what shape a warhead is? It’s a hard argument to make. It isn’t going to change how we vote or how we fund weapons or anything else. And one can see the reasons for keeping them classified — the shapes can be revealing, and these warheads likely use many little tricks that allow them to put that much bang into so compact a package.

On the other hand, there is something to the idea, I think, that it’s hard to take something seriously if you can’t see it. Does keeping even the shape of the bomb out of the public domain impact participatory democracy in ever so small a way? Does it make people less likely to treat these weapons as real objects in the world, instead of as metaphors for the end of the world? Well, I don’t know. It does make these warheads seem a bit more out of reach than the others. Is that a compelling reason to declassify their shapes? Probably not.

As someone on the “wrong side” of the security fence, I do feel compelled to search for these unknown shapes — a defiant compulsion to see what I am not supposed to see, perhaps, in an act of petty rebellion. I suspect they look pretty boring — how different in appearance from, say, the W80 can they be? — but the act of denial makes them inherently interesting.

Notes
  1. One amusing thing is that several sites seem to have posted pictures of the arming, fuzing, and firing systems of these warheads under the confusion that these were the warheads. They are clearly not — they are not only too small in their proportions, but they match up exactly to declassified photos of the AF&F systems (they are fuzes/radars, not physics packages).