Wednesday, November 12, 2008

Professor Bridgman's Introduction to the Physics of Nuclear Weapons Effects



The feeling you get when you open and read Dr Charles J. Bridgman's Introduction to the Physics of Nuclear Weapons Effects is the same amazement that you get when you read Glasstone and Dolan's The Effects of Nuclear Weapons, Brode's Review of Nuclear Weapons Effects, or Dolan's Capabilities of Nuclear Weapons.

I read Glasstone and Dolan's book in 1988 when aged 16 on the recommendation of the local Emergency Planning Officer, Brode's paper at the university library in 1990, and Dolan's manual in 1993 after being told by the library staff at AWE Aldermaston that it had been declassified (I had requested the earlier TM 23-200 Capabilities of Atomic Weapons which had been cited in various Home Office Scientific Advisory Branch civil defence reports). So it is quite a while since I saw something as comprehensive as Dr Bridgman's book on this subject!



The most surprising thing about most of the published nuclear weapons effects literature (nearly all of it originating from Glasstone's book) is the theoretical nature of the information provided. I had expected something a lot briefer but based more directly on nuclear test data, and was a little disappointed that the amount of nuclear test data in the book was relatively limited, and that most of the graphs were just curves without any data points shown: the reader has to trust the publication and the editors. In addition, the different kinds of nuclear explosion (underwater, surface burst, air burst, high altitude) were not dealt with separately: instead, bits and pieces about each kind of burst are scattered through the chapters, each of which is concerned with one type of effect (blast, thermal, nuclear radiation, EMP, etc.). This misleadingly gives the general reader the impression that all kinds of nuclear explosions produce similar effects, with merely some quantitative differences in the relative magnitudes of those different effects. Nothing could be further from reality: just compare an underwater burst to a high altitude burst!



I think that to improve public understanding of nuclear weapons effects for civil defence purposes, a handbook is needed which has the effects phenomenology (not the damage criteria) organized by burst type (chapter 1: space bursts, chapter 2: air bursts, chapter 3: surface bursts, chapter 4: underground bursts, chapter 5: underwater bursts) so there can be no confusion. I don't think that this will involve much repetition because the blast and thermal effects of air bursts are quite different to those of surface bursts (different blast wave waveforms and different thermal radiation pulses), so there is no overlap. In addition, while the physics needs to be explained concisely as done by Dr Bridgman's book, there is a need for all theoretical prediction graphs given to be justified by the incorporation of nuclear test data points, so the user can judge the reliability of the source of the predictions.


First, it's a book that's more important than Glasstone and Dolan's 1977 edition, and about as important for civil defence as Dolan's Capabilities of Nuclear Weapons or Brode's Review of Nuclear Weapons Effects. The reason is that it is quantitative. Professor Bridgman doesn't analyze all the nuclear test data, but he does provide most of the theoretical physics equations. To the extent that Bridgman's book is based upon solid physical laws and solid facts - provided that the equations are applied with the right assumptions and that the mechanism they are applied to is the most important mechanism for the effect being considered - it is valuable and reliable.


Professor Bridgman graduated from the U.S. Naval Academy in 1952, did an MSc in nuclear engineering at North Carolina State University in 1958, and then did a PhD in nuclear engineering there in 1963. He is Professor Emeritus of Nuclear Engineering at the Department of Engineering Physics, U.S. Air Force Institute of Technology (AFIT), Wright-Patterson Air Force Base, Ohio. His research specialism is the effects of nuclear weapons, and he has published papers on fallout, radiation effects on electronics and sunlight attenuation in nuclear winter.



His book 'Introduction to the Physics of Nuclear Weapons Effects', 1st edition, is a 535-page hardbound textbook published by the U.S. Defense Threat Reduction Agency in July 2001 as a single volume, which I bought on the internet at www.Amazon.com from a seller in America. In December 2008, Volume 2 of a revised edition of the book (252 pages) was published, containing chapters 2, 3 and 4; these chapters deal in mathematical detail with the physics design of nuclear weapons, such as fission efficiency calculations as a bomb core expands and loses neutrons, compression of nuclear cores by chemical explosive implosion systems, tritium boosting of fission reactions, and the detailed physics of Teller-Ulam fusion systems. A revision of the weapons effects chapters (1 and 5-15) is currently in preparation and will be issued separately as Volume 1 when completed.


The first edition is not secret but is marked 'Distribution Limited' on the dust wrapper, front hard cover and on the title page: 'Distribution of this book is authorized to U.S. Government agencies and their Contractors; Administrative or Operational Use, July 2001. Other requests for this book shall be referred to Director, Defense Threat Reduction Agency, 8725 John J. Kingman Road, Ft. Belvoir, VA 22060-6201.'

As a result, I will not be reviewing the mathematical physics of chapters 2, 3 and 4 of the book, pages 72-195 of the first edition, which deal with the nuclear explosive details themselves. Those chapters, while unclassified, contain extensive detailed calculations of (a) the neutron multiplication factors in plutonium and uranium spheres of various sizes and densities (implosion compressions), (b) the effect of neutron reflectors (e.g., beryllium) on the fissile core behaviour, (c) the calculation of 'alpha' (the neutron multiplication rate of a fission reaction, measured by the time between successive fission 'generations'), (d) the implosive shock pressure needed to compress metallic uranium and plutonium in various kinds of implosion weapons, (e) the effect of kinetic disassembly and fuel burn-up on fission efficiency in a nuclear explosion, and (f) the calculation of fusion yields by the compression of fuel capsules using ablative X-ray radiation recoil from a fission bomb, and by the 'boosting' system whereby a small amount of fusion material in the centre of a fissile bomb core releases high energy neutrons which greatly increase the efficiency of the fission reactions. All of these topics are exactly the kind of thing I do not want to discuss in mathematical detail on this blog. The mathematical physics information in the book on these subject areas may not be enough to qualify someone to design the latest Los Alamos thermonuclear warhead, but it is certainly not the kind of thing anyone would want to make easily available to any terrorist or rogue nation which already had access to fissile material. I'll avoid the details of those three chapters altogether here, since the interest is improved understanding of nuclear weapons effects for civil defence.

The front flap of the dust wrapper states that the book evolved from the class notes for courses given to graduate students at AFIT:

'The notes were motivated by the lack of a textbook covering all of the effects of nuclear weapons. The well known Effects of Nuclear Weapons by Glasstone and Dolan offers complete coverage but, by design, does not develop the physical and mathematical modelling underlying those effects. If Glasstone and Dolan were regarded as "Effects 101", then this book is "Effects 201".

'One chapter is devoted to each of the following weapon effects: X-rays, thermal, air blast, underground shock, under water shock, nuclear radiation, the electromagnetic pulse, residual radiation (fall-out), dust and smoke, and space effects. ... Empirical [non-theoretical, data generalizing] formulae are avoided as much as possible ...

'This book complements the Handbook of Nuclear Weapons Effects: Calculational Tools Abstracted from DSWA's Effects Manual One (EM-1) [Defense Special Weapons Agency, Alexandria, VA, September 1996] edited by John Northrop. That handbook is a collection of methods and data for predicting nuclear weapon free field intensities and specific target responses. The present book develops the theory behind those calculations found in the handbook.'

The back flap of the dust wrapper states:

'Charles J. Bridgman ... was posted to the Armed Forces Special Weapons Project at Sandia Base where he trained as an atomic weapons officer. He was assigned to the Strategic Air Command as a Nuclear Officer responsible for the Mark 5, 6 and 7 weapons and later was a member of the military assembly team to become operational on the Mark 17, the first operational thermonuclear weapon. Dr. Bridgman joined the AFIT faculty in 1959 as an Air Force Captain. In 1963 he became a civilian member of the Department of Engineering Physics. He was appointed professor and chair of the nuclear engineering committee in 1968. Dr. Bridgman chaired the nuclear engineering program for 20 years. During that time he led the conversion of the AFIT nuclear engineering program from a nuclear-power-reactor focused curriculum to a nuclear-effects focused curriculum. During those years, he was a frequent lecturer and consultant to the Air Force Weapons Laboratory at Kirtland AFB, New Mexico. ... He has chaired over 100 AFIT MS theses and 14 PhD dissertations. Dr. Bridgman served as the School Associate Dean for research from 1989 to 1997. He retired from that position in 1997 and continues, since that date, to maintain office hours at AFIT as a Professor Emeritus. Dr. Bridgman is a Fellow of the American Nuclear Society.'

The fifteen chapters are headed:

1: Atomic and Nuclear Physics Fundamentals (pages 1-71)
2: Fission Explosives: Neutronics (pages 72-134)
3: Fission Explosives: Thermodynamics (pages 135-169)
4: Fusion Explosives (pages 170-195)
5: X-Ray Effects (pages 196-236)
6: Thermal Effects (pages 237-270)
7: Blast Effects in Air (pages 271-304)
8: Underground Effects (pages 305-336)
9: Underwater Effects (pages 337-348)
10: Effects of Nuclear Radiation (pages 349-371)
11: The Electromagnetic Pulse (pages 372-397)
12: Residual Radiation (pages 398-452)
13: Dust and Smoke Effects (pages 453-464)
14: Space Effects (pages 465-492)
15: Survivability Analysis (pages 493-509)

The first impression you get is that the book is a more in-depth treatment of the subjects covered by Glasstone and Dolan, excluding the damage photographs.

In the Preface, Dr Bridgman writes: 'Some comments about Chapters 2, 3 and 4 are in order. The design of nuclear explosives in the United States is by law the exclusive province of the Department of Energy, not the Department of Defense. This book is intended for DoD students. The inclusion of Chapters 2, 3 and 4 is not intended to prepare students to become bomb designers. Those chapters would be woefully inadequate for that task. Rather the inclusion of these three chapters is based on the author's firm conviction that to understand the effects of a nuclear explosion, one has to understand the source. For this reason, Chapters 2, 3 and 4 consist of elementary models of the physical processes occurring during the fission and fusion explosion. They do not include design considerations.'

The Acknowledgements pages show that a long list of experts checked, contributed suggestions, and corrected the draft version of the book.

1: Atomic and Nuclear Physics Fundamentals (pages 1-71)

At first glance, this chapter looks like routine basic physics. However, a close reading shows that it is very carefully written, and physically deep as well as being more relevant to the subject matter of the book than the typical atomic and nuclear physics textbook.

On page 3, Figure 1-1, 'Energy partition in uranium as a function of temperature', shows that at temperatures below 100,000 K essentially 100% of the energy in uranium is in the kinetic energy of the material (ions and electrons). But at higher temperatures, the energy carried between those charges by radiation starts to become more important. At a temperature of 1,000,000 K (roughly 100 eV per particle), 1% of the total energy density is present as photon radiation and 99% is in the kinetic energy of moving matter. At 10,000,000 K (1 keV), 8% is in radiation and 92% in matter. At a temperature of about 32,000,000 K (3.2 keV), which is about twice the core temperature of the sun, there is an even split, with 50% of the energy in uranium plasma carried by x-ray radiation and 50% by the ions and electrons of the matter present. Finally, at 100,000,000 K (10 keV), only 9% of the energy density in the uranium is present in the kinetic energy of matter (particles), and 91% is present as x-rays.

This matter-radiation energy distribution occurs because of the Stefan-Boltzmann radiation law, whereby the amount of energy in radiation increases very rapidly as temperature increases: the radiant power is proportional to the fourth power of temperature. Dr Bridgman comments on page 3:

'Thus in temperature regions where the radiation constitutes a large fraction of the energy present, added yield appears mostly as additional radiation and results in only a fourth root increase in temperature. ... In summary, the presence of nuclear radiation from the nuclear reactions themselves, and even more important, the presence of electromagnetic radiation arising from the plasma nature of the exploded debris, make the nuclear explosion unlike a chemical explosion and like the interior of a star.'

Obviously, because of the small mass of a nuclear weapon fireball compared to the immense gravitating mass of the sun, gravitation cannot confine the nuclear weapon fireball as it confines the sun, so the fireball is free to explode.

On page 5, Dr Bridgman tabulates physical conversion factors for nuclear weapons effects:

1 cal = 4.186 J
1 bar = 100 kPa
1 kbar = 100 MPa
1 atmosphere = 1.013 bars
1 eV = 1.602*10^-19 J
1 kt = 10^12 cal

Page 6 is more interesting and gives the formula (equation 1-1) for the energy density of electromagnetic radiation in space as a function of electric and magnetic field strengths (albeit with an error: the term for magnetic energy density should be (1/2)*(mu_0)*H^2 or (1/2)*(1/mu_0)*B^2, but not (1/2)*[(mu_0)*H]^2 as printed, where mu_0 is the magnetic permeability of the vacuum, H is magnetic field strength and B is magnetic flux density, B = (mu_0)*H).
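
To make the corrected expression concrete, here is a minimal numerical check (my own sketch, not from the book) of the vacuum energy density u = (1/2)*epsilon_0*E^2 + (1/2)*B^2/mu_0 using standard SI constants; the 100 V/m plane-wave example is purely illustrative:

# Energy density of an electromagnetic field in vacuum (SI units):
#   u = (1/2)*epsilon_0*E^2 + (1/2)*B^2/mu_0
# i.e. the magnetic term is B^2/(2*mu_0) = (1/2)*mu_0*H^2, not (1/2)*[(mu_0)*H]^2.

import math

epsilon_0 = 8.854187817e-12   # F/m, vacuum permittivity
mu_0 = 4.0e-7 * math.pi       # H/m, vacuum permeability

def em_energy_density(E_field, B_field):
    """Return the vacuum electromagnetic energy density in J/m^3."""
    return 0.5 * epsilon_0 * E_field**2 + 0.5 * B_field**2 / mu_0

# Illustrative example: for a plane wave with E = 100 V/m, B = E/c, and the
# electric and magnetic terms then contribute equally to the energy density.
c = 1.0 / math.sqrt(epsilon_0 * mu_0)
E = 100.0                       # V/m (assumed, for illustration)
B = E / c                       # tesla
print(em_energy_density(E, B))  # ~8.9e-8 J/m^3, half electric, half magnetic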

Bridgman then divides the electromagnetic spectrum into classical (Maxwellian continuous electromagnetic waves) and quantum radiation by suggesting that waves of up to 10^16 Hz are classical Maxwellian waves, and those of higher frequency are quantum radiation. This is interesting because the mainstream view in physics generally holds that classical Maxwellian radiation is completely superseded by quantum theory, and is just an approximation.

It's always interesting to see classical radiation theory being defended as still valid for radio (long wavelengths, low frequencies). If classical and quantum theories of radiation are both correct and apply to different frequencies and situations, this contradicts the mainstream ideas. For example, is radio emission - by a large ensemble of accelerating conduction electrons along the surface of a radio transmitter antenna - physically comparable to the quantum emission of radiation associated with the leap of an electron between an excited state and the ground state of an atom? It's possible that the radio emission is the Huygens summation of lots of individual photons emitted by the acceleration of electrons along the antenna due to the applied electric field feed, but it's pretty obvious that when you analyze an individual electron being accelerated and thereby induced to emit radiation, you will get continuous (non-discrete) radiation if an acceleration is continuously applied as an oscillating electric field intensity, whereas you will get discrete photons emitted if you cause the electrons to accelerate in quantum leaps between energy states.

From quantum field theory, it's clear as Feynman explains in his book QED (Princeton University Press, 1985; see particularly Figure 65), the atomic (bound) electron is endlessly exchanging unobserved (virtual) photons with the nucleus and any other electrons. This exchange is what produces the electromagnetic force, and because the virtual photons are emitted at random intervals, the Coulomb force between small (unit) charges is chaotic instead of the smooth classical approximate law derived by Coulomb using large numbers of charges (where the quantum field chaos is averaged out by large numbers, like the way that the random ~500 m/s impacts of individual air molecules against a sail are averaged out to produce a less chaotic smoothed force on large scales).

Therefore, in an atom (or very near other charges in general) the electrons move chaotically due to the chaotic exchange of virtual photons with the nucleus and other charges like other electrons, and when an electron jumps between energy levels in an atom, the real photon you see emitted is just the resultant energy remaining after all the unobserved virtual photon contributions have been subtracted: so the distinction between classical and quantum waves is physically extremely straightforward!

Bridgman then gives a discussion of quantum radiation theory which is interesting. Max Planck was guided to the quantum theory of radiation from the failure of the classical theories of radiation to account for the distribution of radiant emission energy from an ideal (black body or cavity) radiator of heat as a function of frequency. One theory by Rayleigh and Jeans was accurate for low frequencies but wrongly predicted that the radiant energy emission tends towards infinity with increasing frequency, while another theory by Wien was accurate for high frequencies but underestimated the radiant energy emission at low frequencies. There were several semi-empirical formulae proposed by mathematical jugglers to connect the two laws together so that you have one equation that approximates the empirical data, but only Planck's theory was accurate and had a useful theoretical mechanism behind it which made other predictions.

There was general agreement that heat radiation is emitted in a similar way to radio waves (which had already been modelled classically by Maxwell in 1865): the surface of a hot object is covered by electrically charged particles (electrons) which oscillate at various frequencies and thereby emit radiation according to Larmor's formula for the electromagnetic emission of radiation by an accelerating charge (charges are accelerating while they oscillate; acceleration is the rate of change of velocity, dv/dt).

The big question is what the distribution of energy is between the different oscillators. If all the oscillators in a hot body had the same oscillation frequency, we would have the monochromatic emission of radiation which would be similar to a laser! Actually, that does not happen normally with hot bodies where you get a naturally wide statistical distribution of oscillator frequencies.

However, it's best to think in these terms to understand what is physically occurring behind Planck's equation for the distribution, although this was first understood not by Planck in 1901 but by Einstein in 1916 when Einstein was studying the stimulated emission of radiation (the principle behind the laser). In a hot object, the oscillators are receiving and emitting radiation.

Radiation received by an oscillator from adjacent oscillating charges can cause that oscillator to emit stimulated (laser-like) radiation of the same frequency as the radiation it receives; alternatively, the oscillator can emit radiation spontaneously, independently of the radiation falling upon it.

What Einstein realized was that the probability that an oscillator will undergo the stimulated emission of radiation is proportional to the intensity (not the frequency) of the radiation, whereas the probability that it will emit radiation spontaneously is independent of the intensity of the radiation. For the thermal equilibrium of radiation being emitted from a black body cavity, the ratio for an oscillator of the:

(stimulated radiation emission probability) / (spontaneous radiation emission probability) = 1/[e^(hf/(kT)) - 1]

This formula is Planck's radiation distribution law, albeit without the multiplier of 8*Pi*h*(f/c)^3. Notice that 1/[e^(hf/(kT)) - 1] has two asymptotic limits for frequency f:

(1) for hf >> kT, the exponential term in the denominator becomes large compared to the subtracted number 1, so we have the approximation: 1/[e^(hf/(kT)) - 1] ~ e^(-hf/(kT)).

(2) for hf << kT, the approximation e^x = 1 + x is accurate for small x, which gives: 1/[e^(hf/(kT)) - 1] ~ 1/[1 + (hf/(kT)) - 1] = kT/(hf).

The energy E = hf is Planck's quantum of energy, where f is frequency. The energy E = kT is the classical (equipartition) mean energy of an oscillator at temperature T.

Spontaneous emission of radiation predominates in black body radiation where the ratio hf/(kT) is high, i.e. for high frequencies in the spectrum, while more laser-like stimulated emissions predominate at low frequencies. This is because the intensity of the radiation is highest at the lower frequencies, giving a greater chance of stimulated emission.

So Planck's blackbody radiation spectrum law is a composite of two different things:

(1) the distribution of intensity of radiation (which is greatest for the lowest frequencies and falls for higher frequencies)

(2) the distribution of energy as a function of frequency, which depends not merely on the intensity as a function of frequency, but also on the photon energy as a function of frequency, which is not a constant! Since Planck uses E = hf, the energy carried per quantum increases in direct proportion to the frequency: the intensity (rate of photon emission) falls off with increasing frequency, but the energy per photon increases according to E = hf, so the energy-versus-frequency distribution differs from the intensity-versus-frequency distribution.

Really, to understand the mechanism behind the quantum theory of radiation, you need not just Planck's energy-versus-frequency distribution law, but additional graphs showing the underlying distribution of oscillator frequencies in the blackbody, which determines the energy emission when you insert Planck's E = hf law.

I.e., Planck argued that a black body with N oscillators (radiation-emitting conduction electrons on the surface of the filament of a light bulb, for instance) will contain X oscillators in the ground state of zero energy (i.e. X oscillators are not emitting any radiation), X*e^(-hf/(kT)) oscillators in the next highest state (of energy E = hf), X*e^(-2hf/(kT)) in the state after that (E = 2hf), and so on:

N = X + X*e^(-hf/(kT)) + X*e^(-2hf/(kT)) + ...

This gives you the distribution of intensity as a function of frequency f.

Planck then argued that the relative energy emitted by the oscillators is given by multiplying each term in the expansion by the energy of that state, i.e. 0, hf, 2hf, 3hf, and so on:

E(total) = 0 + hf*X*e^(-hf/(kT)) + 2hf*X*e^(-2hf/(kT)) + 3hf*X*e^(-3hf/(kT)) + ...

The ratio of [E(total)]/N is the mean energy per quantum in black body radiation, and by summing the two series and dividing the sums we find:

Mean energy per photon in blackbody radiation, [E(total)]/N = hf/[e^(hf/(kT)) - 1].
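
As a quick sanity check on this result (my own sketch, not from Bridgman's book), the two series above can be summed numerically and compared with the closed form hf/[e^(hf/(kT)) - 1]; the frequency and temperature below are arbitrary illustrative choices:

import math

h = 6.62607015e-34   # J*s, Planck constant
k = 1.380649e-23     # J/K, Boltzmann constant

def mean_energy_from_series(f, T, n_terms=2000):
    """Mean oscillator energy from truncated sums of the two series above."""
    x = h * f / (k * T)
    populations = [math.exp(-n * x) for n in range(n_terms)]  # X*e^(-n*hf/kT), with X = 1
    energies = [n * h * f for n in range(n_terms)]            # level energies 0, hf, 2hf, ...
    return sum(p * e for p, e in zip(populations, energies)) / sum(populations)

def mean_energy_closed_form(f, T):
    return h * f / math.expm1(h * f / (k * T))   # hf/[e^(hf/kT) - 1]

f, T = 5.0e14, 6000.0   # an optical frequency and a roughly solar surface temperature (assumed)
print(mean_energy_from_series(f, T))   # the two results agree to many decimal places
print(mean_energy_closed_form(f, T))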

Planck's radiation law is:

E_f = (8*Pi*f^2/c^3)*[mean energy per photon in blackbody radiation]

Therefore it is comforting to see that the complexity of the Planck distribution is due to the average energy per photon being hf/[e^(hf/(kT)) - 1]; apart from this factor, the law is really very simple! If the average energy per photon were constant (independent of frequency), then the radiation law would say that the energy per unit frequency is proportional to the square of the frequency. This of course gives rise to the "ultraviolet catastrophe" of the Rayleigh-Jeans law, which suggests that you get ever more energy emitted at extremely high frequencies (e.g., ultraviolet light). Planck's radiation law shows that the error in the Rayleigh-Jeans law is that there is actually a variation, as a function of frequency, of the mean energy of the emitted electromagnetic waves.

The mean photon energy hf/[e^(hf/(kT)) - 1] has two asymptotic limits for frequency. For hf >> kT, we find that hf/[e^(hf/(kT)) - 1] ~ hf*e^(-hf/(kT)), and for hf << kT, we find that hf/[e^(hf/(kT)) - 1] ~ kT. Therefore, at high frequencies, Planck's law E = hf controls the blackbody radiation with spontaneous emission of radiation. This gives an average energy per photon of hf*e^(-hf/(kT)) at high frequencies. But at low frequencies, stimulated emission of radiation predominates and the average energy per photon is then E = kT.
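
The spectral law and its two limits are easy to check numerically. The following sketch (mine, using the formula quoted above, E_f = (8*Pi*f^2/c^3)*hf/[e^(hf/(kT)) - 1]) compares Planck's law with the Rayleigh-Jeans form (8*Pi*f^2/c^3)*kT and the Wien form (8*Pi*f^2/c^3)*hf*e^(-hf/(kT)); the 6,000 K temperature is just an illustrative choice:

import math

h = 6.62607015e-34    # J*s
k = 1.380649e-23      # J/K
c = 2.99792458e8      # m/s

def planck(f, T):
    return (8 * math.pi * f**2 / c**3) * h * f / math.expm1(h * f / (k * T))

def rayleigh_jeans(f, T):
    return (8 * math.pi * f**2 / c**3) * k * T

def wien(f, T):
    return (8 * math.pi * f**2 / c**3) * h * f * math.exp(-h * f / (k * T))

T = 6000.0   # K (assumed, for illustration)
for f in (1.0e12, 1.0e13, 1.0e15, 1.0e16):   # from hf << kT up to hf >> kT
    print(f, planck(f, T), rayleigh_jeans(f, T), wien(f, T))
# At the lowest frequencies Planck's law approaches the Rayleigh-Jeans value;
# at the highest it approaches the Wien value, as stated above.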

It's a tragic shame that the Planck distribution law is not presented clearly in terms of the mechanisms behind it in popularizations of physics. To make it clearly understood, you need to understand the two mechanisms for radiation involved (spontaneous emission, which predominates at the low intensities accompanying the high frequency component of the blackbody curve, and stimulated laser-like emission, which predominates at the high intensities which accompany the low frequency part of the curve), and you need to understand that intensities are highest at the lower frequencies because there are more oscillators with the lower frequencies than higher ones. The reason why the energy emitted at any given frequency does not follow the intensity law is the variation in average energy per photon as a function of the frequency. By plotting a graph of the number of oscillators as a function of frequency and another graph of the mean energy per oscillator as a function of frequency, it is possible to understand exactly how the Planckian distribution of energy versus frequency is produced.

Sadly this is not done in any physics textbook or popular physics book I've seen (and I've seen a lot of them), which just give the equation and an energy-versus-frequency graph and don't explain the mechanism for the events physically occurring in nature that give rise to the mathematical structure of the formula and the graph! I think historically what happened was that Planck guessed the law from a very ad hoc theory around 1900, publishing the initial paper in 1901, but then around 1910 Planck improved the original theory a lot into a simple statistical theory of resonators with discrete oscillating frequencies, yet the actual mechanism, with the spontaneous and stimulated emissions of radiation contributing, was only established by Einstein in 1916. So textbook authors get confused and over-simplify the facts by ignoring the well-established physical mechanism for the blackbody Planckian radiation distribution. In general, most popular physics textbooks are authored by mathematical fanatics with a false and dogmatic religious-type ill-founded belief that physical mechanisms don't occur in nature, and that by eradicating all physical processes from physics textbooks the illusion can be maintained that nature is mathematical, rather than the reality that the mathematics is a way of describing physical processes. The problem with the more abstract mathematical models in physics is that they are just approximations that statistically work well for large numbers, and you get into trouble if you don't have a clear understanding of the distinction between the physical process occurring and the way that the equation works:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ - R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Compton effect

Bridgman gives a nice discussion on pages 17-18 of the Compton effect, which is a particle-type (rather than wave-type) interaction similar to a billiard ball collision. A gamma ray or X-ray hits an electron, scattering and imparting momentum (thus kinetic energy) to it, while a new "scattered" gamma ray (of lower energy than the incident gamma ray) moves off at an angle, like a billiard ball hitting another, imparting some energy to it and scattering off at an angle itself with reduced energy. Compton scattering is therefore described quite simply if the electrons are free like billiard balls. In reality, of course, most electrons are usually bound to atoms, but if the binding energy of the electron to the atom is much smaller than the energy of the incoming gamma ray, then it is a good approximation to ignore the binding energy and treat the electron as if it were free.
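
The billiard-ball kinematics are easy to put into numbers. This is my own minimal sketch of the standard free-electron Compton relation (not a worked example from Bridgman's pages 17-18): the scattered photon keeps E' = E/[1 + (E/0.511 MeV)*(1 - cos theta)], and the recoil electron takes the difference:

import math

ELECTRON_REST_ENERGY_MEV = 0.511   # m_e*c^2

def compton_scatter(photon_mev, theta_deg):
    """Return (scattered photon energy, electron kinetic energy) in MeV."""
    theta = math.radians(theta_deg)
    scattered = photon_mev / (1.0 + (photon_mev / ELECTRON_REST_ENERGY_MEV) * (1.0 - math.cos(theta)))
    return scattered, photon_mev - scattered

# Illustrative example: a 1.25 MeV gamma ray scattered through 90 degrees keeps
# about 0.36 MeV, giving roughly 0.89 MeV to the electron; backscatter (180
# degrees) transfers the most energy the electron can receive.
for angle in (30, 90, 180):
    print(angle, compton_scatter(1.25, angle))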

Photoelectric effect

Bridgman explains on p. 19 that when a photon is absorbed by a bound electron (rather than "scattering" from it as in the Compton process), the electron will be ejected if the energy of the photon exceeds the binding energy of the electron to the atom. The energy the electron will have is the energy of the incident photon minus the binding energy of the electron to the atom. This is the photoelectric equation of Einstein, 1905. Obviously, Einstein's equation is just approximate, because the impact of the photon will not merely affect the electron (as he assumed); some of the impact energy will also be passed on via Coulomb field interactions to the nucleus and thence the rest of the material. However, because momentum is conserved and the electron is thousands of times less massive than the nucleus, the recoil motion induced in the nucleus will be correspondingly smaller than that induced in the electron, so that the vast majority of the kinetic energy remains with the electron instead of being passed on to the nucleus and the rest of the material. So Einstein's photoelectric effect equation is a very good approximation to the real, more complicated, physical dynamics.
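
In code the photoelectric bookkeeping is trivial; the following sketch is mine (the roughly 88 keV lead K-shell binding energy and the 100 keV photon are just illustrative numbers, not an example from the book):

def photoelectron_energy_ev(photon_energy_ev, binding_energy_ev):
    """Einstein's photoelectric relation: ejected electron kinetic energy =
    photon energy - binding energy (nuclear recoil neglected).
    Returns None if the photon is below the threshold."""
    surplus = photon_energy_ev - binding_energy_ev
    return surplus if surplus > 0 else None

# Illustrative example: the K-shell binding energy of lead is about 88 keV, so
# a 100 keV photon absorbed by a K-shell electron ejects it with roughly 12 keV.
print(photoelectron_energy_ev(100e3, 88e3))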

Pair-production

Bridgman discusses pair-production on pp. 20-21. If a gamma ray of energy exceeding the rest mass equivalent of two electrons passes through an electric field of strength 1.3*10^18 V/m or more (i.e., at 33 femtometres from the centre of an electron or proton, or closer), the quanta in the electric field are intense enough to potentially interact with the field of the gamma ray and thereby decompose it into two opposite electric charges, each of which acquires a mass from the vacuum "Higgs field" (or whatever field will be discovered to contribute mass - i.e. gravitational charge - to fermions). This threshold field strength for pair-production was derived by Julian Schwinger: E_c = m^2*c^3/(e*h-bar) (see equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040). It corresponds to the limiting range out to which the vacuum contains virtual fermionic annihilation-creation spacetime loops, which polarize themselves radially around real charges like a capacitor's dielectric material, and thus shield part of the charge of the electron, causing the "running couplings" in QFT and the attendant need to renormalize electric charge, which appears stronger at small distances where there is less shielding.
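
The quoted numbers are easy to verify from standard constants alone; this sketch (mine) evaluates Schwinger's critical field E_c = m^2*c^3/(e*h-bar) and the distance from a unit charge at which the Coulomb field reaches that strength:

import math

m_e = 9.1093837015e-31      # kg, electron mass
c = 2.99792458e8            # m/s
e = 1.602176634e-19         # C
hbar = 1.054571817e-34      # J*s
k_coulomb = 8.9875517923e9  # N*m^2/C^2, i.e. 1/(4*pi*epsilon_0)

E_c = m_e**2 * c**3 / (e * hbar)
print(E_c)                   # ~1.3e18 V/m, the Schwinger critical field

# Radius at which the Coulomb field of a single elementary charge equals E_c:
r = math.sqrt(k_coulomb * e / E_c)
print(r * 1e15, "femtometres")   # ~33 fm, as quoted above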

(It's fascinating that Schwinger's threshold field strength required for pair production - vital for the IR cutoff in QFT - is physically being totally ignored in all the popular books on QFT, QM, and Hawking radiation. E.g., Hawking radiation is supposed to be gamma ray emission resulting from interactions after spontaneous pair production in the vacuum near the event horizon R = 2GM/c^2 of a black hole, but when you take account of Schwinger's threshold it turns out that you will only get Hawking radiation if the black hole has an electric charge proportional to the square of the mass of the black hole! Big uncharged black holes can't physically radiate any Hawking radiation. However, fundamental charged particles are extremely efficient Hawking radiators and a corrected form of the Hawking radiation mechanism will physically explain the emission and thus exchange of electromagnetic field quanta by fundamental particles.)

A proper theory of pair-production will explain how bosonic energy acquires rest mass when it becomes fermionic energy, and this isn't a part of the Standard Model of particle physics (mass is described by various types of problematic "Higgs fields" in the existing Standard Model, none of which have been detected, and all of which are ad hoc epicycles, which don't contribute anything to the predictive power of the Standard Model; there's no evidence for electroweak symmetry and the Weinberg mixing angle for the neutral electromagnetic and weak field gauge bosons is totally ad hoc and doesn't specifically require a Higgs field, or prove that the two fields are unified in the way expected at high energy).

Bohr model of the atom

Bridgman deals very nicely with the Bohr atom on pages 22-29. J. J. Thomson "discovered" (or at least measured a fixed charge-to-mass ratio, for cathode rays) the electron in 1897, and then developed a theory of the atom as a mixed pudding of positive and negative charges. He argued that there could not be a separation of charges within the atom, because that would make the atom unstable and liable to collapse. However, it's hard to see how a mixture of positive and negative charges will be more stable. Rutherford settled the matter by having two research students, Geiger and Marsden, fire alpha particles through thin gold foil and measure the angles of scatter. Some of the alpha particles were scattered back towards the source, and from the distribution of scattering angles Rutherford was able to deduce that the simplest working hypothesis that fitted the data was a central positively charged massive nucleus surrounded by the negatively charged electrons.

Bohr then suggested that the electrons orbit the nucleus rather like planets orbiting the sun, but with the Coulomb attraction of negative and positive charge replacing gravitation. Hence, for hydrogen atoms Bohr set the Coulomb force between an electron and a proton equal to the centripetal force, F = -mv^2/r. Rearranging the result allowed the orbital speed of the atomic electron to be deduced, v = (e^2/{4*Pi*permittivity*M*R})^(1/2), where M is the electron's mass and R is the radius of the orbit. The linear momentum is then p = Mv, the angular momentum is L = pR, and the kinetic energy is E = (1/2)Mv^2.

Bridgman points out on page 24 that:

"Bohr's unique contribution was his postulate that the electron's angular momentum, L, could take only discrete values [integer multiples of L = pR = n*h-bar, where n = 1, 2, 3, etc.]."

This leads to the correct quantization, which for the total (potential plus kinetic) electron energy gives rise to the line spectra formulae for the wavelengths of light emitted by atomic electrons, such as the Lyman, Balmer, and Paschen series formulae.
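
As a concrete illustration (my own sketch of the textbook Bohr results, not a calculation from Bridgman's pages 22-29), the quantized hydrogen levels E_n = -13.6 eV/n^2 reproduce the line series wavelengths directly:

RYDBERG_EV = 13.6057    # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84      # h*c in eV*nm

def level_energy_ev(n):
    """Bohr energy of hydrogen level n, in eV."""
    return -RYDBERG_EV / n**2

def line_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower jump."""
    photon_ev = level_energy_ev(n_upper) - level_energy_ev(n_lower)
    return HC_EV_NM / photon_ev

# Balmer-alpha (3 -> 2) comes out near 656 nm, the familiar red hydrogen line;
# Lyman-alpha (2 -> 1) near 122 nm in the ultraviolet; the Paschen lines
# (n -> 3) lie in the infrared.
print(line_wavelength_nm(3, 2))
print(line_wavelength_nm(2, 1))
print(line_wavelength_nm(4, 3))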

Bridgman adds in an enlightening footnote on that page:

"At first exposure one is tempted to ask the question 'How on earth did Bohr come to that conclusion? Why not discrete linear momentum, or energy, or etc.?' Certainly the answer is not obvious from our foreshortened discussion. The actual historical logic can be found in Jammer where it can be seen that the correct postulate came only after several unsuccessful ones."

Rutherford rejected Bohr's hypothesis because it failed to explain why the acceleration of the orbiting electron did not cause it to continuously radiate energy as electromagnetic waves, and thus slow and spiral into the nucleus and oblivion within a second. Rutherford wrote to discourage Bohr:

“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”

- Ernest Rutherford's letter to Niels Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)

Bohr never came up with a mechanism that explained the failure of classical electromagnetism; instead he worked out a Mach-type "positivist" philosophy against asking awkward questions of models that make accurate predictions (which Ptolemy's epicycle followers had, hundreds of years earlier, used to try to suppress Copernicus): the complementarity and correspondence principles, which Einstein attacked at the 1927 Solvay Congress on modern physics and thereafter. According to Bohr, nature corresponds to classical physics on large scales where the action is much bigger than Planck's constant, and to quantum mechanics on small scales where the action is on the order of Planck's constant. Wave descriptions of matter complement rather than contradict particle descriptions, and we must religiously believe in his dogma that there is no possibility of reconciling classical and quantum physics; we must believe that nature has discontinuities and must not ask questions or try to find answers, because it is a waste of time. (This is like the false belief in the 19th century - before stellar line spectra were detected - that nobody would ever know the composition of stars, because they are too hot and too far away to investigate.)

Bohr therefore opposed quantum field theory in the modern second quantization form of Feynman's path integrals (central to the Standard Model today) at the 1948 Pocono conference:

" ... Bohr ... said: '... one could not talk about the trajectory of an electron in the atom, because it was something not observable.' ... Bohr thought that I didn't know the uncertainty principle ... it didn't make me angry, it just made me realize that ... [ they ] ... didn't know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up ..."

- Richard P. Feynman, in Jagdish Mehra, The Beat of a Different Drum, Oxford, 1994, pp. 245-248. (For the story of how Dyson and Bethe overcame hostility and forced the scientific community to lower their guard against path integrals, see Dyson's YouTube video linked here.)

Feynman completely debunks the uncertainty principle (first quantization) quantum mechanics philosophy in his 1985 book QED:

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas ... But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when ...” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

- Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56 (footnote).

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn't enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

In QED, the explanation for the question, "why doesn't the orbiting electron radiate and spiral into the nucleus?" is simply: equilibrium between emission and reception. The orbital electron is radiating intensely, but because there is a well-established equilibrium between the intense rate of emission and the intense rate of reception, the radiation appears as invisible "field quanta" to us instead of doing work (i.e., it doesn't make any electrons jump energy levels!). Whenever you have an equilibrium of emission and reception, you just have a zero point field. If you think of the motion of air molecules hitting you, it is not possible to extract that particle energy usefully to do work. Air pressure exerts a force, but it's not possible to get useful work out of it. The vacuum field of virtual photons or field quanta is similar; electrons are bombarded on all sides and there is no useful net work done, no net electric current or anything. So the radiation from a lot of orbiting electrons soon creates an equilibrium, and once the electrons are receiving as much radiant power in virtual photons as they radiate, they have attained a "ground state", ceasing to spiral into the nucleus because there is no longer any useful work being done on them to push them in towards the nucleus.

Bridgman then explains the various types of radiation emission from the nucleus, alpha particles (which escape from the nucleus as stable configurations by "quantum tunnelling" through the quantized binding field), beta particles (which have a continuous energy spectrum with a mean energy of usually one-third of the total energy released in beta decay, the remainder of the energy being carried by an antineutrino), and gamma rays (which are released from the nucleus in discrete energies which suggests a shell structure for the nucleons in the nucleus, analogous to Bohr's explanation of the line spectra of light from atomic electrons).

Chapters 2, 3, and 4 (Fission Explosives: Neutronics, Fission Explosives: Thermodynamics, and Fusion Explosives)

As stated, I will skip a detailed discussion of these chapters, which go further than declassified reports like Glasstone and Redman's 1972 introduction to nuclear weapons, LAMS-2532, Vol. 1, LA-1006, and LA-1.

Neglecting the detailed calculations of nuclear weapon design and prediction of efficiency, the general physics of how a chemical explosive like TNT compresses a solid metal uranium or plutonium core is of interest because it indicates the minimum possible size for a spherical implosion nuclear weapon. Smaller diameters for cannon shells can be achieved by gun-type assembly, or by linear implosion, where a piece of fissile material is simply compressed in one dimension rather than in three dimensions; these designs are both less efficient than spherical implosion but are necessary to fit a nuclear weapon into a small-diameter cannon shell.

On page 121, Bridgman gives curves from M. van Thiel's Compendium of Shock Data (Livermore Radiation Lab., report UCRL-50108, v. 1, June 1966) which show how much pressure is needed to achieve given increases in the density of metallic uranium and (alpha phase) plutonium. Doubling the density of plutonium metal requires 4.9 Mbar, and 10 Mbar is needed to do the same to uranium. Doubling the density will shrink the radius by a factor of 2^(1/3) = 1.26, so a 6.2 kg solid plutonium core (the Nagasaki bomb had 6.2 kg of plutonium, but it was not solid since it had an initiator in the centre) shrinks from 4.2 cm radius to just 3.3 cm radius, a reduction of 0.9 cm due to compression.

Work energy, E = Fx = PAx = (4.9*10^6 bars * 10^5 Pa/bar)*(4*Pi*0.042^2 square metres)*(0.009 metres) = 9.8*10^7 J. [This calculation is mine, a back-of-the-envelope estimate: it is not based on the detailed numerical calculations of implosion in Bridgman's book, and is not accurate for bomb design; it is just to give an indication of what kind of mass is needed, so that the bulk and mass of the terrorist threat can be seen.]

This is the amount of implosion energy needed to double the density of a 6.2 kg solid plutonium core. However, Newton's 3rd law tells us that you can't make a force act in one direction; you always get an equal and opposite reaction force. So in implosion, only 50% of the force of the TNT explosion goes inward as a shock wave to compress the core (the rest acts outwards).

Since TNT produces 4.2*10^12 J per kt, for the 50% efficiency suggested by Newton's 3rd law you need about 46 kg of TNT to compress a plutonium core to double density.
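
For anyone who wants to rerun the arithmetic, here is the same back-of-the-envelope estimate as a short script (my own rough numbers, as in the bracketed note above, not Bridgman's detailed implosion calculations; the 19.8 g/cm^3 alpha-phase plutonium density is an assumed round value):

import math

PU_ALPHA_DENSITY = 19.8e3    # kg/m^3, alpha-phase plutonium (approximate, assumed)
CORE_MASS = 6.2              # kg
P_DOUBLE = 4.9e6 * 1.0e5     # Pa: the 4.9 Mbar quoted above to double the density
TNT_J_PER_KG = 4.2e6         # J/kg (4.2e12 J per kt, 1e6 kg per kt)

volume = CORE_MASS / PU_ALPHA_DENSITY
r_initial = (3 * volume / (4 * math.pi)) ** (1.0 / 3.0)   # ~4.2 cm
r_compressed = r_initial / 2 ** (1.0 / 3.0)               # ~3.3 cm at double density
dr = r_initial - r_compressed                             # ~0.9 cm

area = 4 * math.pi * r_initial**2
work = P_DOUBLE * area * dr          # E = P*A*dx, ~1e8 J

tnt_mass = 2 * work / TNT_J_PER_KG   # factor 2: only ~half the detonation force acts inward
print(r_initial * 100, r_compressed * 100, work, tnt_mass)   # ~4.2 cm, ~3.3 cm, ~9.5e7 J, ~45 kg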

However, this calculation omits the fact that the core isn't all compressed simultaneously. As the shock wave from the TNT reaches the core, it compresses first the outside of the core, and gradually compresses more of the core as it progresses inward, taking something like 8 microseconds to reach the middle and rebound. So the implosion process is extremely complicated to model and requires sophisticated computer calculations. Adding a natural uranium or lead tamper around the core might seem like a good idea to delay the expansion of the core and allow a fission chain reaction to spontaneously set off a nuclear explosion, but that is wrong: it adds more mass to the bomb, and requires more TNT to compress, and the extra force of the rebounding implosion wave when it reaches the centre will disassemble the core just as before.

This indicates the minimum size and mass of an efficient implosion weapon which might be smuggled in by terrorists. It won't fit into a suitcase. It does not even follow that terrorists can produce a nuclear weapon from so much plutonium and TNT, because you have to inject neutrons to start the chain reaction during the few microseconds it takes for the implosion shock wave to travel through the core, compressing it. Once the shock wave reaches the centre of the core, a few microseconds after entering it (at velocities on the order of 10 km/s inside the core), it rebounds and soon causes the core to expand and become subcritical again. The requirement for bomb design - even for the simplest device - is complex and involves detailed calculations and design to ensure that the neutron chain reaction is initiated at the right time after the TNT has been detonated simultaneously at many points around the surface.

It shows that the theft of plutonium by itself is not a particularly great threat: terrorists would need a great deal more technology to make a nuclear weapon from a piece of plutonium than to simply use conventional explosives like TNT for terrorism. The main threat is therefore rogue states, which can afford to invest heavily in the technology and research required to make neutron initiators correctly linked to the electrical firing system and the simultaneous detonators of the TNT system.

Miniaturization technologies like beryllium neutron reflectors and tritium boosting are so expensive that - regardless of what information they had on the subject - these improvements would not be available to terrorists, so any terrorist nuclear weapon which posed a massive threat would itself be large in physical size and therefore difficult to deliver, not at all the "suitcase bomb."

It would be easier for terrorists to use conventional chemical explosives than nuclear explosives. Only with a massive investment in laboratory technology and research could the firing and neutron initiation system for a threatening nuclear weapon be produced.

Injecting 1 gram of tritium gas into the hollow core of an 83 kt fission weapon "boosts" its yield by about 50% to 124 kt, of which 0.135 kt is due to the fusion of tritium and nearly 41 kt is due to additional neutrons induced in the core material by the fusion neutrons. This technology is beyond most nations; even America finds tritium production a costly business. The nuclear reaction cross-section for the fusion of tritium + deuterium into helium and a neutron plus 17.6 MeV of energy is roughly 100 times higher than the cross-section for deuterium + deuterium fusion, so the tritium + deuterium reaction (which produces 1.49*10^24 neutrons/kt and 80.6 kt/kg of energy when fused) is all-important in thermonuclear weapons. This requires the use of lithium deuteride capsules which are ablated by X-rays in the Teller-Ulam system, which again requires elaborate laboratory technology plus very sophisticated three dimensional calculations using computers for research and development, which we need not discuss.

Bridgman discusses the temperature and partition between X-rays and case shock (kinetic) energy in the exploding bomb using a similar treatment to Brode's 1968 published article in the Annual Review of Nuclear Science, vol. 18, but in greater depth. The total energy at explosion time is the sum of X-ray energy aVT^4 (where a is the radiation constant, which is 4/c times the Stefan-Boltzmann constant, V is volume, and T is temperature) and material energy MCT (where M is mass, C is the specific heat capacity of that mass at constant volume, which is 3R/2 for a perfect solid or 3R = 0.02494 cal/(g*K) for an ideal gas such as the highly ionized bomb vapours, and T is temperature): E = aVT^4 + MCT. For heavy inefficient nuclear weapons (such as a terrorist improvised device) very little energy is emitted in X-rays by this formula, so it can't be used to initiate a Teller-Ulam thermonuclear reaction. A bomb temperature of 11.6 million K corresponds to X-ray radiation quanta of 1 keV. DNA-EM-1 considers X-ray energies from 1 keV to 10 keV for modern nuclear weapons, corresponding to peak bomb temperatures of 11.6 to 116 million K. Only for a high yield-to-mass ratio is there a large proportion of the yield emitted in X-rays which can initiate a Teller-Ulam thermonuclear charge reaction.
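
A short sketch of that partition formula makes the yield-to-mass point explicit. The masses, volume and temperature below are purely illustrative placeholders of my own choosing (not figures from Bridgman's or Brode's worked examples); only the formula E = aVT^4 + MCT and the 3R specific heat value are taken from the text above:

A_RAD = 7.566e-16             # J/(m^3*K^4), radiation constant a = (4/c)*Stefan-Boltzmann
CAL = 4.186                   # J per calorie
C_GAS = 0.02494 * CAL * 1e3   # J/(kg*K), the 3R ideal-gas value quoted above

def xray_fraction(mass_kg, volume_m3, T_kelvin):
    """Fraction of the total energy E = a*V*T^4 + M*C*T residing in thermal X-rays."""
    e_rad = A_RAD * volume_m3 * T_kelvin**4
    e_matter = mass_kg * C_GAS * T_kelvin
    return e_rad / (e_rad + e_matter)

# Illustrative comparison at an assumed 20 million K and 0.1 m^3: a heavier
# device locks more of the yield up in material energy, leaving a smaller
# X-ray fraction - the point made in the paragraph above.
for mass in (100.0, 1000.0, 5000.0):      # kg, assumed
    print(mass, xray_fraction(mass, 0.1, 2.0e7))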

X-Ray Effects

In the chapter on X-ray effects, page 197, Bridgman states that a weapon with a several cm thick dense outer casing and a yield of a few kilotons will be relatively "cold" with an X-ray radiating temperature of 1 keV or less. However, if the outer casing is thin and of lower atomic number, it can be fully ionized and can emit X-rays with a mean energy of several keV. Efficient megaton yield weapons can emit 10 keV X-rays. These X-rays can be used to pump an X-ray laser or to indiscriminately ablate, deflect and destroy re-entry vehicles in outer space over a wide volume during a concentrated nuclear attack. X-ray weapons for high altitude use need special design to minimise the fission yield and the prompt gamma ray output (including that due to inelastic neutron scatter in the case), or there can be substantial damaging EMP effects at ground level.

We mentioned in an earlier post that:

"Ablation can be explained very simply and is very well understood because it's the mechanism by which fission primary stages ignite fusion stages inside thermonuclear weapons: 80% of the energy of a nuclear explosion is in X-rays and the X-ray laser would make those X-rays coherent and focus some of them on to the metal case of an incoming enemy missile. The result is the blow-off or 'ablation' of a very thin surface layer of the metal (typically a fraction of a millimetre). Although only a trivial amount of material is blown off, it has a very high velocity and carries a significant momentum. The momentum isn't immense but it creates a really massive force on account of the small time (about 10 nanoseconds) over which it is imparted (this is because force is the rate of change of momentum, i.e. F = dp/dt), and since pressure is simply force per unit area, you get an immense pressure due to Newton's 3rd law of motion (action and reaction are equal and opposite, the rocket principle).

"Hans Bethe and W. L. Bade in their paper Theory of X-Ray Effects of High Altitude Nuclear Bursts and Proposed Vehicle Hardening Method (AVCO Corp., Mass., report RAD-TR-9(7)-60-2, April 1960) proposed that missiles can be hardened against X-ray induced ablative recoil by using a layer of plastic foam to absorb reduce the force within the missile by spreading out the change of momentum over a longer period of time, but although this will protect some internal components from shock damage, the missile skin can still be deflected, dented and destroyed by ablation recoil."

Bridgman's book quantifies the X-ray ablation effect on pp. 212-5:

"The energy deposited by X-ray absorption occurs in a very short time, essentially the time duration of the x-ray pulse, perhaps several shakes [1 shake = 10 nanoseconds] for the direct x-rays. because of inertia, the target material will not significantly expand, contract or translate in such a short time. Thus, the energy deposited can be regarded as an instantaneous increase in material internal energy."

He considers a graphite (carbon) heat shield exposed to 10 cal/cm^2 of 4 keV x-rays. The sublimation energy (energy to vaporize a solid directly) of pyrolytic carbon is 191 cal/gram = 800 kJ/kg. The 10 cal/cm^2 deposits 1.54 MJ/kg of energy on the front surface of the carbon, so it vaporizes and "blows off" the surface. For a fluence of 10 cal/cm^2, the 4 keV x-ray vaporization extends to an effective depth of 81 microns in the pyrolytic carbon. (For double that x-ray fluence, i.e. 20 cal/cm^2, the depth of surface blow-off will be increased by a factor of 1/ln(2) ~ 1.44.) On p. 213, Bridgman explains:

"... the vaporized material is often referred to as a blow-off ... there is a rocket exhaust-like momentum which is discharged in a very short time. A time rate of change of momentum to the left of the front surface must be balanced by an equal and opposite time rate of change of momentum or pressure to the right into the shield. This equal and opposite pressure becomes a shock wave into the solid material ..."

Bridgman computes the kinetic energy of the blow-off in the example (for 10 cal/cm^2 of 4 keV x-rays striking pyrolytic carbon) to be 58.9 kJ/m^2, corresponding to a blow-off velocity of 813 m/s. Assuming that the x-ray pulse lasts 20 ns, the ablation recoil force will be on the order of F = dp/dt ~ mv/t, implying an immense pressure of 72 kbar or about 72,000 atmospheres.
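
Those numbers can be reproduced with a few lines, taking the blow-off kinetic energy and velocity quoted above and the assumed 20 ns pulse (my own sketch of the F = dp/dt estimate, not Bridgman's full treatment):

ke_per_area = 58.9e3    # J/m^2, blow-off kinetic energy (Bridgman's example)
velocity = 813.0        # m/s, blow-off velocity
pulse = 20e-9           # s, assumed x-ray pulse duration

mass_per_area = 2.0 * ke_per_area / velocity**2   # kg/m^2, from KE = (1/2)*m*v^2
momentum_per_area = mass_per_area * velocity      # (kg*m/s) per m^2
pressure_pa = momentum_per_area / pulse           # Pa; recoil pressure = rate of momentum change per unit area
print(pressure_pa / 1.0e8, "kbar")                # ~72 kbar (1 kbar = 1e8 Pa)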

Bridgman adds that a 4 keV x-ray fluence of 5 cal/cm^2 deposits just under the sublimation energy of the carbon shield, so it will not be able to cause any blow-off, but it will still deposit energy in the outer 0.5 mm of the shield and impart momentum, producing a peak surface pressure on the shield of 14.9 kbar, or about 15,000 atmospheres.

Thermal Effects

Bridgman gives theoretically calculated thermal transmission values for surface explosions which are considerably smaller than in Glasstone and Dolan and other sources. He makes it clear in Fig. 6-1 on p. 237 that the thermal yield is maximised for a burst altitude of about 47 km, where it is 64% of total yield for a 100 kt device. For the same device detonated at sea level it is 35% (with the remainder in blast and nuclear radiation) and for the same device detonated above 75 km it is 25% (with the remainder emitted as X-rays and nuclear radiation).

Bridgman on page 247 calculates that the fireball surface radiating temperature at the time that the shock wave departs from it ("breakaway" time according to Glasstone and Dolan) decreases from 300,000 K for a sea level burst to 75,000 K for a burst altitude of 20 km. This occurs at about 3.13W^0.44 ms after burst for a W kt explosion at sea level (Bridgman quotes this formula from page 233 of Northrop's book).
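Taking that scaling law at face value (with W in kilotons), a trivial evaluation gives about 3 ms for 1 kt and roughly 65 ms for 1 Mt:

```python
# Worked check of the breakaway-time formula quoted above (Northrop, p. 233):
# t_breakaway ~ 3.13 * W^0.44 milliseconds for a W-kiloton sea-level burst.
for W_kt in (1, 10, 100, 1000):
    t_ms = 3.13 * W_kt**0.44
    print(f"W = {W_kt:5d} kt: breakaway at ~ {t_ms:6.1f} ms")
```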

Blast Effects

Bridgman derives Glasstone and Dolan's "Rankine-Hugoniot equations" for ideal shock fronts on pages 281-4. He gives a graph of Mach stem heights (not included in Glasstone and Dolan, but included in the Capabilities series from 1957 onwards) on page 293. On pages 295-297 he quotes research by Charles Needham on the correlation of nuclear test data on blast from high yield devices, which showed that you get a natural reduction in peak pressures from megaton yield devices because the blast wave energy refracts upwards into the lower density air at higher altitudes when the shock radius is on the order of the 4.3 miles or 6.9 km scale height of the atmosphere (the altitude interval over which sea-level air density falls by a factor of e = 2.718):

"During the research which went into the 1 kt nuclear blast standard, we looked very carefully at the blast data from the Pacific as well as from Nevada. We found that the majority of measured pressures from the Pacific data, whether at ground level or from airborne gauges, did not cube root scale to the same pressure versus radius curve that the Nevada Test Site data did. We found that the multimegaton data consistently fell below the calculated curves and the NTS data which agreed with the one-dimensional calculations. Further, we found that the data from small yields shot in the Pacific (there were a few) did agree with the NTS data. More sophisticated two-dimensional calculations confirmed that as the shock radius became an appreciable fraction of the scale height in the atmosphere, more energy went up than out."

Bridgman then gives an analysis of blast gust loading on aircraft. (On p. 495, he also points out that thermal radiation can be important for aircraft metal skins, which melt at 580 C and can only safely take a temperature of 204 C, corresponding to a 20% change in skin elasticity.) He then analyses blast loading on buildings. He considers a building with an exposed area of 163 square metres, a mass of 455 tons and a natural frequency of 5 oscillations per second, and finds that a peak overpressure of 10 psi (69 kPa) and a peak dynamic pressure of 2.2 psi (15 kPa) at 4.36 km ground range from a 1 Mt air burst detonated at 2.29 km altitude, with overpressure and dynamic pressure positive durations of 2.6 and 3.6 seconds respectively, produce a peak deflection of 19 cm in the building about 0.6 second after shock arrival. The peak deflection is computed from Bridgman's formula on p. 304 for the deflection at time t,

x_t = [A/(fM)] ∫ sin(ft) (P_t + C_D q_t) dt metres,

where A is the cross-sectional face-on area of the building facing the blast (e.g., 163 square metres), f is the natural frequency of oscillation of the building (e.g., 5 Hz), M is the mass of the building, P_t is the overpressure at time t, C_D is the drag coefficient of the building for wind pressure (C_D = 1.2 for a rectangular building), and q_t is the dynamic pressure at time t. (There is a related calculation of the peak deflection of a structure on pages 250-284 of the 1957 edition of The Effects of Nuclear Weapons.) Bridgman points out that this equation ignores:

(1) the fact that the net force from the overpressure suddenly ends once the shock front has engulfed the building and is pressing on the rear side with a similar pressure to that on the front side, and

(2) the damping of the building oscillations due to the energy lost in causing damage or destruction of the walls and other components of the building.

The effect of these limitations can easily be incorporated into the model by (1) calculating the time taken for the shock front to traverse the length of the building, and (2) using nuclear test data to indicate the peak pressure associated with a given degree of damage or destruction (this allows the amount of deflection of walls to be correlated to the probability that the wall fails).
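To show how a deflection integral of this form can be evaluated numerically, here is a hedged Python sketch that interprets Bridgman's expression as the standard undamped single-degree-of-freedom (Duhamel) convolution and drives it with idealised linearly decaying overpressure and dynamic-pressure pulses. The pulse shapes, the use of 2*pi*f as the angular frequency, and the time step are my assumptions for illustration only; Bridgman's 19 cm figure was obtained from the actual blast waveforms, which are not reproduced here, so this sketch will not match that number exactly.

```python
import numpy as np

# Hedged sketch: undamped single-degree-of-freedom response to blast loading,
# interpreting the deflection integral above as the Duhamel convolution
#   x(t) = [A/(omega*M)] * integral_0^t sin(omega*(t - tau)) [P(tau) + C_D*q(tau)] dtau.
# The linearly decaying pulse shapes below are ASSUMPTIONS for illustration;
# real overpressure and dynamic-pressure waveforms must come from blast curves.

A      = 163.0          # exposed face-on area, m^2 (from the text)
M      = 455e3          # building mass, kg (455 tons)
f_hz   = 5.0            # natural frequency, Hz
omega  = 2*np.pi*f_hz   # angular frequency, rad/s (interpretation assumed)
C_D    = 1.2            # drag coefficient for a rectangular building
P0, tP = 69e3, 2.6      # peak overpressure (Pa) and positive duration (s)
q0, tq = 15e3, 3.6      # peak dynamic pressure (Pa) and positive duration (s)

P = lambda t: np.where(t < tP, P0*(1 - t/tP), 0.0)   # assumed triangular pulse
q = lambda t: np.where(t < tq, q0*(1 - t/tq), 0.0)   # assumed triangular pulse

t = np.linspace(0.0, 2.0, 4001)          # 0.5 ms steps, ~400 points per period
dt = t[1] - t[0]
load = P(t) + C_D*q(t)                   # net loading pressure, Pa

# Duhamel convolution evaluated by direct (left Riemann) summation, which is
# accurate enough for a sketch at this time step:
x = np.array([A/(omega*M) * np.sum(load[:i]*np.sin(omega*(t[i]-t[:i])))*dt
              for i in range(len(t))])

print(f"peak deflection ~ {x.max()*100:.1f} cm at t = {t[x.argmax()]:.2f} s")
```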

This 19 cm computed maximum deflection allows us to estimate how much energy is permanently and irreversibly absorbed from the blast wave by a building and transformed into slow-moving (relative to the shock front) debris, which falls to the ground and is quickly stopped after the blast has passed by: E = Fx, where F is force (i.e., the product of total pressure and area) and x is the distance moved in the direction of the applied force from the blast wave. If the average pressure for the first 0.5 second is equal to 12 psi (83 kPa), then the average force on the building during this time is about 13.5 million newtons (83 kPa acting on 163 square metres), and the energy absorbed is:

E = Fx = 13,500,000*0.19 ~ 2.6 MJ.

This is interesting because we have already discussed the problem Penney found: a large attenuation in peak overpressures due to the irreversible energy loss caused by the damage done at Hiroshima and Nagasaki. Although you might expect some overpressure to diffract downwards as the energy is depleted near ground level, the fall in air density with increasing altitude will tend to prevent this. In any case, only the blast overpressure diffracts. Dynamic pressure is a directional (radial) wind effect which does not diffract downwards. Hence, blast energy lost from the wind (dynamic) pressure cannot be compensated for by downward diffraction. This is why shallow open trenches provided perfect protection against wind drag forces at nuclear tests in the 1950s, even though the overpressure component of the blast did diffract into them: the wind just blows over the top of the trench without blowing down into it!

Initial Nuclear Radiation

Bridgman discusses the neutron output spectra given by Glasstone and Dolan (1977), which are of course simplified from more detailed data in Dolan's formerly classified manual, EM-1. The pure fission weapon output indicates that 50% of the neutrons available escape and therefore 50% are captured in the weapon debris. For the typical thermonuclear weapon, fewer neutrons escape. Prompt gamma rays are not produced by fusion, but can be produced when neutrons are inelastically scattered by some nuclei, exciting nucleons within those nuclei to a high energy state.

Residual Radiation

Page 401 states that the mass of fallout produced by a surface burst varies from 800 tons/kt for 1 kt to 300 tons/kt for 1 Mt total yield. Bridgman presents the details of the fallout particle-size distribution, cloud rise, diffusion and deposition as mathematical models.

The book then goes into the biological effects of radiation. Animals are approximately 70% water, so most of the radiation interactions in the body are related to the ionization of water molecules by radiation. Water molecules, H2O, when ionized form H+ and OH- ions. At low dose rates the rate at which these are produced is small, so two are unlikely to be formed close together. At higher dose rates it is more likely that there will be nearby ions, so recombination between them can form molecules like the oxidising agent hydrogen peroxide (2OH -> H2O2), which is a chemical poison in high concentrations. Cell nuclei contain chromosomes consisting of DNA molecules. Genes are sections of DNA which carry the instructions for producing a particular protein molecule. Protein molecules in the nucleus work as enzymes, repairing damage to DNA and controlling cellular processes like division. Eggs are examples of single cells. Bridgman discusses only the basic physical processes involved in the biological effects of radiation, and does not evaluate all of the mechanisms and experimental evidence for a non-linear dose-effects response in long-term effects.

It would be good if the book included a look at some of the ways that radiation damage can be prevented or reduced by harmless natural vitamins and minerals. According to the March 1990 U.S. Defense Nuclear Agency study guide DNA1.941108.010, report HRE-856, Medical Effects of Nuclear Weapons (the guide book to a course sponsored by the Armed Forces Radiobiology Research Institute, AFRRI, Bethesda, Maryland), the free radicals and hydrogen peroxide molecules created from ionized water can be converted back into water molecules by vitamins A, C, and E, glutathione, and the mineral selenium. Vitamins A, C, and E, and glutathione, help to scavenge free radicals as they are formed by ionization and so prevent oxidation-type damage. The natural enzyme catalase breaks down hydrogen peroxide into harmless water and oxygen. Selenium as a dietary supplement has a similar function in combination with glutathione. Animal experiments on the benefits of vitamin E for protection against large doses of radiation are reported graphically in that guide. In control experiments (no vitamin E supplement present in the body at exposure time), there was 90% lethality within 30 days after 750 R and 100% lethality within 30 days after 850 R. When vitamin E was supplied, there was 100% survival at 30 days after 750 R and 60% survival at 30 days after exposure to 850 R. Hence, vitamin E can massively enhance survival probability after radiation exposure, by helping to eliminate radiation-caused free radicals before they can cause any damage to DNA. Ignorant anti-civil defence propaganda ignores all the hard-won scientific evidence and then falsely claims that no protection is possible by any means, least of all dietary supplements. It is true that the doses of natural anti-oxidants needed for protection against lethal radiation exposure can cause toxic side-effects in some cases, but if the alternative is the lethal effect of radiation then such side effects may be acceptable. The guide also shows that the LD50 from radiation alone at the Chernobyl nuclear disaster in 1986 was 600 rads, compared to just 260 rads for 97 Nagasaki personnel who received thermal burns in addition to nuclear radiation. The nuclear radiation proved more lethal in combination with thermal burns because the burn wounds became infected at a time when the radiation temporarily suppressed the white blood cell count (which occurs from 1-8 weeks after exposure), preventing the infections from being fought effectively by the immune system. Preventing thermal burns by simply ducking and covering therefore massively increases the nuclear radiation LD50.

Dust and Smoke Effects

Bridgman's Chapter 13 is on "Dust and Smoke Effects", which of course is not included at all in Glasstone and Dolan (1977). Hype began in 1983, with Carl Sagan et al. ("TTAPS") predicting a new temporary ice age due to a temperature reduction caused by smoke clouds from mass fires blocking sunlight after a nuclear attack. In firestorms like those at Hamburg or at Hiroshima (after a nuclear detonation), in wood-frame, highly flammable cities of a kind which no longer exists in modern countries, the soot was accompanied by moisture, and all visible sign of it had come down as "black rain" within an hour or so of the explosion. We have documented in some detail many of the gross falsehoods about thermal ignition due to nuclear weapons in forests and cities in an earlier post. Early editions of The Effects of Nuclear Weapons grossly exaggerated thermal ignition.

Smoke and dust clouds are rapidly produced near ground level, and these shield material from ignition by the remainder of the thermal radiation flash; the early part of the flash does not penetrate deeply enough into the material to cause ignition, just ablation-type smoke emission which shields the underlying material. This is before shadowing effects in a forest or city are included (at significant distances, the thermal pulse is over by the time the blast arrives and causes the possible displacement of objects which shield thermal radiation). While it is true that a room in a wooden hut deliberately crammed full of inflammable rubbish, with a large window facing ground zero without any obstruction, underwent nearly immediate "flashover" after the Encore nuclear test, an identical set-up nearby with a tidy room without the inflammables did not burn down: some items were scorched, but they burned out without setting the room on fire. In addition, people in brick or concrete buildings near ground zero in the Hiroshima firestorm were able to put out fires and prevent their buildings from burning down.

They did not die from radiation, blast, heat, smoke or carbon monoxide poisoning. Nuclear tests on oil and gas storage tanks in Nevada showed that even at the highest peak overpressures and thermal radiation fluences tested, they did not ignite or explode even where they were blasted off their stands, dented by impacts, or otherwise damaged. The metal containers easily protected the contents from the brief flash of thermal radiation, while the blast wave, arriving some time later, failed to cause ignition. Individual leaves cast shadows on wooden poles at Hiroshima, proving that even very thin materials stopped an intense thermal radiation flash. No mention, let alone analysis, of any of this solid nuclear weapons effects evidence is made by any of the "nuclear winter" doom mongers, who falsely assume that somehow everything will ignite and then undergo sustained burning like a dry newspaper in a direct line of sight of the fireball.


Bridgman on page 460 explains that:

"These fires will be set by the thermal flash of thousands of separate nuclear bursts. However, the bulk of the burning and smoke generation will occur hours after the nuclear fireballs have risen to their ultimate altitudes. This the smoke, like the smoke from any fire, should remain in the troposphere. This should be the case even if violent fire storms were generated [like Hiroshima and Hamburg]. These tropospheric smoke particles would be subject to the same removal mechanisms [as tropospheric fallout], namely rainout. The mean-life of tropospheric particles was given as about 20 days ... recent observations from the Gulf War oil field fires, indicated that the tropopause rose with the top of the smoke cloud preventing stratospheric injection. It was postulated that the stable air resisted descrnding to replace the buoyant air. Furthermore the real smoke particles cooled at night and became negatively bouyant [descending at night]."

Space Effects

Chapter 14 is "Space Effects". Bridgman begins by pointing out that explosions above 100 km altitude occur in a virtual vacuum, so there is no significant local x-ray fireball at the burst altitude (which requires air around the bomb to absorb x-rays), although x-rays going downward will produce an x-ray heated pancake of air at an altitude of around 80 km, centred below the detonation point. (X-rays and neutrons are more penetrating than x-rays of course, and will be mainly absorbed in a layer at an altitude of around 30 km.)

However, although they don't produce local x-ray fireballs, high altitude bursts above 100 km do produce UV (ultraviolet) fireballs around the detonation location! The mechanism for the UV fireball in bursts above 100 km is simple and depends on the bomb casing and debris shock wave, which typically carries around 16% of the explosion energy according to Bridgman (x-rays carry 70%, and the rest is nuclear radiation, including 3.8% in residual beta radiation):

"The debris front sweeps up the thin air that it does encounter, imparting kinetic energy to those air molecules. The energized air molecules in the debris-air collision front emit ultraviolet radiation in the 3 to 6 eV range. Thus UV radiation travels outward ahead of the debris-air collision front, at light speed. The cool air ahead of the front ... absorbs the UV radiation ... which produces an [ionized] UV fireball. ... Recombination between the ionized or dissociated molecules in the UV fireball is very slow due to the low density of the particles at altitudes of 100 km and higher. As a result, the UV fireball has a lifetime of 3 to 15 minutes. During this lifetime both magnetic buoyancy and buoyancy due to the heating of the ionized aur cause the UV fireball to rise, lofting the ionized region hundreds of kilometres upward. ...

"Outside of the UV fireball, especially below it, some UV radiation will be absorbed by the air, heating that air without achieving ionization. This heated neutral air will also rise as it expands."

The expanding ionized UV fireball acts as a diamagnetic cavity or bubble, excluding the earth's magnetic field, so that the earth's magnetic "field lines" are compressed outside the bubble. This causes a magneto-hydrodynamic (MHD) shock wave, which propagates the slow MHD-EMP. Even when the actual expansion halts, the buoyant rise of the ionized bubble through the magnetic field produces another MHD-EMP effect from the motion of the ionized charge in the bubble (electrons quickly escape, leaving a net positive charge of slower moving ions in the bubble). KINGFISH (410 kt at 95 km altitude on 1 November 1962) is used by Bridgman to illustrate the UV fireball and the downward beta and ion "kinetic energy patch" or streamer, which follows the direction of the earth's magnetic field lines (the charged particles spiral around the earth's magnetic field vector).

Bridgman adds that the local UV fireball diminishes at very great altitudes and may not be formed above 500 km (it was trivial in the STARFISH test at 400 km altitude). In such extremely high altitude bursts, the only local light source is the bomb debris itself. The bomb debris and any accompanying re-entry vehicle mass (after it cools by emitting most of its energy as x-rays) is an expanding shell which is assumed to carry 16% of the total explosion energy as kinetic energy, E = (1/2)Mv^2, implying a bomb debris velocity of 1,640 km/s for a 1 Mt weapon with a mass of 500 kg. This is of the same order of magnitude as the measured STARFISH debris velocity. Bridgman points out that this debris kinetic energy can produce large forces when striking nearby space satellites or re-entry vehicles.
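The quoted debris velocity is easy to check from the stated assumptions (16% of a 1 Mt yield carried as kinetic energy by 500 kg of debris):

```python
import math

# Rough check of the bomb-debris velocity quoted above, v = sqrt(2E/M),
# with E taken as 16% of a 1 Mt yield and M = 500 kg as stated in the text.
W_joules = 1.0e6 * 4.184e9      # 1 Mt = 10^6 tons of TNT at 4.184e9 J per ton
E_debris = 0.16 * W_joules      # debris kinetic energy, J
M_debris = 500.0                # debris mass, kg
v = math.sqrt(2.0 * E_debris / M_debris)
print(f"debris velocity ~ {v/1e3:.0f} km/s")   # ~1,640 km/s to rounding, as stated
```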

On page 471, Bridgman gives a neat explanation of the Argus "magnetic reflection" effect of trapped electron shells. Electrons spiral around the earth's curved magnetic field vectors between conjugate points at 100-200 km altitude in each hemisphere, being "reflected" back at each conjugate point. How does the reflection process work? Bridgman explains that the conservation of energy applies to the sum of the kinetic energy of the electron's velocity component perpendicular to, and the kinetic energy of its velocity component parallel to, the earth's magnetic field vector or imaginary "line".

Therefore, the sum (1/2)Mv_perpendicular^2 + (1/2)Mv_parallel^2 is a constant. Hence, as the electron approaches the conjugate point where the magnetic field lines converge together, its velocity perpendicular to the lines increases at the expense of its velocity parallel to the lines, due to conservation of energy. So the electron slows down continually in its approach toward the conjugate point as the magnetic field lines converge, but momentum carries it on toward the point at which it would simply stop altogether (and merely circle the magnetic field line), so there is then a force on it which reverses its direction parallel to the field line, and it begins to spiral back around the field line towards the other conjugate point. There the process is repeated, unless the electron happens to be captured by an air molecule in the low density air at 100-200 km. The capture of a sufficient flux of electrons at the conjugate points by air causes auroral effects; this is also the mechanism for the natural "northern lights" and "southern lights" (where cosmic radiation trapped by the earth's magnetic field gradually leaks into the atmosphere at magnetic conjugate points in each hemisphere).
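A standard way to make this bookkeeping explicit, going slightly beyond the energy-conservation wording above by also using the textbook conservation of the electron's magnetic moment (which Bridgman's summary here does not spell out), is sketched below; the 30 degree initial pitch angle is purely an illustrative assumption, not a figure from the book.

```python
import math

# Hedged supplement to the reflection argument above. Two conserved quantities:
#   magnetic moment:  mu = (1/2) m v_perp^2 / B        (approximately conserved)
#   kinetic energy:   (1/2) m v_perp^2 + (1/2) m v_par^2 = constant
# So v_perp^2 grows in proportion to B, and the electron "mirrors" (v_par = 0)
# where B/B0 = 1/sin^2(alpha0), alpha0 being the initial pitch angle.

v = 1.0                       # total speed, arbitrary units (constant)
alpha0 = math.radians(30.0)   # ASSUMED initial pitch angle, for illustration

for B_ratio in (1.0, 2.0, 3.0, 4.0):                  # local B divided by initial B
    v_perp_sq = (v*math.sin(alpha0))**2 * B_ratio     # from mu conservation
    v_par_sq = max(v**2 - v_perp_sq, 0.0)             # from energy conservation
    print(f"B/B0 = {B_ratio:.0f}: v_parallel/v = {math.sqrt(v_par_sq):.2f}")

print(f"mirror point at B/B0 = {1.0/math.sin(alpha0)**2:.1f}")
```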

In addition to simply bouncing north-south between conjugate points, the trapped electrons drift eastwards (in the same direction as the earth's rotation, but much faster than earth's rotation) and rapidly form a trapped shell of electrons surrounding the planet. Bridgman explains that the eastward drift is similar in mechanism to the reflection effect (in other words, you resolve the electron motion in two perpendicular directions and apply conservation of energy to the sum of these two kinetic energy components), but instead of the mechanism being the convergence of magnetic field lines near the pole, the mechanism is the vertical decrease in earth's magnetic field strength with increasing altitude above the earth.

Bridgman then discusses the effect of electron belts on communications and radar. In the natural atmosphere, there is an electrically conductive "ionosphere" caused by solar and cosmic radiation at altitudes above 60 km. The "D" and "E" layers typically contain about 10 times as many electrons per unit volume in the daytime as at night, owing to the absence of solar-radiation-produced ionization at night, when many electrons can recombine with ions. The lowest or "D" layer is at around 80 km and contains around 10^10 electrons/m^3; the "E" layer is around 100 km up and contains around 2 x 10^11 electrons/m^3 in the daytime, while the "F" layer is at 250-500 km up and contains 10^12 electrons/m^3 in the daytime. Because of these free electrons, the layers are electrically conductive and can thus reflect radio waves like a metal plate (or like visible light reflecting off a mirror), but less effectively, because the electron density and thus the conductivity are much smaller.

LF radio waves are reflected back to earth by the lowest or "D" layer; MF is reflected back by the "E" layer, but HF radio waves penetrate both of those layers (albeit with some refraction) and are only finally reflected back to earth by the "F" layer. At frequencies above 30 MHz, an increasing fraction of the radio waves are able to penetrate through all the layers and escape into outer space.
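A hedged supplement, not taken from Bridgman's text: the standard ionospheric "critical frequency" relation, f_c ~ 9*sqrt(N_e) Hz with N_e in electrons per cubic metre, connects the layer densities quoted above to this reflection behaviour, and shows why signals well above 30 MHz escape the natural ionosphere.

```python
import math

# Standard vertical-incidence critical frequency, f_c ~ 8.98*sqrt(N_e) Hz.
# Layer densities are the daytime values quoted in the text above.
for layer, N_e in [("D", 1e10), ("E", 2e11), ("F", 1e12)]:
    f_c = 8.98 * math.sqrt(N_e)     # Hz
    print(f"{layer} layer: N_e = {N_e:.0e} /m^3 -> critical frequency ~ {f_c/1e6:.1f} MHz")
# Output: ~0.9 MHz (D), ~4 MHz (E), ~9 MHz (F), consistent with LF, MF and HF
# being reflected by successively higher layers while VHF/UHF pass through.
```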

The patches of ionization and the electron shells produced by a high altitude nuclear explosion are in effect additional or enhanced ionospheres. If the electron densities are pumped very high, even VHF and UHF signals (which are not normally affected by the natural ionosphere) can be stopped or seriously attenuated by the electron shells, which can degrade communications such as satellite links that pass through the ionosphere (although you can easily increase the up-link power from an earth based transmitter to a satellite to overcome attenuation, the transmission power from the satellite is limited by its small power supply, so if there is a large attenuation in signal strength it may not be possible to receive a down-link signal from the satellite which exceeds the noise level sufficiently). See also EM-1 chapters here and here.

(This blog post will be updated as time permits; I intend to briefly review the civil defence related effects physics in each chapter. It would be a good idea if the effects material were published as a revised and updated replacement of the traditional unclassified Glasstone book.)


Embedded documents: Capabilities of Nuclear Weapons, Part I; Capabilities of Nuclear Weapons, Part II; DCPA Attack Environment Manual.

Wednesday, November 05, 2008

EMP effects from surface bursts, tower bursts, and free air bursts (not high altitude bursts)



Above: nuclear lightning observed in film of the 10 megaton H-bomb test, Ivy-Mike, Elugelab Island, Eniwetok Atoll, 1 November 1952 (click on photos for larger view). The nuclear lightning was clearly visible at times of 3-75 milliseconds after burst. (Images are taken from the excellent quality Atomic Energy Commission film, "Photography of Nuclear Detonations", embedded below.) The nearest lightning bolts (between the sea water around the island and the non-thunderstorm scud cloud) are both 925 metres from ground zero, and other lightning flashes are at 1,100, 1,280 and 1,380 metres from ground zero. The best estimate, by J. D. Colvin et al., "An empirical study of the nuclear explosion-induced lightning seen on Ivy-Mike", Journal of Geophysical Research, vol. 92, 1987, p. 5696, is that each lightning bolt carried between 150 and 250 kA. The lightning bolts curve to follow constant radii around ground zero, corresponding to equal intensities of air conductivity and EMP Compton current.






EMP ("radioflash") is also emit by conventional chemical explosives, due to the charge separation: exploding TNT ionizes some of the product molecules at a temperature of thousands of degrees C, thereby propelling some free electrons outwards faster than the heaver ions, which causes a charge separation, and thus an EMP emission, just like radio emission from electric charge moving in an antenna (in cases where there is asymmetry caused by the ground or other absorber on one side of the explosion). Chemical explosive EMP was first reported in 1954 in Nature v173, p77. The peak electric field strength falls off by the reciprocal of the cube of distance near the detonation, but only inversely with distance far away. Extensive EMP measurements were reported for TNT explosions by Dr Victor A. J. van Lint, in IEEE Transactions on Nuclear Science, volume NS-29, 1982, pp. 1844-9. He showed that chemical explosion surface burst EMP is vertically-polarized and first peaks in the negative direction (i.e. due to free electrons moving upwards, or "conventional current" moving downwards) at 8 milliseconds after detonation. The average first peak electric field strength for 46 kg of TNT ranged from -389 v/m at 35 metres distance to -5.20 v/m at 140 metres.

In a chemical explosion, EMP creation is limited to the hot fireball region where air is ionized by the heat. But in a nuclear explosion, the Compton effect produces an EMP far more effectively, with gamma rays knocking electrons off air molecules in the forward direction, even well outside the hot fireball.


Above: test firing controller Dr Herbert Grier of E.G. & G. at Operation PLUMBBOB in 1957, when EMP was well known (which is why - if you click on the photo for an up-close view of the Nevada nuclear test control console - you can see that it is actually very ruggedly constructed to deliberately survive EMP). During the count-down, the Nevada Test Site main power supply technicians were deliberately warned over loudspeakers just before detonation: 'Stand by to reset circuit breakers'. E.G. & G. were responsible for all American electronics at atmospheric tests in both the Pacific and Nevada. They set up the firing circuits for the bombs, laid the cables to the bomb, set up circuits linked to the firing circuit so that high-speed cameras would be turned on at the right time to film the fireball, did the count-down and 'pressed the button' (or rather, didn't press the stop button on the automatic sequence timer). For tower shots and surface bursts, EMP surges of thousands of amperes induced near the detonation were conducted in the cables back to the control point, ruining equipment and escaping by cable cross-talk (mutual inductance due to magnetic fields in the insulator between parallel unconnected cables!) into other circuits, such as the telephone system, which had to be switched over to diesel generator power at shot time to isolate it from damage. EMP fused cable conductors together, arced over porcelain insulators and lightning surge protectors, welded the contacts on relays together, permanently pegged meter dials over to full scale, and burned out other electronic components. E.G. & G. kept the EMP data secret and did not even tell the U.S. Department of Defense, which was merely measuring long-distance radiated EMP for weapons diagnostic purposes and for detecting foreign atmospheric nuclear tests. This is why close-in (source region) EMP cable pick-up and coupling damage was ignored until 30 April 1961, when B. J. Stralser of E.G. & G. wrote a Secret - Restricted Data report on all the EMP damage from the 1950s tests, Electromagnetic effects from nuclear tests, which we will discuss in detail together with Russian and British EMP effects reports from 1959, and French EMP effects reports from their first and fourth nuclear tests in the Sahara desert.

‘The objective of Mike Shot was to test, by actual detonation, the theory of design for a thermonuclear reaction on a large scale, the results of which test could be used to design, test, and produce stockpile thermonuclear weapons... Quantitative measurements of the gross explosion-induced electromagnetic signal were made possible by first displaying portions of that signal on the faces of cathode-ray tubes. The results of these efforts were excellent... On Mike Shot the early electromagnetic signal was displayed in sufficient detail to allow a rough measurement of the time delay between primary and secondary fission reactions.’

Stanley W. Burriss, Operation Ivy, Report of Commander, Task Group 132.1, Los Alamos Scientific Laboratory, weapon test report WT-608, 1953, Secret – Restricted Data, pp. 7-13.



Above: the dramatic visible EMP-related lightning bolts induced by the 10.4 Mt Ivy-Mike detonation around the fireball, Eniwetok Atoll, 1 November 1952. The nuclear lightning flashes at about 1.4 km from ground zero, around the Mike fireball, were visible in the film from 4-75 milliseconds after burst. Castle shots in 1954 produced similar effects. (Reference: M. A. Uman, et al., 'Lightning induced by thermonuclear detonations', Journal of Geophysical Research, vol. 77, p. 1591, 1972. For more up to date theoretical interpretation see: R. L. Gardner, et al., 'A physical model of nuclear lightning', Phys. Fluids, vol. 27, issue 11, p. 2694, 1984; R. F. Fernsler, Analytical model of nuclear lightning, NRL Memorandum Report 5525, 1985; and E. R. Williams, et al., 'The role of electric space charge in nuclear lightning', J. Geophys. Res., vol. 93, 1988, pp. 1679–1688. The latest research suggests that the nuclear lightning bolts around the Mike fireball carried vertical currents of 100,000-1,000,000 Amperes.) The mechanism of nuclear lightning was predicted by the physicist Enrico Fermi (who developed the original theory of beta decay, and also built the first graphite moderated nuclear reactor - water moderated nuclear reactors have of course occurred naturally long ago in uranium ore seams at Gabon in Africa) in 1945, as reported by Robert R. Wilson in his ‘Summary of Nuclear Physics Measurements’ (in K.T. Bainbridge, editor, Trinity, Los Alamos report LA-1012, 1946; declassified and released as LA-6300-H, p. 53, in 1976):

‘... the gamma rays from the reaction will ionise the air... Fermi has calculated that the ensuing removal of the natural electrical potential gradient in the atmosphere will be equivalent to a large bolt of lightning striking that vicinity ... All signal lines were completely shielded, in many cases doubly shielded. In spite of this many records were lost because of spurious pickup at the time of the explosion that paralysed the recording equipment.’

The earth has a natural vertical potential (electric field) between ground and ionosphere; the ionization of the air by bomb radiation suddenly makes the air conductive, shorting out the natural electric field and thereby inducing lightning discharges to flow vertically through the relatively conductive air.

* Link to PDF download of DNA-EM-1 “Capabilities of Nuclear Weapons” Chapter 7 “Electromagnetic Pulse (EMP) Phenomena” (40 pages, 1.3 MB)

‘The generation of EMP from a nuclear detonation was predicted even before the initial tests, but the extent and potentially serious degree of EMP effects were not realised for many years. Attention slowly began to focus on EMP as a probable cause of malfunctions of electronic equipment during the early 1950s. Induced currents and voltages caused unexpected equipment failures during nuclear tests, and subsequent analysis disclosed the role of EMP in such failures.’ - Philip J. Dolan, Capabilities of Nuclear Weapons, DNA-EM-1, p. 7-1, 1978 (change 1).


Above: close-in EMP data for 100 kt and 1 Mt surface bursts on soils of various electrical conductivities. The plots show the relationship between the EMP fields (as well as air conductivity) and the peak blast wave overpressure at the corresponding distance.

Philip J. Dolan's Capabilities of Nuclear Weapons DNA-EM-1 has been discussed in previous posts on this blog, e.g. here (history of the publication) and here.

Chapter 7: Electromagnetic Pulse Phenomena (PDF download), 40 pages

This vital chapter has some graphs for high altitude bursts deleted (they are now available from another document), but it includes a vital set of surface burst EMP data in figures 7-25 to 7-35, showing the peak air conductivity, magnetic field, and radial as well as transverse electric field strengths as a function of peak overpressure for 100 kt and 1 Mt, as well as the waveforms for four locations and the frequency spectra derived from the waveforms using Fourier analysis. The graph showing the radial Compton current in a surface burst has been deleted from the chapter, but it can be seen in another openly available report on surface burst EMP physics: Fig. 3-2 on page 34 of Conrad L. Longmire and James L. Gilbert, Theory of EMP Coupling in the Source Region, Defense Nuclear Agency, report DNA 5687F, DTIC document reference ADA108751. Fig. 3-3 on page 37 of that report gives air conductivity; this data is only partially deleted from DNA-EM-1 Chapter 7, since although one graph of air conductivity versus time at 500 m distance is deleted, another set of curves giving air conductivity versus time for four separate distances (corresponding to various peak air blast overpressures, including 500 m for the highest intensity) is available.

It should be emphasised that this data on the close-in or 'source region' EMP in surface bursts is vital for civil defence, because the EMP damage in such bursts doesn't occur due to radiated EMP but instead occurs due to the coupling of the strong short-range EMP fields into metallic conductors like electric cables, pipes, railroad tracks, etc., near ground zero, which then carry an electric current surge outward at the velocity of light for the insulator. The EMP damage in a surface burst occurs because many thousands of amperes of EMP electric current are induced in such conductors and carried out to great distances from the burst point with little attenuation, getting distributed throughout the electric power grid out to tens or hundreds of miles away, where the surge damages unprotected electrical and electronic equipment. The radiated EMP signal from a surface burst is usually too weak to cause much permanent damage to equipment (apart from the case of very tall vertical antennas such as radio transmitter masts). Instead, the serious threat is the electric current pulse induced in cables by the close-in radial electric field of a surface burst, which is piped out by long cables stemming from the source region (these carry the EMP away at light speed, long before they are damaged by the slower moving ground shock, blast wave and cratering action). This is the cause of the devastating long-distance EMP problem in power networks and communication lines far away from a surface burst.

This has been well demonstrated at nuclear tests where the bomb was detonated by cable control, with the cables carrying back EMP as an electric power surge to the control point and damaging the control panel and its power network equipment. This effect was first publicly documented by Bernard O'Keefe of E.G. & G. - Edgerton, Germeshausen and Grier - in his 1983 book Nuclear Hostages, for the three 1948 Sandstone tests which were cable controlled at Eniwetok. Free air bursts like Crossroads-Able and many early tests in Nevada in 1951 did not cause this effect, because the bombs did not have any cables nearby to pipe out an electric signal. The Crossroads-Baker underwater test was set off by radio signal to the ship above the bomb (which was soon blasted to pieces by the shock wave anyway), preventing any direct cable connection between the control ship and the bomb itself, so no EMP damage problems were reported there.

In the British underwater Hurricane test of 1952, there were EMP damage problems because of the use of cables and radio signals from the ship carrying the bomb to recording stations. British nuclear test scientist N. F. Moody set up an experiment involving electric cables running from the Hurricane nuclear bomb ship (HMS Plym) at Monte Bello, Australia, designed to carry gamma radiation dose rate data for Hurricane to a magnetic tape recorder at a safe distance from the blast effect, to measure the bomb’s nuclear reaction acceleration rate on a nanosecond time scale. But immense EMP energy carried by those cables burned out the instruments, leading to extensive British research into EMP. By 1957, at the British nuclear tests Operation Antler in Australia, the gamma ray spectrometer (to determine the spectrum of the initial gamma radiation flash) was specially protected against EMP interference by using an electric power supply sealed in a steel locker, with all the electric cables running through sealed metal pipes to the instrument.

‘It was necessary to place most of the [1945 Trinity nuclear test measurement] equipment in a position where it had to withstand the heat and shock wave from the bomb, or alternatively to send its data to a distant recording station before it was destroyed. We can understand the difficulty of transmitting signals during the explosion when we consider that the gamma rays from the reaction will ionise the air... Fermi has calculated that the ensuing removal of the natural electrical potential gradient in the atmosphere will be equivalent to a large bolt of lightning striking that vicinity [this is precisely what was actually photographed around the fireball of the Mike 10.4 Mt thermonuclear test in 1952, see top of this blog post for the photograph and literature references for nuclear lightning] ... All signal lines were completely shielded, in many cases doubly shielded. In spite of this many records were lost because of spurious pickup at the time of the explosion that paralysed the recording equipment.’

Robert R. Wilson, ‘Summary of Nuclear Physics Measurements’, in K.T. Bainbridge, editor, Trinity, Los Alamos report LA-1012, 1946 (it was only declassified and released as LA-6300-H, p. 53, in the year 1976).

‘[At the Sandstone-X Ray 37 kt nuclear test on April 15, 1948, from a 200 ft tower on Enjebi Island at Eniwetok Atoll] we had to watch the control panel [in the control room 30 km away] ... lights flashed crazily on and off and meters bent their needles against their stop posts from the force of the electromagnetic pulse travelling down the submerged cables with the speed of light... one of our engineers, halfway around the world in Boston... was able to detect [the radiated EMP or radio-flash] with a makeshift antenna and an oscilloscope, the world’s first remote detection measurement.’

Bernard J. O’Keefe (President of Edgerton, Germeshausen and Grier, Inc.), Nuclear Hostages, Houghton Mifflin, Boston, 1983.

On 30 April 1961, B.J. Stralser’s report Electromagnetic effects from nuclear tests, Edgerton, Germeshausen and Grier, Inc. (classified Secret – Restricted Data), became the first official American secret report summarising the physical damage due to EMP to the power distribution system, telephone system, and test control equipment at the Nevada test site from small surface and near surface (tower) bursts:

1. The radial electric field of the EMP induced electric currents of thousands of amps in bomb electrical cables at 800 m from ground zero, breaking down cable insulation, fusing multicore conductors together, and actually melting the protective lead sheathing surrounding “hardened” cables.

2. EMP opened the circuit breakers at the Nevada test site’s power supply, 50 km from ground zero. Order to technicians at the main power supply, before tests: “Stand by to reset circuit breakers.”

3. Instrument stations had to use power from internal batteries or nearby diesel generators, to avoid EMP pick-up and distribution to equipment over long power cables.

4. In the test control room: fuses were blown, meters overloaded with bent needles, a carbon block lightning protector was permanently shorted to ground, with current arcing over porcelain cut-outs.

5. EMP currents fused the contacts and melted off the pins on electromagnetic relays.

6. The Nevada test site telephone system had to be switched to diesel generator power during tests.

7. Radar oscilloscopes showed the induced transient EMP effect as a “ball of yarn” and “bloom”.

After the EMP damage effects to electronic piezo-electric blast gauge chart recorders at the first ever nuclear test, Trinity, on 16 July 1945, and the EMP damage to the control console dials at the Sandstone tests in 1948, the next serious EMP problems apparently occurred with the 1.2 kt Jangle-Sugar test of 1951, which was the first ever cable-controlled test at the Nevada Test Site (after a series of free air burst bombs dropped from aircraft, the detonations of which were controlled by timer/radar sensors instead of by wired cable control).

The bomb control cables from the Sugar test explosion were apparently fused together by over 1,000 Amperes at 0.5 mile distance, and the electric EMP power surge in the cable caused a lot of damage at the control point 30 miles away, arcing over porcelain cutouts, fusing the contacts of relays together, driving meters off-scale, and apparently escaping by cable cross-talk into other circuits, including the power grid, tripping distant circuit breakers in Las Vegas some 90 miles from the burst. As a result, all further cable-controlled tests at Nevada had to take the precautions of switching off mains power at the Nevada control point at shot time and running the telephone system and other equipment off diesel generators, to prevent the EMP power surge escaping into cables connected to the national power grid.

Technicians were also warned over the loudspeaker during the countdown to 'Stand by to reset circuit breakers' after the EMP at shot time. Nevada EMP facts are documented in a 'Secret - Restricted Data' report dated 30 April 1961 by B. J. Stralser of E.G. & G. (Edgerton, Germeshausen and Grier) - which was responsible for doing the countdowns and firing systems at American nuclear tests - called 'Electromagnetic Effects from Nuclear Tests'. E.G. & G. were famous for high-speed photography and its associated electronic timing circuits, and it was in this connection that the company was recruited by the Manhattan Project to develop high-speed filming techniques for nuclear tests; the cameras of course had to be set off by an electric signal linked to the bomb activation mechanism, and this is the reason why the company ended up in charge of the timing and firing side of the bomb.

In his book Nuclear Hostages, O'Keefe (head of E.G.G. in 1983) explains how he wired up the Nagasaki bomb's implosion system on Tinian Island, changing an incorrect cable connector with a soldering iron on the assembled bomb so it could be dropped on schedule. At the Nevada test site, the control signal to the bomb in cables was also used to set off high speed cameras and other instrumentation electronics, so E.G. & G. ended up expert in the experimental study of EMP damage by cross-talk between parallel cables and different adjacent circuits. It is clear that there was during the 1950s a problem in getting this secret EMP damage data away from E.G. & G. - who viewed it as a technical nuisance - to the people interested in the damaging effects of nuclear explosions.

Most of the interest in EMP in 1951 by the military was in the use of the radiated radioflash EMP - the well known click on radio receivers when a nuclear bomb flash goes off - as a convenient electronic means to remotely detect and identify a nuclear explosion, which has nothing to do with the damaging effects of EMP piped out of the source region by conductors like cables.

Even as late as 1957, only a very brief single-paragraph discussion of EMP pick-up effects from low altitude and surface bursts occurs in the November 1957 edition of the Confidential (classified) U.S. Department of Defense, Armed Forces Special Weapons Project manual TM 23-200, Capabilities of Atomic Weapons, section 12, “Miscellaneous Radiation Damage Criteria”, page 12-2, paragraph 12.2c:

“Electromagnetic Radiation. A large electrical signal is produced by a nuclear weapon detonation. The signal consists of a rather sharp transient signal with a strong frequency component in the neighborhood of 15 kilocycles. Field strengths greater than 1 volt per metre have been detected from megaton yield weapons at a distance of about 2,000 miles. Electronic equipment which responds to rapid, short duration transients can be expected to be actuated by pickup of this electrical noise.”

Notice that they are completely ignoring the source region cable EMP pick-up problem that E. G. and G. had identified with nuclear tests since 1948 in the Pacific (Operation Sandstone cable-controlled tower bursts) and 1951 (Nevada cable-controlled Sugar shot, etc.), and just commenting on the long-distance radiated EMP as an 'electrical noise' problem! The source-region radial EMP in a surface burst or near surface burst is on the order of 100,000 v/m, and it is the pick-up of this EMP which induces massive currents in cables that then disperse it outside the source region. The radiated EMP outside the source region is weak so it has nothing to do with the damage problem in low altitude bursts! So EMP information was stuck with the people dealing with EMP damage in cables, and it wasn't even getting into the classified manual!

The major reference on the physics of cable pick-up from the source-region which is cited by Dolan's DNA-EM-1 secret manual is Dr Conrad L. Longmire's report Ground Fields and Cable Currents Produced by Electromagnetic Pulse from a Surface Nuclear Burst, DASA 1913, DASIAC SR-54, Defense Atomic Support Agency, March 1968. It is clear that the partitioning of secret departments in the 1950s was responsible for nuclear test data on EMP damage not being widely recognised as a civil defence and also a military problem until the 1960s when substantial funds were allocated to do serious research into EMP mechanisms for damaging effects.

One of the major problems in generalizing EMP pickup into conductors from the source region is that the EMP coupling into cables depends on the ground permittivity (dielectric constant) and the conductivity of the ground, which both depend upon the EMP wave frequency and, very strongly, on the moisture and salt content of the soil, a problem first analysed fully by Smith and Longmire in the October 1975 report A Universal Impedance for Soils, DNA 3788T. Longmire has also written a brief and simple account of another EMP problem, the System Generated EMP, SGEMP or 'Direct Interaction' EMP, caused by nuclear radiation striking electrical and electronic systems and inducing EMP pulses directly, without the mediation of EMP fields: Direct Interaction Effects in EMP, DNA 3249T, 1974.

THE RADIATED EMP AT GREAT DISTANCES FROM AN AIR BURST AND FROM A SURFACE BURST

FIRST OPEN RUSSIAN PUBLICATION ABOUT NUCLEAR BOMB EMP IN 1958

The declassification of the existence of radiated EMP/radioflash for the 1962 edition of The Effects of Nuclear Weapons (the first edition to mention it) was finally triggered by Russia! It was the fact that Russia was concerned with EMP damage that forced America to start taking the threat seriously and to do detailed investigations, getting E.G. & G. to write the report on EMP damage in nuclear tests.

In December 1958, Russian scientist A.S. Kompaneets published openly a theory of “Radio Emission from a Nuclear Explosion” (Zh. Eksperim. I Teor. Fiz., Vol. 35, pp. 1538-42), which was later discredited by Dr Victor Gilinsky of the RAND Corporation in California, because Kompaneets actually ignored the Compton current (which is the essential mechanism!) and only calculated the effect of the late ionic current in air (which is insignificant and of positive polarity), so that Kompaneets’ predicted EMP waveform misses out the massive fast-rising negative electric field due to the Compton current, and only features the small, delayed, positive electric field due to the ionic current!

Russian data on EMP had come not from measuring the EMP by photographing the pulse on oscilloscope screens, as in American and British work, but from measuring the distance sparks would jump over spark gaps, and from assessing the burn-out of electronic equipment. So Russian work was concerned with directly measuring the end effects of induced EMP current pulses, not with sophisticated measurements of the free-field EMP waveforms radiated in the air by the explosions. The stimulus of the Russian article in December 1958 coincided with the first secret British-American exchange of EMP data that same month (although the English translation of Kompaneets' paper was not published until June 1959, in J. Exptl. Theoret. Phys. (Soviet Physics JETP), volume 35, No. 6, June 1959, page 1076), which paved the way for the Minuteman missile system to be protected against EMP in 1960, the very first American military system designed to withstand EMP!

Although close in EMP damage and distant ‘radio-flash’ (clicks on radio sets) were experienced at the Trinity nuclear test in 1945 and the Sandstone tests in 1948, regular measurements of the EMP waveforms from nuclear tests only began in 1951 at Operation Buster-Jangle in the Nevada desert. M.H. Oleson was in charge of Project 7.1, ‘Electromagnetic Effects from Atomic Explosions’ which was maintained throughout Operations Tumble-Snapper (report WT-537, 1953), Upshot-Knothole (report WT-762, 1955), Ivy (report WT-644, 1958), and Castle (report WT-930, 1958).

Oleson's measurements at 20 km from ground zero in surface and near-surface bursts of yields ranging from 1-20 kt gave vertically polarised electric fields peaking within 1 microsecond at 100-300 v/m in the negative direction. He found that at large distances of 1500 km the direct pulse was distorted and extended by a factor of 10, due to multiple ‘sky wave’ reflections back from the Earth’s conductive ionosphere (80 km altitude). During Operation Castle in 1954, for example, 17 oscilloscope stations measured 74 sets of EMP data, at distances ranging from 23 km to 12,000 km. Vertical aerials 2 metres high for close-in stations and 10 metres high for distant stations were used, with cameras fixed to photograph the screens of oscilloscopes (an ingenious electronic circuit system was used to prevent over-exposure of the film, keeping the electron beam trace off the oscilloscope screen until the last moment before the detonation!). These waveforms showed the internal dynamics of the weapons (the magnitude of the EMP showed the fission yield, while the time delay in the EMP rise steps showed the delay between the fission ‘primary’ and the fusion ‘secondary’ ignitions occurring inside the weapon).



Above: air burst EMP due to vertical asymmetry (air density falling with increasing altitude), where the upward Compton current is stronger than the downward Compton current because the gamma rays causing it penetrate further in the low density air. This illustration for radiated EMP within 100 km of air bursts (before distortion and increased duration occurs at great distances) is from Dolan's DNA-EM-1, Capabilities of Nuclear Weapons, 1978 update, chapter 7. The approximate numbers stated for peak fields are for typical bomb yields ranging from 1 kt to 10 Mt and burst heights from sea level to 30 km altitude. The smaller sizes and fields correspond to the lower yields and burst altitudes, and the bigger sizes correspond to the greater yields and burst altitudes. E.g., ~30 v/m is for ~3 km horizontally from a 1 kt sea level air burst, and ~300 v/m is for ~14 km horizontally from a 10 Mt air burst at 30 km altitude. (See Glasstone and Dolan, pages 517-8 and 534-5.)



Above: vertically polarized EMP waveform due to the Compton current asymmetry from the vertical air density gradient, from an unspecified approximately 1 kt so-called 'air burst' test at about 900 m altitude, at seven different distances (44.6 km to 4,828 km) from ground zero. Actually, the downward prompt gamma radiation shell, moving at light velocity (300 million metres per second, or 300 metres per microsecond), hit the ground 900 metres below the bomb after just 3 microseconds; so from 3 microseconds onwards, the radiated EMP phenomena approximated those from a ground surface burst, which is much stronger than the EMP radiated due to air density asymmetry in a true air burst which doesn't interact with the earth's surface. As the distance from ground zero increases, ionospheric reflection phenomena greatly lengthen (stretch out) the EMP waveform, increasing the effective rise time to maximum field strength and therefore reducing the maximum frequency of the EMP.



Above: illustration of the EMP measured from the Chinese 200 kt shot of 8 May 1966, from an interesting Norwegian Defense EMP compendium (which also includes other measured EMP waveforms from Chinese air bursts) by Karl-Ludvig Grønhaug (who has other EMP reports linked from his page here). It shows the electric-dipole EMP waveform due to vertical asymmetry from a typical air burst after it has been distorted and greatly extended in rise time and duration by ducting between the ionosphere and the ground over a distance of 4,700 km. The peak EMP fields measured at 4,700 km from four Chinese detonations were:

8 May 1966: 200 kt air drop bomb gave E = 40 millivolts/metre at 4,700 km
27 October 1966: 20 kt missile test gave E = 40 millivolts/metre at 4,700 km
27 December 1966: 300 kt tower shot gave E = 140 millivolts/metre at 4,700 km
27 December 1968: 3 Mt air drop bomb gave E = 22 millivolts/metre at 4,700 km


To calculate the electric-dipole radiated EMP theoretically, you need to work out the net vertical electric current variation as a function of time (allowing for the increase in air conductivity due to secondary electrons knocked off atoms by the primary Compton electrons, and the reversed conduction current opposing the Compton currents that results from the secondary electrons), and this net varying current or acceleration of charge allows you to work out the radiated EMP using Maxwell's equations.

It isn't hard to understand what is occurring if you think intuitively about the physics involved. The major contribution to the net Compton current during the first few microseconds, which are the most important, is the prompt gamma ray shell going outward at light velocity, 300 metres per microsecond. So we concentrate on that pulse for now and ignore the later (less intense) gamma ray emission. The total radial outward Compton current is then simply the prompt gamma ray emission multiplied by (a) the proportion of Compton scatterings which result in net outward electron motion, and (b) the fraction of the prompt gamma rays which have undergone Compton scattering up to time t.

The mean free path for typical 2 MeV prompt gamma rays is 174 m in sea level air (greater than this in less dense air, scaling as the inverse of air density), so the fraction of prompt gamma rays that have undergone Compton scattering when the radiation front (moving at velocity c = 300 metres/microsecond) is at radius R = ct metres is simply f = 1 - e^(-R/174) = 1 - e^(-ct/174) = 1 - e^(-300t/174) = 1 - e^(-1.72t) for t in microseconds and for sea level air. (Remember that the mean free path of 174 metres has to be altered as a function of time, because as the shell goes upwards it enters less dense air, so the mean free path increases; in an air burst the downward shell goes into denser air, so the initial mean free path for that hemisphere gets smaller with time. Also, for an air burst the 174 metres needs to be replaced with the mean free path at the air density of the burst altitude, unless it is a sea level air burst.)
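A minimal numerical illustration of that scattering fraction for a sea level burst (taking the 174 m mean free path and 300 m/microsecond shell speed quoted above at face value, and ignoring the altitude correction just mentioned):

```python
import math

# Fraction of the prompt gamma-ray shell that has Compton scattered by time t
# for a sea-level burst: f = 1 - exp(-c*t/mfp), with the shell at R = c*t.
c_m_per_us = 300.0      # light speed, metres per microsecond
mfp = 174.0             # sea-level mean free path for ~2 MeV gammas, metres
for t_us in (0.1, 0.5, 1.0, 2.0):
    f = 1.0 - math.exp(-c_m_per_us * t_us / mfp)
    print(f"t = {t_us:4.1f} microseconds: fraction scattered = {f:.2f}")
```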

Because this upward Compton current is in the opposite direction to Franklin's definition of conventional electric current (which is the direction that positive charges move, i.e. the opposite direction to the motion of electrons), this initial net 'conventional' electric current in a surface burst is negative (by definition, the opposite direction to the net flow of Compton electrons). We can easily adapt this equation to include the air density variation in the vertical direction, and then for an air burst we need to write two versions of this equation: one for the hemisphere above the bomb and another for the hemisphere below it. By subtracting the latter from the former, we get the net vertical Compton current variation as a function of time in an air burst. Dividing that into the net vertical current in a surface burst gives us the ratio of net vertical Compton currents for air and surface bursts as a function of time after burst.

A later, positive pulse arises from the conduction electrons: the 2 MeV prompt gamma rays produce roughly 1 MeV Compton electrons, which are soon slowed down by collisions with air molecules, knocking off 'secondary electrons'. Since it takes about 34 eV to knock an electron off an air molecule, each 1 MeV Compton electron produces roughly 29,000 secondary electrons, which increase the air's conductivity and cause a 'return current' that opposes (and eventually shorts out) the Compton current, predominating after a few microseconds. Ion-electron plasma oscillations, plus contributions of radiation from neutron scatter gamma rays, decaying fission products, and neutron capture gamma rays, then produce the long 'tail end' of the EMP.
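A quick arithmetic check of that secondary-electron figure (just an illustration of the numbers quoted above):

```python
compton_electron_energy_eV = 1.0e6   # ~1 MeV Compton electron from a 2 MeV gamma ray
energy_per_ionization_eV = 34.0      # approximate energy spent per electron knocked off air
secondaries = compton_electron_energy_eV / energy_per_ionization_eV
print(round(secondaries))            # ~29,000 secondary electrons per Compton electron
```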

The integrated net upward (vector) component of the current in the upper hemisphere of air is exactly one-third of the total radial Compton current in that hemisphere. However, although this looks like a similar situation to a simple vertical dipole antenna in radio transmission theory, it is actually a lot harder than a simple radio transmitter calculation, because much of the net radial current in surface and air bursts exists within a region of ionized, conductive air, which attenuates the radiated EMP to some extent before it can escape from the gamma ray deposition region to large distances. This is why detailed computer calculations are needed to accurately predict the EMP field strengths radiated from air and surface bursts: early theoretical efforts in the 1950s and 1960s usually over-estimated the radiated EMP from such bursts by ignoring this attenuation.

An interesting empirical finding reported in the 1950s Nevada and Pacific investigations on the EMP radiated from surface bursts is that the median frequency of the EMP gets smaller for higher yields: the median frequency measured 20 km away is 41,000/H Hertz (Hz), where H is the effective height of the source region which is behaving as a vertical antenna in a surface burst. For yields of 1 kt to 1 Mt, H increases from about 1.5 to 4 km, so that the median EMP frequency in a surface burst actually falls with increasing yield, from about 30 kHz for 1 kt to about 10 kHz for a 1 Mt detonation.
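Assuming H in that empirical relation is expressed in kilometres (my inference from the quoted end-point values), the numbers check out as follows:

```python
def median_frequency_hz(source_height_km):
    """Empirical median EMP frequency measured 20 km from a surface burst, f = 41,000/H Hz."""
    return 41_000.0 / source_height_km

for label, H_km in (("1 kt (H ~ 1.5 km)", 1.5), ("1 Mt (H ~ 4 km)", 4.0)):
    print(f"{label}: ~{median_frequency_hz(H_km) / 1e3:.0f} kHz")
```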

The surface burst EMP waveform is similar in shape to the air burst waveform, but its first half-cycle is the most intense: basically, the air burst EMP waveform is the surface burst EMP waveform multiplied by a correction factor which increases from 0 to 1 as time progresses. Initially, the air burst radiated EMP is weaker than that from a surface burst, because the vertical asymmetry takes time to build up as the radiation region extends outwards in all directions at 300 metres per microsecond, but at very late times the EMP waveforms for an air burst and a surface burst are identical. So the main difference is that the first half-cycle (the negative initial pulse) of the EMP is strongest in a surface burst, while in an air burst the second (positive) half-cycle is strongest:


Above: surface burst EMP measured at a distance of 320 km from a high-yield Pacific nuclear test. It peaks after 4 microseconds at about -26 v/m in the first (negative) half-cycle. The rise time and duration are much greater in this example than for surface burst EMP measured at 20 km distance: the greater the distance from the surface burst, the longer the rise time to peak intensity, because the EMP waveform gradually loses its higher frequencies as it propagates.

Above: Norwegian computer calculations that attempt to give an idea of the transverse (radiated) EMP waveforms from a 1 kt surface burst at distances of 1 km and 10 km. They exaggerate the predicted intensities, probably because they do not properly include the attenuation of the pulse (from the net vertical currents) by the ionized air through which it must travel before escaping from the conductive deposition region, but at least they do indicate the much briefer rise time of the radiated EMP at distances near the detonation. There are far more comprehensive computer calculations of the surface burst close-in EMP in Chapter 7 of Dolan's DNA-EM-1, Capabilities of Nuclear Weapons.

Above: surface burst radiated EMP description from Chapter 7 of Dolan's DNA-EM-1, Capabilities of Nuclear Weapons.

Above: EMP from the 5.9 kt Hardtack-Holly surface burst (4 metres burst altitude on a barge) at Eniwetok Atoll on 20 May 1958, as recorded 8,000 km away in Los Angeles. Notice that it is completely distorted and grossly extended in duration, with the initial half-cycle now positive instead of negative; this is purely a distance-related distortion effect (the loss of the higher frequencies while the pulse propagated between the ionosphere and the ocean around the Earth) and does not indicate the shape of the EMP waveform nearer the detonation, which peaked in the negative direction at a much earlier time.

FIRST AMERICAN OPEN PUBLICATION ABOUT EMP IN 1959

The first American unclassified (open) article about EMP was published in Nucleonics, volume 17, August 1959, pages 64-73. This was an article by Dr J. Carson Mark of Los Alamos (Director of the Theoretical Division there from 1947-72), entitled 'The Detection of Nuclear Explosions'. Dr Mark points out that the radiated EMP or radioflash can be used to detect nuclear explosions thousands of kilometres away, but he does not mention the damaging effects of EMP.

SECOND RUSSIAN PAPER ON EMP PUBLISHED OPENLY IN 1960

Then in 1960, a second important Russian paper appeared on EMP, by O. I. Liepunskii, 'Possible Magnetic Effects from High-Altitude Explosions of Atomic Bombs', J. Exptl. Theoret. Phys. (Soviet Physics JETP), volume 38, pp. 302-4, January 1960. Liepunskii there pointed out that the hot ionized fireball of a nuclear explosion is electrically conductive and will push out the Earth's magnetic field lines as it expands, producing a weak slow MHD-EMP. However, as with Kompaneets, Liepunskii misses the mechanism for the intense and rapid first pulse of the space burst EMP! Further confusion was added when in 1960 the Physical Review published a paper by physicists at the Aeronutronic Division, Ford Motor Company, and Lawrence Radiation Laboratory on a thermal X-ray mechanism for EMP generation by high altitude bursts:

'The thermal x-rays produced by a nuclear burst in outer space cause polarization currents in the medium which, if distributed anisotropically, will emit electromagnetic radiation. Roughly, a burst of thermal x rays, equivalent in energy to 1 ton of high explosive, produces a detectable 10-Mc/sec signal at a range of 1 km. Since only the ratio of x-ray energy to range enters into the strength of the radiated signal, other ranges follow by adjusting the x-ray energy proportionately. This works up to ~3×10^3 km; beyond this range, dispersive effects begin to reduce the signal received. The power in the electromagnetic signal varies as the square of the electron density, so this effect may provide a sensitive measure of the density of electrons in outer space.' - Montgomery H. Johnson and Bernard A. Lippmann, 'Electromagnetic Signals from Nuclear Explosions in Outer Space', Physical Review, vol. 119, Issue 3, pp. 827-828 (1960).

FIRST OPEN FRENCH PAPERS ON BOTH RADIATED AND RADIAL (IMMENSE EMP CURRENTS INDUCED BY CLOSE-IN CABLES) FROM ITS FIRST AND FOURTH NUCLEAR TESTS IN THE SAHARA, 1960 AND 1961

The peak EMP at the first French low altitude nuclear explosion in the Sahara, Africa, in 1960 (70 kt) was measured in Paris and openly published to be 0.1 v/m. See M. J. Delloue, ‘L'eclair magnetique du test nucleaire du 13 fevrier 1960 à Reggane,’ Compt. Rend., vol. 250 (issue 11), page 2536 (1960).

A second French paper giving nuclear test EMP data was more startling, for it described the successful measurement of the induced cable currents from a nuclear explosion: J. Ferrier and Y. Rocard, ‘Mesure du courant electrique total fourni par une explosion nucleaire’, Compt. Rend., vol. 263, page 2931 (1961).

Ferrieu and Rocard's paper, ‘Measurement of the total electrical current furnished by a nuclear explosion’ (Compt. Rend., vol. 253, 18 December 1961), gives details of an EMP coupling experiment at the fourth French nuclear test, code named Green Gerboise (GERBOISE VERTE), a 1 kt plutonium core tower shot in the Sahara desert at Reggane, Algeria, on 25 April 1961. A network of 250 cables was laid radially outward from around ground zero (under the tower) to several hundred metres over the poorly conducting desert sand, and the collected EMP current was carried by a thick brass cable out to a measuring station at 3 km ground range. The EMP induced in the cables near the explosion (by the radial electric field) was measured to peak some 20 microseconds after detonation at 150,000 Amperes, falling to zero at 150 microseconds after detonation, and then producing a second peak of 56,000 Amperes with opposite polarity to the first. This immense EMP current shows clearly the magnitude of the threat when a network of cables around the explosion can capture a massive amount of current from the radial electric field (due to radial charge separation) within 3 km of a surface burst, carrying the current out to damage equipment far from the source.

HIGH ALTITUDE EMP TEST EFFECTS FROM RUSSIAN AND AMERICAN TESTS IN 1962

Finally in 1962, when America realized just how widespread and potentially devastating the EMP was after Starfish, and when it could detect high altitude Russian explosions investigating the same effects (three tests of 300 kt each at 59-290 km altitudes), President John F. Kennedy announced publicly that America was investing in military electronic systems which cannot be “blacked out, paralysed, or destroyed by the complex effects of a nuclear explosion.” As a result of this heightened interest in EMP damage prevention, a discussion of EMP mechanisms was included in the April 1962 edition of The Effects of Nuclear Weapons, pages 502-506 of Chapter X, Radio and Radar Effects (the 1962 section on EMP is quoted in full, with criticisms, on the previous blog post here).



AMERICA FINALLY CONDUCTS NEVADA NUCLEAR SURFACE BURSTS IN 1962 FOR THE PRIMARY PURPOSE OF MEASURING THE EMP PICK UP BY CABLES WITHIN THE SOURCE REGION

Following on from the reported EMP pick up at the fourth French test in the Sahara, three Nevada surface bursts in 1962 attempted to document EMP ground fields and cable currents, to varying degrees of success (there were many instrument problems).

On 7 July 1962 the 0.022 kt plutonium bomb test fired 3 feet above the ground in Nevada, Little Feller II, was instrumented to determine EMP induced damage effects (rather than merely the waveform for weapons diagnostics or the detection and location of nuclear tests or bomb attacks) for the first time in American testing history (although in 1957 Harry Diamond Laboratories had measured the magnetic field component of EMP from Operation Plumbbob in Nevada to assess whether EMP would set off magnetic mines, they were not concerned with EMP damage to electronics). It was a standard U.S. Army tactical 'Davy Crockett' miniature nuclear bomb. An electric cable buried at a depth of 30 cm ran radially outwards from 15 metres from ground zero, and the induced EMP current pulse in the cable was measured at various distances by digital meters which saved their data on protected magnetic tape recorders. This experiment was repeated at the 0.5 kt Johnie Boy U-235 bomb test on 11 July 1962, which was detonated 58 cm underground. On 14 July 1962, the 1.65 kt plutonium bomb test Small Boy, detonated 10 feet above ground, was instrumented to document a complete set of EMP waveforms for the radial and transverse electric fields, the azimuthal magnetic field, and the air conductivity variation with time at distances of 190 to 3,000 metres from ground zero.

(References: V.E. Bryson, et al., "Weapons Effects Testing, EM Pulse, Project 6.1", Boeing Company, Operation Dominic II, weapon test report WT-2226, June 1963, Secret - Restricted Data. Paul A. Caldwell, et al., "Magnetic Loop Measurements, Project 6.2", Harry Diamond Laboratories, Operation Dominic II, weapon test report WT-2227, February 1965, Secret - Restricted Data. R.W. Frame, "Electromagnetic Pulse Current Transients, Project 6.5", Sandia Corporation, Operation Dominic II, weapon test report WT-2230, October 1963. D.B. Dinger, "Response of Electrical Power Systems to Electromagnetic Effects of Nuclear Detonations, Project 7.5", U.S. Army Engineer Research and Development Laboratories, Operation Dominic II, weapon test report WT-2241, June 1963.)

According to the 'DTRA Factsheet on Operation Dominic II':

'Operation DOMINIC II was an atmospheric nuclear test series conducted by the Atomic Energy Commission (AEC) at the Nevada Test Site (NTS) from July 7-17, 1962. The operation consisted of four low-yield shots, three of which were near-surface detonations and one a tower shot. One of the near-surface shots was fired from a DAVY CROCKETT rocket launcher as part of Exercise IVY FLATS, the only military training exercise conducted at DOMINIC II. An estimated 3,900 Department of Defense (DoD) personnel participated in Exercise IVY FLATS, scientific and diagnostic tests, and support activities. The series was intended to provide information on weapons effects and to test the effectiveness of the DAVY CROCKETT weapon system under simulated tactical conditions. Also known by the DoD code name of Operation SUNBEAM, DOMINIC II was the continental phase of DOMINIC I, the atmospheric nuclear test series conducted at the Pacific Proving Ground from April to November 1962. ...

'The scientific tests at DOMINIC II were supervised by the Defense Atomic Support Agency (DASA) Weapons Effects Test Group. These tests were designed to collect information on weapons effects, such as the electromagnetic pulse, prompt and residual radiation, and thermal radiation. The experiments also tested the effects of low-yield detonations on structures and on aircraft in flight. ...

'The event involving the largest number of DoD participants was Shot LITTLE FELLER I, the fourth DOMINIC II test. LITTLE FELLER I was a stockpile DAVY CROCKETT tactical weapon, fired as part of Exercise IVY FLATS. This training exercise consisted of an observer program and a troop maneuver. Observers in bleachers about 3.5 kilometers southwest of ground zero wore protective goggles while they watched the detonation. Maneuver troops forward of the observation site were in trenches during the detonation. Five personnel from the IVY FLATS maneuver task force launched the weapon from a rocket launcher mounted on an armored personnel carrier. LITTLE FELLER I detonated on target, 2,853 meters from the firing position. ...

'The DOMINIC II event involving the largest number of DoD projects was Shot SMALL BOY. Originally scheduled for 31 DoD projects, the shot ultimately included 63 DoD projects, as well as four Civil Effects and 31 AEC projects. Shot SMALL BOY had initially been planned as the one detonation of Operation DOMINIC II. The primary purpose of the detonation was to provide information on electromagnetic pulse effects. Headquarters, DASA, consequently assigned Harry Diamond Laboratories, which had collected electromagnetic pulse data at Operation PLUMBBOB (1957), to provide overall technical direction for DoD programs. Program 6, Electromagnetic Effects, was given priority over the other programs, which were conducted according to strict guidelines designed to assure noninterference with Program 6 objectives. [Emphasis added: note that SMALL BOY was primarily an EMP effects test, which indicates the priority being given to EMP in 1962!]'

OLDER MATERIAL (NEEDS EDITING):

One of the immediately perplexing things about the radiated EMP or radioflash signal from a nuclear explosion in the American treatment (e.g. DNA-EM-1 chapter 7) is the talk of a 'source region' or 'deposition region' boundary, symbolized by R0, which doesn't actually exist in the physical world! The radiation fields drop off gradually, so there is no natural limiting distance. This problem is resolved by an arbitrary definition of the radius, as explained by Glasstone and Dolan, The Effects of Nuclear Weapons, 3rd ed., 1977, page 535: 'the deposition region does not have a precise boundary, but R0 is taken as the distance that encloses a volume in which the [peak air] conductivity is 10^-7 mho [1 mho = 1 S in SI units] per metre or greater.'

The Capabilities of Nuclear Weapons Chapter 7, 1978 Change 1, page 7-7 et seq., says that the radiated EMP electric field strength at this radius varies from 1,300 v/m for a 1 kt surface burst to 1,670 v/m for a 10 Mt surface burst, but "For most cases, a value of 1,650 volts per metre may be assumed. At ranges along the surface beyond R0, the peak radiated electric field varies inversely with the distance from the burst." Dolan then gives examples of surface burst radiated EMP: for 100 kt at a ground distance equal to the deposition region radius for that yield, R0 = 5.8 km, you get about 1,650 volts/metre of radiated EMP, and for 1 Mt at its deposition region radius of R0 = 7.2 km, you also get about 1,650 volts/metre. Scaling inversely with distance, Dolan shows that at 10 km ground range the radiated EMP peak electric field is about 950 v/m for a 100 kt surface burst and about 1,200 v/m for 1 Mt. These are small electric fields from the perspective of EMP damage concerns, although they can induce large current pulses in long conductors.

One important point to notice is that the radiated EMP from a surface burst is vertically polarized (a horizontally propagating transverse wave), so it poses a threat to long vertical conductors like radio masts, not to long horizontal conductors like telephone or power cables. The source region radial electric field is horizontally polarized (radially directed, not transverse), so it is the threat to horizontal cables and power lines, etc. In the event of a high altitude nuclear explosion, the EMP radiated downwards is polarized predominantly in the horizontal direction, so it is picked up by horizontal cables and power lines. The radiated EMP from a surface burst, apart from being relatively weak, has the wrong polarization to cause significant pick-up in horizontally laid conductors, so it is not a primary damage threat.
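A minimal sketch of the inverse-distance scaling rule quoted above from Dolan (using only the figures quoted in the text; the function name and structure are my own illustration):

```python
E_AT_R0 = 1650.0   # volts/metre assumed at the deposition region boundary R0 (Dolan's nominal value)

def radiated_peak_field(ground_range_km, r0_km):
    """Peak radiated surface-burst EMP field beyond R0, falling off inversely with distance."""
    if ground_range_km <= r0_km:
        raise ValueError("this simple scaling applies only beyond the deposition region radius R0")
    return E_AT_R0 * r0_km / ground_range_km

print(radiated_peak_field(10.0, 5.8))   # 100 kt surface burst (R0 = 5.8 km): ~960 v/m at 10 km
print(radiated_peak_field(10.0, 7.2))   # 1 Mt surface burst (R0 = 7.2 km): ~1,190 v/m at 10 km
```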

As for the waveform of the radiated EMP from a surface burst, it is easy to get this by integrating the upward (vertical vector) component of all the Compton (electron) currents in the air above the detonation, and using this net vertical current to calculate the radiated EMP waveform, just as you can calculate the radiated radio waveform from a known electron current applied to a vertical antenna or aerial by a radio transmitter. Radio waves are radiated whenever a net electric current varies with time, i.e. whenever there is a net acceleration of electric charge, as given by Larmor's simple formula for radiated power: P = q^2a^2/{6*Pi*Permittivity of free space*c^3} watts, where q is the charge and a is its acceleration. However, Larmor's formula needs relativistic corrections (see equation 8 in Mario Rabinowitz's paper). For circular (constant pitch and radius) deflection the electron's velocity vector is perpendicular to its acceleration vector, giving v·a = 0, so the relativistic correction to Larmor's formula amounts to multiplying it by {gamma}^4, where {gamma} = [1 - (v/c)^2]^-1/2 is the relativistic Lorentz-FitzGerald factor:

P = q^2a^2[1 - (v/c)^2]^-2/{6*Pi*Permittivity of free space*c^3} watts.

This is confirmed by Professor Bridgman's Introduction to the Physics of Nuclear Weapons Effects (1st edition, Defense Threat Reduction Agency (DTRA/DTRIAC), July 2001), equation 11-3 on page 376, where the acceleration term a^2 in the non-relativistic Larmor equation is replaced by {gamma}^2(dp/dt)^2/m^2, where m is the electron's rest mass, with the note that the relativistic "gamma" factor must also be included in the expression for momentum p (due to the relativistic mass increase which enhances the momentum at relativistic velocities). This relativistic correction, {gamma}^2(dp/dt)^2/m^2 = {gamma}^4(dv/dt)^2 = {gamma}^4a^2, is identical to Rabinowitz's relativistic correction to Larmor's formula. (Bridgman cites the source of his formula 11-3 as John David Jackson's Classical Electrodynamics, Wiley, New York, 2nd ed., 1975, pages 660-5.)
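A short numerical sketch of that relativistically corrected Larmor formula (my own illustration; the example speed and acceleration are arbitrary, chosen only to be roughly representative of a ~1 MeV Compton electron):

```python
import math

Q_E = 1.602e-19      # electron charge, coulombs
EPS0 = 8.854e-12     # permittivity of free space, F/m
C_LIGHT = 2.998e8    # speed of light, m/s

def relativistic_larmor_power(acceleration, speed):
    """Radiated power P = q^2 gamma^4 a^2 / (6 pi eps0 c^3), valid when the
    acceleration is perpendicular to the velocity (circular deflection)."""
    gamma = 1.0 / math.sqrt(1.0 - (speed / C_LIGHT) ** 2)
    return (Q_E ** 2) * (gamma ** 4) * (acceleration ** 2) / (6.0 * math.pi * EPS0 * C_LIGHT ** 3)

# Example: speed ~0.94c (roughly that of a 1 MeV kinetic energy electron),
# with an assumed centripetal acceleration of 1e18 m/s^2.
print(relativistic_larmor_power(1e18, 0.94 * C_LIGHT))
```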

Because a bigger bomb releases more gamma radiation, and hence a bigger vertical Compton current, the time taken for that current to be cancelled out by the conduction current (the return of mobile negative electrons to the inert, heavy positive air ions, driven by the radial electric field from the charge separation, which attracts them back to the ions) gets longer for bigger bombs. At 10 km from a 1 Mt surface burst, it appears that the peak electric field should be -1,200 v/m at 0.5 microsecond (the negative sign arises because this field is due to the Compton current, a net upward flow of electrons, i.e. equivalent to a downward flow of conventional electric current, which is defined after Franklin as flowing from positive to negative, not the other way around as really occurs). This rapidly drops to zero at 1.3 microseconds and is then followed by a reversed electric field peaking at +110 v/m at about 2 microseconds (the positive sign being due to this field being caused by the net downward flow of electrons returning to the ions).

In the case of a 1 Mt air burst in sea level air, the radiated electric field at 10 km range is much weaker because the EMP is due to the air density gradient, but it has the same general nature as for a surface burst, peaking first in the negative direction with -19 v/m at 0.75 microsecond due to the Compton current, followed by a positive peak of +23 v/m at 3 microseconds due to the conduction current (returning electrons).

There is some nuclear test data available in declassified preliminary shot reports on the U.S. Department of Energy Marshall Islands Historical Documents database, giving some values for the radiated EMP electric field strengths measured for various Ivy (1952), Castle (1954) and Redwing (1956) Pacific nuclear tests, which can be compared to the theoretical predictions in DNA-EM-1:

King, 500 kt pure fission low air burst (451 m altitude): peak EMP at Maui (4200 km distance) = 1.0 v/m

Romeo, 11 Mt 64% fission surface burst: peak EMP at 320 km distance = 21 v/m

Koon, 110 kt 91% fission surface burst: peak EMP at 320 km distance = 15 v/m

Union, 6.9 Mt 72% fission surface burst: peak EMP at 320 km distance = 40 v/m

Yankee, 13.5 Mt 52% fission surface burst: peak EMP at 320 km distance = 34 v/m

Nectar, 1.69 Mt 80% fission surface burst: peak EMP at 23 km distance = 775 v/m

Zuni, 3.53 Mt 15% fission surface burst: peak EMP at 334 km distance = 14.4 v/m

Flathead, 365 kt 73% fission surface burst: peak EMP at 343 km distance = 17.0 v/m; also the measured peak EMP at 525 km distance was 6.8 v/m

Osage, 1.7 kt 100% fission low air burst (204 m altitude): peak EMP at 13.5 km distance = 26 v/m

Seminole, 13.7 kt 80% fission burst inside a large water tank: peak EMP at 33.4 km distance = 0 v/m (no detectable EMP, due to the water blanket above the bomb in the water tank absorbing the nuclear radiation and preventing any effective EMP being generated)

The references for these data cited in the preliminary shot reports on that database are:

M.H. Olseon, Operation Castle, Project 7.1, Electromagnetic Radiation Calibration, weapon test report WT-930, June 1958, U.S. Armed Forces Special Weapons project, Secret-Restricted Data, and

Charles J. Ong, Analysis of Electromagnetic Pulse Produced by Nuclear Explosions, Operation Redwing, Project 6.5, 1956, Secret-Restricted Data.

The Dolan DNA-EM-1 manual chapter 7 is fairly accurate for Koon, Union, Yankee, and Nectar, but generally over-estimates the peak electric field from these nuclear tests by about a factor of two, partly because the assessment of prompt gamma radiation output it uses is based on more efficient nuclear weapons with thinner casings than the 1950s test devices, and partly because at a distance of 320 km from a nuclear explosion there is some attenuation of the EMP radiation due to the ocean conductivity and the atmosphere, in addition to the purely geometrical inverse-of-distance fall off. The formula given by Dolan is probably only accurate within 50 km of a surface burst, and exaggerates the EMP at greater distances.

The main purpose of discussing the radiated EMP from a surface burst is, as already emphasised, not EMP damage but the detection and identification of nuclear explosions by computerised radio receivers such as the British 'AWDREY' (Atomic Weapons Detection, Recognition, and Estimation of Yield) installations, developed since 1968 by the Atomic Weapons Establishment at Aldermaston, which can automatically use the EMP to immediately detect and identify the characteristics of a nuclear explosion. To cover Britain, there were 13 AWDREY installations, each with a 75 mile operational range. The direction of the EMP measured by any two AWDREYs allowed the coordinates of the burst to be determined, while the time from the EMP radioflash to the final maximum in the visible light flash of the explosion was used to accurately determine the total energy yield of the detonation. To discriminate against lightning and other false alarms, a detonation was only recorded by the instrument if there was both an EMP and a visible light flash with the nuclear explosion signature waveforms.


Similar systems are now installed in military satellites to detect and identify nuclear explosions, giving immediate warnings which can be used to predict the subsequent fallout hazard. Because the EMP differs substantially between a surface burst, an air burst, and a high altitude burst, the waveform of the EMP delivers some information on the type of burst as well as on the fission and total yields. It also indicates the design of the bomb, since multi-staged thermonuclear weapons produce an EMP from each stage, although the secondary stage EMP is reduced to a known degree by the ionized air conductivity created by the primary stage. EMP is therefore a very useful signature of a nuclear explosion, delivering diagnostic information that can immediately be processed by suitably designed computer systems and used for civil defence to predict fallout hazards downwind.

“In every listening radio set in the central Pacific came a loud click, caused by the electro-magnetic radiation... on Christmas Island John Challens heard the click and exclaimed jubilantly – ‘It worked!’” - Air Vice Marshal W. E. Oulton, nuclear test Task Force Commander, Christmas Island Cracker, Thomas Harmsworth, London, 1987, p. 326. (This was the 15 May 1957 EMP from British nuclear test Short Granite, 300 kt air burst at 2.2 km altitude.)

“At Christmas Island... the [radio] listeners heard the click of the radio-flash superimposed on the relay of Robert's commentary and sent word to London of another successful test.” - Air Vice Marshal W. E. Oulton, nuclear test Task Force Commander, Christmas Island Cracker, Thomas Harmsworth, London, 1987, p. 346. (This was the 31 May 1957 EMP from British nuclear test Orange Herald, a 700 kt air burst at 2.4 km altitude.)

On the effects of such EMP on radios, it's worth again referring to a study of radiated EMP effects on portable transistor radio receivers, made by A. D. Perryman of the U.K. Home Office Scientific Advisory Branch and published on page 25 of the originally 'Restricted' Home Office publication Fission Fragments, issue 21 (April 1977, edited by M. J. Thompson of the Home Office Emergency Services Division, Horseferry House, Dean Ryle Street, London):



'... SAB carried out a limited programme of tests in which four popular brands of transistor radio were exposed in an EMP simulator to threat-level pulses of electric field gradient about 50 kV/m. ... All these sets worked on dry cells [internal batteries] and had internal ferrite aerials for medium and long wave reception. In addition, sets 2, 3 and 4 had extendable whip aerials for VHF/FM reception. ...

'During the tests the receivers were first tuned to a well-known long-wave station and then subjected to a sequence of pulses in the EMP simulator. This test was repeated on the medium wave and VHF bands. Set 1 had no VHF facility and was therefore operated only on long and medium waves.

'The results of this experimentation showed that transistor radios of the type tested, when operated on long or medium waves, suffer little loss of performance. This could be attributed to the properties of the ferrite aerial and its associated circuitry (e.g. the relatively low coupling efficiency). Set 1 [the set with only a short internal ferrite rod aerial, and no long external extensible aerial], in fact, survived all the several pulses applied to it, whereas sets 2, 3 and 4 all failed soon after their whip aerials were extended for VHF reception. The cause of failure was identified as burnout of the transistors in the VHF RF [radio frequency] amplifier stage. Examination of these transistors under an electron microscope revealed deformation of their internal structure due to the passage of excessive current transients (estimated at up to 100 amps).

'Components other than transistors (e.g. capacitors, inductors, etc.) appeared to be unaffected...

'From this very limited test programme, transistor radios would appear to have a high probability of survival in a nuclear crisis when operated on long and medium bands using the internal ferrite aerial. If VHF ranges have to be used, then probably the safest mode of operation is with the whip aerial extended to the minimum length necessary to give just audible reception with the volume control fully up.'

This experiment indicates that battery operated transistor radio receivers working on internal ferrite rod antennas will be undamaged by EMP even in the worst case scenario (a high yield burst at high altitude giving on the order of 50 kV/m peak radiated electric field). It was known from 1950s nuclear tests, of course, that vacuum tube (thermionic valve) electronics in battery powered radios with short aerials were immune from EMP damage, since valves operate at higher powers than most transistors and so can better withstand EMP. There is a clip of the live TV transmission of an early Nevada test on the DVD Trinity and Beyond which shows the very brief EMP interference on the signal transmitted from an electronic TV camera and transmitting station in a trench close to a nuclear detonation (there is a click on the audio and a brief loss of the video signal, as you get when an analogue TV is not tuned to a transmitter). No permanent damage was produced in this old vacuum tube electronic equipment, since it was relatively invulnerable and not connected to long conductors coming from the radiation deposition region of the explosion. Such self-contained, battery-operated electronic equipment cannot be damaged by surface burst EMP beyond the blast damaged area of a nuclear explosion.

The long-range EMP damage threat from surface bursts, in other words, is essentially due not to battery operated items with short conductors, but to pick-up in long conductors, which channel the current great distances until it finally enters mains-operated electronics and electrical systems, or equipment coupled to twisted-pair telephone lines (not optical fibre, which is an insulator). The main problem from EMP will be the loss of mains electrical power. Laptop computers working on battery power, with only short internal wireless aerials for 802.11g microwave frequency networks (or for Bluetooth networking), are unlikely to be damaged by the 1,000 v/m or lower order of magnitude of radiated EMP from any surface burst outside the blast damaged area. Even in the case of equipment connected to mains power, people will generally protect mains operated computers with power surge cutouts, which can stop the natural EMP surges from nearby lightning bolts. The rise time of a nuclear EMP transient in a long power cable is on the order of a microsecond or less, which is shorter than occurs with natural lightning, but modern power surge protectors are capable of providing protection against explosion EMP.

UPDATE ON HIGH ALTITUDE BURST EMP FIELD STRENGTH PREDICTIONS

An earlier post on this blog, 'EMP radiation from nuclear space bursts in 1962' (which has now been corrected and updated with the new information), documents the vital scientific data concerning high altitude nuclear test EMP from American and Russian nuclear tests in 1962 (and some previous tests in 1958 that were not properly measured due to a theory by Bethe that led to instruments being set up to detect a radiated EMP with the wrong polarization, duration and strength). That post still contains valuable data and the motivation for civil defence, although a great deal has changed and much new vital technical information on high altitude EMP predictions has come to light since that post was written.

Dr Conrad Longmire, as stated in that post, discovered the vital 'magnetic dipole' EMP mechanism for high altitude explosions (quite different to Bethe's 'electric dipole' predictions from 1958) after he saw Richard Wakefield's curve of EMP from the 9 July 1962 Starfish test of 1.4 Mt (1.4 kt of which was prompt gamma rays) at 400 km altitude.

'Longmire, a weapons designer who worked in [Los Alamos] T Division from 1949 to 1969 and currently is a Lab associate, played a key role in developing an understanding of some of the fundamental processes in weapons performance. His work included the original detailed theoretical analysis of boosting and ignition of the first thermonuclear device. Longmire ... wrote Elementary Plasma Physics (one of the early textbooks on this topic). He also became the first person to work out a detailed theory of the generation and propagation of the [high altitude magnetic dipole mechanism] electromagnetic pulse from nuclear weapons.'

Starfish was not, however, the first test to give a suitable measured curve of the magnetic dipole EMP: one was obtained from the 2 kt Yucca test in 1958 and described in detail in 1959 on page 347 of report ITR-1660-(SAN), but no EMP damage occurred from that test, so nobody worried about the size and shape of that EMP, which was treated as an anomaly: 'Shot Yucca ... [EMP] field strength at Kusaie indicated that deflection at Wotho would have been some five times the scope limits... The wave form was radically different from that expected. The initial pulse was positive, instead of the usual negative. The signal consisted mostly of high frequencies of the order of 4 Mc, instead of the primary lower-frequency component [electric dipole EMP] normally received ...' Longmire's secret lectures on the magnetic dipole EMP mechanism were included in his April 1964 Los Alamos report, LAMS-3073. The first open publication of Longmire's theory was the 1965 paper 'Detection of the Electromagnetic Radiation from Nuclear Explosions in Space' in the Physical Review (vol. 137B, p. 1369) by W. J. Karzas and Richard Latter of the RAND Corporation, which is available in RAND report format online as report AD0607788. (The same authors had previously, in October 1961, written a report on Bethe's misleading 'electric dipole' EMP mechanism - incorrectly predicting an EMP peak electric field of only 1 volt/metre at 400 km from a burst like Starfish, instead of the 50,000 volts/metre which occurs in the 'magnetic dipole' mechanism - called 'Electromagnetic Radiation from a Nuclear Explosion in Space', AD0412984.) It was only after the publication of this 1965 paper that the first real concerns about the civil defence implications of high altitude bursts arose.

The next paper which is widely cited in the open literature is Longmire's, 'On the electromagnetic pulse produced by nuclear explosions' published in the January 1978 issue of IEEE Transactions on Antennas and Propagation, volume 26, issue 1, pp. 3-13. That paper does not give the EMP field strength on the ground as a function of the high altitude burst yield and altitude, but it does give a useful discussion of the theoretical physics involved and also has a brief history of EMP. In the earlier post on this blog, I extracted the vital quantitative information from a March 1975 masters degree thesis by Louis W. Seiler, Jr., A Calculational Model for High Altitude EMP, AD-A009208, pages 33 and 36, which had gone unnoticed by everyone with an interest in the subject. I also obtained Richard Wakefield's EMP measurement from the Starfish test which is published in K. S. H. Lee's 1986 book, EMP Interaction, and added a scale to the plot using a declassified graph in Dolan's DNA-EM-1, Chapter 7. However, more recent information has now come to light.

The reason for checking these facts scientifically for civil defence is that otherwise the entire EMP problem will be dismissed by critics as a time-wasting Pentagon invention, either because of the alleged lack of evidence for EMP effects, or because excessive secrecy is used as an excuse not to bother presenting the facts to the public in a scientific manner, with evidence for assertions ('extraordinary claims require extraordinary evidence' - Carl Sagan).

The latest information on EMP comes from a brand new (October 24, 2008) SUMMA Foundation database of EMP reports compiled by Dr Carl E. Baum of the Air Force Weapons Laboratory and hosted on the internet site of the Electrical and Computer Engineering Department of the University of New Mexico:

'Announcements. Update: Oct. 24, 2008 - We are pleased to announce that many of the unclassified Note Series are now on-line and is being hosted by the Electrical and Computer Engineering Department at the University of New Mexico. More notes will be added in the coming months. We appreciate your patience.'

The first of these reports that needs to be discussed here is Note 353 of March 1985 by Conrad L. Longmire, 'EMP on Honolulu from the Starfish Event'. Longmire notes that: 'the transverse component of the geomagnetic field, to which the EMP amplitude is approximately proportional, was only 0.23 Gauss. Over the northern U.S., for some rays, the transverse geomagnetic field is 2.5 times larger.' For Starfish he uses 400 km burst altitude, 1.4 Mt total yield and 1.4 kt (i.e. 0.1%) prompt gamma ray yield with a mean gamma ray energy of 2 MeV. Honolulu, Hawaii (which was 1,450 km from the Starfish bomb detonation point 400 km above Johnston Island) had a magnetic azimuth of 54.3 degrees East and a geomagnetic field strength in the source region of 0.35 gauss (the transverse component of this was 0.23 Gauss).

Longmire calculates that the peak radiated (transverse) EMP at Honolulu from Starfish was only 5,600 volts/metre at about 0.1 microsecond, with the EMP delivering 0.1 J/m^2 of energy: 'The efficiency of conversion of gamma energy to EMP in this [Honolulu] direction is about 4.5 percent.' Longmire's vital Starfish EMP graph for Honolulu is shown below:
Longmire points out that much higher EMP fields occurred closer to the burst point, concluding on page 12: 'We see that the amplitude of the EMP incident on Honolulu [which blew the sturdy electric fuses in 1-3% of the streetlamps on the island] from the Starfish event was considerably smaller than could be produced over the northern U.S. ... Therefore one cannot conclude from what electrical and electronic damage did not occur in Honolulu that high-altitude EMP is not a serious threat.

'In addition, modern electronics is much more sensitive than that in common use in 1962. Strings of series-connected street lights did go out in Honolulu ... sensitive semiconductor components can easily be burned out by the EMP itself, 10^-7 Joules being reportedly sufficient.'
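Taking Longmire's remark at face value that the EMP amplitude is approximately proportional to the transverse geomagnetic field in the source region, a crude first-order scaling (my own illustration, ignoring saturation and geometry differences) suggests how much larger a Starfish-type EMP could have been over the northern U.S.:

```python
e_peak_honolulu = 5600.0        # V/m, Longmire's calculated Starfish peak at Honolulu
transverse_field_ratio = 2.5    # Longmire: transverse geomagnetic field up to 2.5x larger for some rays over the northern U.S.
print(e_peak_honolulu * transverse_field_ratio)   # ~14,000 V/m, other factors being equal
```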

The next vitally important report deserving discussion here in Dr Baum's collection is K. D. Leuthauser's A Complete EMP Environment Generated by High-Altitude Nuclear Bursts, Note 363, October 1992, which gives the following vital data (notice that 10 kt prompt gamma ray yield generally corresponds to a typical thermonuclear weapon yield of about 10 megatons):





Quotations from some of the Theoretical Notes on EMP in Dr Carl E. Baum's database:

Theoretical Note 368:

Conrad L. Longmire, Justification and verification of High-Altitude EMP Theory, Part 1, Mission Research Corporation, June 1986, pages 1-3:


'Over the 22 years since the first publication of the theory of High-Altitude Electromagnetic Pulse (HEMP), there have been several doubters of the correctness of that theory. ... commonly, it has been claimed that the HEMP is a much smaller pulse than our theory indicates and it has been implied, though not directly stated in writing, that the HEMP has been exaggerated by those who work on it in order to perpetuate their own employment. It could be noted that, in some quarters, the disparagement of HEMP has itself become an occupation. ...

'... One possible difficulty with previous papers is that they are based on solving Maxwell's equations. While this is the most legitimate approach for the mathematically inclined reader, many of the individuals we think it important to reach may not feel comfortable with that approach. We admit to being surprised at the number of people who have wanted to understand HEMP in terms of the fields radiated by individual Compton recoil electrons. Apparently our schools do a better job in teaching the applications of Maxwell's equations (in this case, the cyclotron radiation) than they do in imparting a basic understanding of those equations and how they work. ...

'The confidence we have in our calculations of the HEMP rests on two circumstances. The first of these is the basic simplicity of the theory. The physical processes involved, e.g., Compton scattering, are quite well known, and the physical parameters needed in the calculations, such as electron mobility, have been measured in relevant laboratory experiments. There is no mathematical difficulty in determining the solution of the outgoing wave equation, or in understanding why it is an accurate approximation. ...

'... the model of cyclotron radiation from individual Compton recoil electrons is very difficult to apply with accuracy to our problem because of the multitudinous secondary electrons, which absorb the radiation emitted by the Compton electrons [preventing simple coherent addition of the individual fields from accelerated electrons once the outgoing EMP wave front becomes strong, and therefore causing the radiated field to reach a saturation value in strong fields which is less than the simple summation of the individual electron contributions]. ...

'The other circumstance is that there is experimental data on the HEMP obtained by the Los Alamos Scientific Laboratory in the nuclear test series carried out in 1962. In a classified companion report (Mission Research Corp. report MRC-R-1037, November 1986) we present calculations of the HEMP from the Kingfish and Bluegill events and compare them with the experimental data. These calculations were performed some years ago, but they have not been widely circulated. In order to make the calculations transparently honest, the gamma-ray output was provided by Los Alamos, the HEMP calculations were performed by MRC and the comparison with the experimental data was made by RDA. The degree of agreement between calculation and experiment gives important verification of the correctness of HEMP theory.'

As stated in this blog post, Theoretical Note TN353 of March 1985 by Conrad L. Longmire, EMP on Honolulu from the Starfish Event, calculates that the peak radiated (transverse) EMP at Honolulu from Starfish delivered only 0.1 J/m^2 of energy: 'The efficiency of conversion of gamma energy to EMP in this [Honolulu] direction is about 4.5 percent.'

He and his collaborators elaborate on the causes of this inefficiency problem on page 24 of the January 1987 Theoretical Note TN354:

'Contributing to inefficiency ... only about half of the gamma energy is transferred to the Compton recoil electron, on the average [e.g., the mean 2 MeV prompt gamma rays create 1 MeV Compton electrons which in getting slowed down by hitting molecules each ionize 30,000 molecules releasing 30,000 'secondary' electrons, which uses up energy from the Compton electron that would otherwise be radiated as EMP energy; also, these 30,000 secondary electrons have random directions so they don't contribute to the Compton current, but they do contribute greatly to the rise in air conductivity, which helps to short-out the Compton current by allowing a return 'conduction current' of electrons to flow back to ions].'

Longmire also points out that Glasstone and Dolan's Effects of Nuclear Weapons, pages 495 and 534, gives the fraction of bomb energy radiated in prompt gamma rays as 0.3%. If this figure is correct, then a 10 kt prompt gamma ray yield corresponds to a 3.3 megaton nuclear explosion (the arithmetic is sketched after the quotation below). However, the Glasstone and Dolan figure of 0.3% is apparently just the average of the 0.1% to 0.5% range specified by Dolan in Capabilities of Nuclear Weapons, Chapter 7, Electromagnetic Pulse (EMP) Phenomena, page 7-1 (Change 1, 1978 update):

'Briefly, the prompt gammas arise from the fission or fusion reactions taking place in the bomb and from the inelastic collisions of neutrons with the weapon materials. The fraction of the total weapon energy that may be contained in the prompt gammas will vary nominally from about 0.1% for high yield weapons to about 0.5% for low yield weapons, depending on weapon design and size. Special designs might increase the gamma fraction, whereas massive, inefficient designs would decrease it.'
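The arithmetic behind the yields mentioned above is simply the prompt gamma yield divided by the assumed gamma fraction (a trivial illustration using the figures quoted from Glasstone and Dolan and from Dolan's EM-1):

```python
def total_yield_megatons(prompt_gamma_yield_kt, gamma_fraction):
    """Total yield implied by a given prompt gamma ray yield and gamma energy fraction."""
    return prompt_gamma_yield_kt / gamma_fraction / 1000.0   # convert kt to Mt

print(total_yield_megatons(10.0, 0.003))   # ~3.3 Mt using the 0.3% average figure
print(total_yield_megatons(10.0, 0.001))   # ~10 Mt using Dolan's 0.1% figure for high yield weapons
```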

UPDATES ON FALLOUT

Useful U.S. Naval Radiological Defense Laboratory nuclear test fallout information now available from the Journal of the Atmospheric Sciences as free PDF files:

CLOSE-IN FALLOUT
W. W. Kellogg, R. R. Rapp, and S. M. Greenfield
Journal of the Atmospheric Sciences Volume 14, Issue 1 (February 1957) pp. 1–8
[ PDF (655K) ]

ATMOSPHERIC REACTIONS OF SLURRY DROPLET FALLOUT
N. H. Farlow
Journal of the Atmospheric Sciences Volume 17, Issue 4 (August 1960) pp. 390–399
[ PDF (833K) ] This is a very important analysis for the situation of water surface bursts (see chapter 5 of Capabilities of Nuclear Weapons, linked above, for a detailed discussion of the formation of, and dose rates due to, fallout in ocean water surface bursts) and shows clearly how the salt slurry fallout from ocean water surface bursts occurs: the water taken up into the cloud is frozen solid at high altitudes and partially evaporates as it falls through warmer layers of air near the ground while being deposited. Although sea water is 3.5% salts by mass, the deposited fallout can contain much higher concentrations and even a slurry of salt crystals (if the salt concentration exceeds the saturation concentration of salt in water) due to evaporation of the water. This fallout contains relatively soluble ionic fission products which can soak into surfaces and become chemically attached to molecules in contaminated materials, making subsequent decontamination efforts less effective than is the case with the insoluble glass spheres of fallout created by a land surface burst on silicate based soil. Such fallout needs to be removed from surfaces before it soaks in and dries, for example by a continuous water spray (American ships at nuclear tests used their fire-hosing sprinkler systems on deck during fallout to prevent deposition of slurry fallout, which was washed down the drains and off the decks as it landed).

A THEORY FOR CLOSE-IN FALLOUT FROM LAND-SURFACE NUCLEAR BURSTS
Albert D. Anderson
Journal of the Atmospheric Sciences Volume 18, Issue 4 (August 1961) pp. 431–442
[ PDF (1.03M) ]

Reply
Albert D. Anderson
Journal of Applied Meteorology Volume 1, Issue 3 (September 1962) pp. 434–436
[ PDF (222K) ]

Larson, K. H. ; Neel, J. W. ; Hawthrone, H. A. ; Mork, H. M. ; Rowland, R . H., Distribution, Characteristics, and Biotic Availability of Fallout, Operation Plumbbob, CALIFORNIA UNIV LOS ANGELES LAB OF NUCLEAR MEDICINE AND RADIATION BIOLOGY, OCT 1957, 613 pp., ADA077509, 26.5 MB PDF file: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA077509&Location=U2&doc=GetTRDoc.pdf

RAND Corporation 1950s fallout research, analyzing the rocket determination of radioactivity within the mushroom cloud from the 1956 Redwing-Zuni test (3.53 Mt surface burst, 15 % fission yield, Bikini Atoll): http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD337920&Location=U2&doc=GetTRDoc.pdf

The original AFSWP (U.S. Armed Forces Special Weapons Project) fallout model used in a simplified format in early editions of The Effects of Nuclear Weapons is D. C. Borg, et al., Radioactive Fall-Out Hazards From Surface Bursts of Very High Yield Nuclear Weapons, AFSWP-507, May 1954. This report uses the Mike fallout pattern for upwind and ground zero area fallout (of importance because this fallout covers the blast damaged area), but uses Bravo fallout data for the downwind fallout area. The yield scaling system is to scale both dose rates and distances by the cube-root of the total weapon power, and to scale dose rates directly in proportion to the fission yield fraction. For kiloton yields, The Effects of Nuclear Weapons used the Nevada Jangle-Sugar 1.2 kt burst fallout pattern as the basis for scaling fallout instead of the Mike and Bravo fallout patterns which were only used for megaton yields. Discussing this data and prediction system is controversial. On the one hand, the 1950s fallout pattern data is empirical scientific data that has not been superseded, so it is still valid. On the other hand, some would argue that computerized predictions of fallout provide a more "modern" and "sophisticated" basis for fallout predictions.
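A minimal sketch of that AFSWP-507 scaling rule as described above (the reference values in the example are placeholders, not data from the report):

```python
def scale_fallout_contour(ref_distance_km, ref_dose_rate, ref_yield_kt, ref_fission_fraction,
                          new_yield_kt, new_fission_fraction):
    """Scale a reference fallout contour: distances and dose rates scale with the
    cube root of total yield, and dose rates also scale with the fission fraction."""
    k = (new_yield_kt / ref_yield_kt) ** (1.0 / 3.0)
    distance = ref_distance_km * k
    dose_rate = ref_dose_rate * k * (new_fission_fraction / ref_fission_fraction)
    return distance, dose_rate

# Example: scale a hypothetical 10 km, 100 R/hr contour from a 1.2 kt, 100% fission
# reference pattern up to a 1,200 kt burst with 50% fission yield.
print(scale_fallout_contour(10.0, 100.0, 1.2, 1.0, 1200.0, 0.5))
```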

The Americans have also published online a declassified report with the fallout patterns from some British and French nuclear tests, ADA956123, which unfortunately does not contain the best data. There are far more useful declassified fallout patterns for the British Hurricane (1952), Totem (1953), Buffalo (1956) and Antler (1957) test series shots available in file series DEFE 16, and I think ADM 285/167 and ADM 285/169, at the U.K. National Archives in Kew. The most important fallout pattern is that of the Hurricane nuclear test, since it was a very shallow underwater burst inside a ship. The American version is unclear:





Above: the fallout pattern version given to me for the British Hurricane 25 kt very shallow underwater test (exploded 2.7 m below the waterline inside the hull of HMS Plym, a 1,370-ton River class frigate anchored in 12 m of water 350 m offshore, creating a saucer-shaped crater on the seabed 6 m deep and 300 m across), kindly supplied by Aldermaston in 1995 after it was declassified, compared to the American version.

U.S. Naval Radiological Defense Laboratory summary of fallout properties: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD623485&Location=U2&doc=GetTRDoc.pdf (This is used in Glasstone and Dolan 1977, but unfortunately the report excludes classified data on which tests the fallout came from, so it is non-quantitative and vague. It is also very sketchy in what it does present, which is just a tiny sample from extensive classified data, and by no means a good summary compared to reports by Dr Carl F. Miller and others; see for example http://glasstone.blogspot.com/2007/03/dr-carl-f-millers-fallout-and.html and other posts on this blog.)

Additionally, Dr Carl F. Miller's major report theoretically calculating the fractionation of fission products by fireball heat and the effect of this upon the fission product composition of fallout particles and therefore the decay rate of the fallout radiation downwind, AD0241240, 'A THEORY OF FORMATION OF FALLOUT FROM LAND-SURFACE NUCLEAR DETONATIONS AND DECAY OF THE FISSION PRODUCTS' (U.S. NAVAL RADIOLOGICAL DEFENSE LAB., SAN FRANCISCO), 27 May 1960, is now available from DTIC as a free PDF download at: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD241240&Location=U2&doc=GetTRDoc.pdf

“The foliage making up the crowns [upper branches and leaves] of the trees, while it has a high probability of being exposed to the full free-field radiation environment from air bursts... may, however, materially reduce the exposure of the forest floor by generating quantities of smoke and steam, as well as by direct shading.” - Philip J. Dolan, Capabilities of Nuclear Weapons, U.S. Defense Nuclear Agency, 1978 revision, Secret – Restricted Data, Chapter 15, paragraph 15-9.

“Fuels seldom burn vigorously, regardless of the wind conditions, when fuel moisture content exceeds about 16 percent. This corresponds to an equilibrium moisture content for a condition of 80 percent relative humidity. Rainfall of only a fraction of an inch will render most fuels temporarily nonflammable and may extinguish fires in thin fuels... Surface fuels in the interior of timber stands are exposed to reduced wind velocities; generally, these fuels retain their moisture as a result of shielding from the wind and shading from sunlight by the canopy.” - Philip J. Dolan, Capabilities of Nuclear Weapons, U.S. Defense Nuclear Agency, 1978 revision, Secret – Restricted Data, Chapter 15, page 15-60.



The sixth Chinese nuclear test, their first Teller-Ulam (separately staged thermonuclear) design, with a fusion-boosted U-235 primary and a U-238 pusher around the fusion stage, yielded 3.3 Mt on 17 June 1967. It was dropped from a Hong-6 (a Chinese-manufactured Tu-16) and was parachute-retarded, with detonation at 2,960 metres altitude:



"Bomb away!

"The hydrogen bomb gently falls toward the ground. It will be exploding 2900 meters above ground level.

"9, 8, 7, 6, 5, 4, 3, 2, 1, detonate!

"June 17th 1967, at 8:20am, our nation's first hydrogen bomb achieved success!

"A brightness appears by the fireball. It is indeed the sun.

"From the first atomic explosion to the first thermonuclear explosion, it took USA 7 years 3 months, took the Soviet Union 4 years, took the United Kingdom 4 years 7 months. Our nation worked just over 2 years to achieve the momentus leap from atomic to hydrogen.

"We now know in 1952, USA exploded a 65 ton, 3 story high apparatus. When the Soviet Union air dropped its first hydrogen bomb in 1953, the explosive force was 400 kilotons. Our nation during this test used a small size, low weight, megaton level bomb to destroy a designated target. This proves once again the Chinese people can do what foreigners can do, and we can do it better!

"Looking towards the enormous mushroom cloud rising into the sky, Marshal Lie exclaimed, three million tons, enough, that's quite enough!"


Above: stills from the film, showing the expanding fireball of the 3.3 Mt Chinese air burst of 17 June 1967. The bomb vapour blobs from the casing and debris of the bomb itself initially overtake the slowly-expanding early X-ray sphere (which grows merely by the diffusion of soft X-rays, which travel only a small distance before being absorbed in cold air and re-radiated), and splash against the back of the compressed air shock wave forming in the fireball. This creates a very spectacular 'star filled universe' effect before the blobs disappear as the front of the air shock wave becomes the radiating surface and forms behind it an opaque shield of nitrogen dioxide, which absorbs light radiation coming from the interior of the fireball. (Brode discusses this effect in the Annual Review of Nuclear Science, vol. 18, 1968.)


Update:

It is of interest that the RAND Corporation site lists a 1958 paper co-authored by Nobel Laureate Murray Gell-Mann, who was a consultant to the RAND Corporation in the 1950s:

The Electromagnetic Signal from Nuclear Explosions at Sea Level. D(L)-8668, 1958, Christy, R. F., Murray Gell-Mann
