The effects of nuclear weapons. Credible nuclear deterrence, debunking "disarm or be annihilated". Realistic effects and credible nuclear weapon capabilities for deterring or stopping aggressive invasions and attacks which could escalate into major conventional or nuclear wars.

Tuesday, August 25, 2009

Our Nuclear Future: Facts, Dangers, and Opportunities, by Edward Teller and Albert L. Latter

Our Nuclear Future: Facts, Dangers, and Opportunities, by Edward Teller and Albert L. Latter (Criterion Books, New York, 1958), page 139:

"It is generally believed that the First World War was caused by an arms race. For some strange reason most people forget that the Second World War was brought about by a situation which could be called a race in disarmament. The peace-loving and powerful nations divested themselves of their military power. When the Nazi regime in Germany adopted a program of rapid preparation for war, the rest of the world was caught unawares. At first they did not want to accept the fact of this menace. When the danger was unmistakable, it was too late to avert a most cruel war, and almost too late to stop Hitler short of world conquest."

Above: the 9.3 megaton Hardtack-Poplar fireball in 1958. This photo has only recently been released with the name of the test. Maybe the proximity of the aircraft (which survived) creates the wrong (not so doomsday-like) impression?

Above: a different regime of nuclear effects phenomena. Colour photos are now available of the Teak fireball and the surrounding red shock-wave air glow. The bomb, 3.8 Mt (50% fission yield fraction), was detonated at 77 km altitude nearly over Johnston Island, and was photographed in 1958 from a mountain top on Maui, 794 nautical miles away. As we mentioned in a previous post, Teller and Latter related the case of the Plumbbob-John air burst of 18 July 1957, where five men stood at ground zero (directly below the rocket-carried bomb burst) without injury (although they were not looking directly at the fireball at zero time, or they would have received retinal burns). Teak was a similar case, proving that nuclear weapons can be used (for instance as high altitude bursts to destroy incoming missiles) without hazards, if they are designed to minimize prompt gamma ray output and thus EMP radiation (this can be done by using clean nuclear weapons with suitable tamper materials that minimize the high-energy secondary gamma ray yield when hit by neutrons).

Teller and Latter explain that radiological warfare is a benefit compared to the carnage of using conventional weapons

"The lifetime of the radioactive material may be long enough to give an opportunity to the people to escape from the contaminated area [longer half-lives mean that the chance of a radioactive atom decaying in any given second is lower, so the specific activity is lower: if you have N radioactive atoms with a half-life of T time units, the decay rate is simply (ln 2)N/T ≈ 0.693N/T atoms decaying per unit time, so the longer the half-life T, the lower the radioactivity that a given number of radioactive atoms produces; here 1/(ln 2) ≈ 1.44 is the factor by which you must multiply the half-life to get the statistical mean life, defined as the time to zero activity if the initial straight-line asymptote of the decay curve, i.e. exp(-At) ≈ 1 - At, were followed instead of the exponential curve, which is itself just a mathematical idealization since it can never reach zero, despite the quantum reality check that in the real world some day the final radioactive atom will decay, and zero activity will be attained after a finite time]. At the same time, one may precipitate almost all the activity near the explosion [using shallow underground detonations produced by earth penetrator warheads, like the 13.7 kt Redwing-Seminole shot, fired inside a water tank at Eniwetok Atoll in 1956 to simulate shallow burial] so that distant localities would not be seriously affected. It is conceivable, therefore, that radiological warfare could be used in a humane manner. By exploding a weapon of this kind near an island one might be able to force evacuation without loss of life. No instrument, not even a weapon, is evil in itself. Everything depends on the way in which it is used."

- Edward Teller and Albert L. Latter, Our Nuclear Future: Facts, Dangers, and Opportunities (Criterion Books, New York, 1958), p. 136.
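The bracketed decay-rate arithmetic can be checked in a few lines of code (a minimal sketch only; the atom counts are arbitrary illustrations, not figures from the book):

```python
import math

def activity(n_atoms, half_life):
    """Decay rate A = (ln 2) * N / T: for a fixed number of atoms N,
    a longer half-life T means a lower specific activity."""
    return math.log(2) * n_atoms / half_life

def mean_life(half_life):
    """Statistical mean life = T / ln 2, i.e. about 1.44 T: the time at
    which the initial straight-line tangent of the decay curve would
    reach zero activity."""
    return half_life / math.log(2)

# Example with arbitrary numbers: 1e20 atoms, half-lives of 1 vs. 100 days.
print(activity(1e20, 1.0) / activity(1e20, 100.0))  # the 100-fold shorter
# half-life gives a 100-fold higher activity for the same atom count
print(mean_life(1.0))  # mean life of a 1-day half-life nuclide, ~1.44 days
```

This is why slowly-decaying contamination (such as Sr-90 or Co-60) gives a far lower dose rate, per atom deposited, than fresh fission products.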

Above: the Redwing-Seminole 13.7 kt shot, fired inside a water tank at Eniwetok Atoll in 1956 to simulate shallow burial. The Wilson cloud shields much of the thermal radiation, while the enhanced cratering action deposits almost all of the radioactivity in the local fallout, seen here as the throwout from the crater. Teller and Latter explain how this kind of radiological warfare could force enemy forces to evacuate an island like Iwo Jima (which had to be shelled with conventional weapons and cleared with flame-throwers, killing 21,703 of the 22,786 Japanese soldiers and 6,825 Allied soldiers) before receiving a lethal radiation dose, without any of the immoral carnage of shelling or other gross effects of conventional weapons. Another moral use of nuclear weapons that circumvents the carnage of conventional warfare is air bursts at altitudes just above the maximum fireball radius, to clear the conventional weapons defending coastal areas and beaches prior to an invasion such as the D-day landings: neutron-induced activity covers only a small area, and the dose rates are relatively low once the aluminium-28 has decayed with its half-life of only 2.3 minutes.

Sr-90 exaggerations

Teller and Latter also explain how the threat from strontium-90 is grossly exaggerated. Sr-90 is more important than the equally long-lived Cs-137 because Cs-137, like potassium, resides in tissues whose cells are regularly renewed, so it is rapidly eliminated from the body, whereas a small fraction of Sr-90 ends up in the bones for life, creating a larger dose. (The I-131 problem and its countermeasures were discussed in detail in an earlier blog post.) Once the fallout comes down, there is a brief spell of danger while the fallout particles are physically present on the leaves and stems of crops, but they can be washed off, and wind and rain soon carry the fallout particles into the soil, where root uptake is important for the soluble component of the fallout activity. In coral or limestone based soil there is an abundance of calcium (coral is calcium carbonate), so the chemically similar strontium gets crowded out and diluted.

In most American soils, however, there is less calcium, so with an average natural strontium to calcium mass abundance of 1:100, there is only about 27 kg of soluble natural strontium per acre. Adult humans have a natural strontium to calcium mass ratio of just 1:1,400 and contain only 0.7 gram of natural strontium. Hence, strontium uptake via the food chain from soil to human beings is discriminated against (relative to calcium) by the factor 14. These figures allow the dilution of strontium-90 to be calculated. Each step of the food chain discriminates against strontium relative to calcium (see also pages 1521-9 of the U.S. Congressional Hearings The Nature of Radioactive Fallout and Its Effects on Man, May-June 1957, which states on page 1529: "100 metres [depth] of sea water has 370 grams of dissolved calcium per square foot compared to the average of 20 grams per square foot for the top 2.5 inches of soil which absorbs and holds the fallout radiostrontium"):

(1) Soil: 1 g of Sr for every 100 g of Ca (protection factor = 1)

(2) Plants: 1 g of Sr for every 140 g of Ca (protection factor = 140/100 = 1.4)

(3) Milk: 1 g of Sr for every 980 g of Ca (protection factor = 980/100 = 9.8 for root uptake of soluble Sr in soil by grass, or 980/140 = 7 for Sr ingestion by cattle from fresh fallout particles still adhering directly to the grass)

(4) Human: 1 g of Sr for every 1,400 g of Ca (protection factor of 1400/100 = 14 for fallout in the soil, or 1400/140 = 10 for fallout on plants which are ingested by cattle)
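The discrimination chain above reduces to simple ratios of the quoted Sr/Ca abundances; a short sketch (the step names are just labels for the four figures listed, nothing more):

```python
# Grams of calcium per gram of strontium at each step of the food
# chain, from the figures listed above.
ca_per_sr = {
    "soil":  100,
    "plant": 140,
    "milk":  980,
    "human": 1400,
}

def protection_factor(source, target):
    """Discrimination against strontium, relative to calcium, in going
    from one step of the food chain to a later one."""
    return ca_per_sr[target] / ca_per_sr[source]

print(protection_factor("soil", "human"))   # 14.0: soluble Sr-90 in the soil
print(protection_factor("plant", "human"))  # 10.0: fresh fallout on plants
print(protection_factor("plant", "milk"))   # 7.0: cattle eating contaminated grass
```

Each ratio is just the Ca:Sr abundance at the later step divided by that at the earlier step, so the overall soil-to-human factor of 14 is the product of the step-by-step factors.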

J. L. Kulp's report "Sr-90 in Man" published in Science, 8 February 1957, vol. 125, p. 219, showed that in 1955 the average diet for the human population of the United States contained 7 micro-microcuries of Sr-90 per gram of calcium. It also reported an average worldwide total body burden of 0.12 micro-microcuries per gram of skeletal calcium, and a concentration in young children 3-4 times higher (due to growing bones and thus greater calcium intake from drinking milk).

In 1996, half a century after the nuclear detonations, data on cancers among the Hiroshima and Nagasaki survivors were published by D. A. Pierce et al. of the Radiation Effects Research Foundation, RERF (Radiation Research vol. 146, pp. 1-27; Science vol. 272, pp. 632-3) for 86,572 survivors, 60% of whom had received bomb doses of over 5 mSv (500 millirem in old units). They suffered 4,741 cancers, of which only 420 were attributed to radiation: 85 leukemias and 335 solid cancers.

‘Today we have a population of 2,383 [radium dial painter] cases for whom we have reliable body content measurements. . . . All 64 bone sarcoma [cancer] cases occurred in the 264 cases with more than 10 Gy [1,000 rads], while no sarcomas appeared in the 2,119 radium cases with less than 10 Gy.’

- Dr Robert Rowland, Director of the Center for Human Radiobiology, Bone Sarcoma in Humans Induced by Radium: A Threshold Response?, Proceedings of the 27th Annual Meeting of the European Society for Radiation Biology, Radioprotection colloques, Vol. 32, C1 (1997), pp. 331-8.

Zbigniew Jaworowski, 'Radiation Risk and Ethics: Health Hazards, Prevention Costs, and Radiophobia', Physics Today, April 2000, pp. 89-90:

‘... it is important to note that, given the effects of a few seconds of irradiation at Hiroshima and Nagasaki in 1945, a threshold near 200 mSv may be expected for leukemia and some solid tumors. [Sources: UNSCEAR, Sources and Effects of Ionizing Radiation, New York, 1994; W. F. Heidenreich, et al., Radiat. Environ. Biophys., vol. 36 (1999), p. 205; and B. L. Cohen, Radiat. Res., vol. 149 (1998), p. 525.] For a protracted lifetime natural exposure, a threshold may be set at a level of several thousand millisieverts for malignancies, of 10 grays for radium-226 in bones, and probably about 1.5-2.0 Gy for lung cancer after x-ray and gamma irradiation. [Sources: G. Jaikrishan, et al., Radiation Research, vol. 152 (1999), p. S149 (for natural exposure); R. D. Evans, Health Physics, vol. 27 (1974), p. 497 (for radium-226); H. H. Rossi and M. Zaider, Radiat. Environ. Biophys., vol. 36 (1997), p. 85 (for radiogenic lung cancer).] The hormetic effects, such as a decreased cancer incidence at low doses and increased longevity, may be used as a guide for estimating practical thresholds and for setting standards. ...

‘Though about a hundred of the million daily spontaneous DNA damages per cell remain unrepaired or misrepaired, apoptosis, differentiation, necrosis, cell cycle regulation, intercellular interactions, and the immune system remove about 99% of the altered cells. [Source: R. D. Stewart, Radiation Research, vol. 152 (1999), p. 101.] ...

‘[Due to the Chernobyl nuclear accident in 1986] as of 1998 (according to UNSCEAR), a total of 1,791 thyroid cancers in children had been registered. About 93% of the youngsters have a prospect of full recovery. [Source: C. R. Moir and R. L. Telander, Seminars in Pediatric Surgery, vol. 3 (1994), p. 182.] ... The highest average thyroid doses in children (177 mGy) were accumulated in the Gomel region of Belarus. The highest incidence of thyroid cancer (17.9 cases per 100,000 children) occurred there in 1995, which means that the rate had increased by a factor of about 25 since 1987.

‘This rate increase was probably a result of improved screening [not radiation!]. Even then, the incidence rate for occult thyroid cancers was still a thousand times lower than it was for occult thyroid cancers in nonexposed populations (in the US, for example, the rate is 13,000 per 100,000 persons, and in Finland it is 35,600 per 100,000 persons). Thus, given the prospect of improved diagnostics, there is an enormous potential for detecting yet more [fictitious] "excess" thyroid cancers. In a study in the US that was performed during the period of active screening in 1974-79, it was determined that the incidence rate of malignant and other thyroid nodules was greater by 21-fold than it had been in the pre-1974 period. [Source: Z. Jaworowski, 21st Century Science and Technology, vol. 11 (1998), issue 1, p. 14.]’

W. L. Chen, Y. C. Luan, M. C. Shieh, S. T. Chen, H. T. Kung, K. L. Soong, Y. C. Yeh, T. S. Chou, S. H. Mong, J. T. Wu, C. P. Sun, W. P. Deng, M. F. Wu, and M. L. Shen, ‘Is Chronic Radiation an Effective Prophylaxis Against Cancer?’, published in the Journal of American Physicians and Surgeons, Vol. 9, No. 1, Spring 2004, page 6, available in PDF format here:

‘An extraordinary incident occurred 20 years ago in Taiwan. Recycled steel, accidentally contaminated with cobalt-60 ([low dose rate, gamma radiation emitter] half-life: 5.3 y), was formed into construction steel for more than 180 buildings, which 10,000 persons occupied for 9 to 20 years. They unknowingly received radiation doses that averaged 0.4 Sv, a collective dose of 4,000 person-Sv. Based on the observed seven cancer deaths, the cancer mortality rate for this population was assessed to be 3.5 per 100,000 person-years. Three children were born with congenital heart malformations, indicating a prevalence rate of 1.5 cases per 1,000 children under age 19.

‘The average spontaneous cancer death rate in the general population of Taiwan over these 20 years is 116 persons per 100,000 person-years. Based upon partial official statistics and hospital experience, the prevalence rate of congenital malformation is 23 cases per 1,000 children. Assuming the age and income distributions of these persons are the same as for the general population, it appears that significant beneficial health effects may be associated with this chronic radiation exposure. ...’

‘Professor Edward Lewis used data from four independent populations exposed to radiation to demonstrate that the incidence of leukemia was linearly related to the accumulated dose of radiation. ... Outspoken scientists, including Linus Pauling, used Lewis’s risk estimate to inform the public about the danger of nuclear fallout by estimating the number of leukemia deaths that would be caused by the test detonations. In May of 1957 Lewis’s analysis of the radiation-induced human leukemia data was published as a lead article in Science magazine. In June he presented it before the Joint Committee on Atomic Energy of the US Congress.’ – Abstract of thesis by Jennifer Caron, Edward Lewis and Radioactive Fallout: the Impact of Caltech Biologists Over Nuclear Weapons Testing in the 1950s and 60s, Caltech, January 2003.

Dr John F. Loutit of the Medical Research Council, Harwell, England, in his 1962 book Irradiation of Mice and Men (University of Chicago Press, Chicago and London), discredited the pseudo-science from geneticist Edward Lewis on pages 61 and 78-79:

‘... Mole [R. H. Mole, Brit. J. Radiol., vol. 32, p. 497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in unirradiated controls. …

‘What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 × 10^-6 like the others in the table. But the value of 2 × 10^-6 was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year. ...

‘All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis’s work as others have done, “It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation” [Leading article in Science, vol. 125, p. 963, 1957]. This is just wishful journalese.

‘The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different.’

Our Nuclear Future: Facts, Dangers, and Opportunities, by Edward Teller and Albert L. Latter (Criterion Books, New York, 1958):

Page 167:

'If we continue to consume [fossil] fuel at an increasing rate, however, it appears probable that the carbon dioxide content of the atmosphere will become high enough to raise the average temperature of the earth by a few degrees. If this were to happen, the ice caps would melt and the general level of the oceans would rise. Coastal cities like New York and Seattle might be inundated. Thus the industrial revolution using ordinary chemical fuel could be forced to end ... However, it might still be possible to use nuclear fuel.'

Page 147:

'All the energy in that Nevada explosion was not quite sufficient to evaporate the water droplets in a cloud one mile broad, one mile wide, and one mile deep. This is not a very big rain cloud. ... Nuclear explosions are violent enough. But compared to the forces of nature - compared even with the daily release of energy from not particularly stormy weather - all our bombs are puny.'

Above: Dr Zaius in Planet of the Apes simultaneously held religious and scientific positions, leading him to suppress scientific findings which contradicted the religious dogma. You know, like my suppression by Britain's Open University physics department chairman, Professor Russell Stannard, author of books like Science and the Renewal of Belief:
"offering fresh insight into original sin, the trials experienced by Galileo, the problem of pain, the possibility of miracles, the evidence for the resurrection, the credibility of incarnation, and the power of steadfast prayer. By introducing simple analogies, Stannard clears up misunderstandings that have muddied the connections between science and religion, and suggests contributions that the pursuit of physical science can make to theology",

arguing that science should be alloyed with dogma again as a "unification" of physics and religion, as it was in the time of Galileo.
Actually, this makes some sense when you recognise that Stannard takes "physics" to include religious belief in uncheckable pseudoscience: a landscape of 10^500 different universes to account for the vast number of possible particle physics theories which can be generated by the 100 or more moduli for the shape of the unobservably small compactification of 6 dimensions assumed to exist in the speculative Calabi-Yau manifold of string theory, as well as other rubbish like Aspect's alleged "experimental evidence" on entanglement via correlation of particle spins:

"In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment." -

"The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics." - Thomas Love, California State University.

As a physics student with a mechanism for gravity that correctly predicted the cosmological acceleration two years ahead of its discovery, I received no personal reply from Stannard, who just passed my paper to Dr Bob Lambourne; in 1996 Lambourne wrote to me that my prediction of quantum gravity and cosmological acceleration was unimportant because it lay outside the metaphysical, non-falsifiable domain of Professor Edward Witten's stringy speculations on 11-dimensional 'M-theory'. In 1986, Stannard was awarded the Templeton Project Trust Award for ‘significant contributions to the field of spiritual values; in particular for contributions to greater understanding of science and religion’. So who says the Planet of the Apes story is completely fictional, aside from a little hairiness?

Above: Nova (Linda Harrison) portrayed in 3978 AD, in the 1968 movie Planet of the Apes. A nuclear war destroys 'civilization' leaving beautiful dumb girls like Nova. However, the film is politically correct and adds mutant aggressive apes to earth's survivors to make sure that the nuclear war 'survivors will envy the dead' (as Nikita Khrushchev claimed, quoted in Pravda, 20 July 1963), just as politically correct dogma requires.

Above: another view; maybe the alleged evidence for health benefits like enhanced lifespan and lower cancer rates from low level residual radiation in Hiroshima and Nagasaki contribute to her very healthy appearance?

‘Planet of the Apes’ started out as a Pierre Boulle novel in which a couple discover a bottle containing the story of how humans become dictatorial, slovenly and lazy by using apes as slaves to do their work, until there is a rebellion and an ape revolution reverses the situation. Humans are too cowardly to fight back and submit to the chains of oppression. Apes become the masters of human slaves. The twist at the end of the novel occurs when Boulle reveals that the story in a bottle has not been found by humans but rather by a couple of apes (who have read it with astonishment and dismiss the story just as a silly hoax).

The film, however, is another story and is based on a film script by ‘Twilight Zone’ master Rod Serling and Michael Wilson, and in some ways is a reversal of the underlying politics of Boulle's book (producer Arthur P. Jacobs contacted Pierre Boulle and asked him to take a look at the script; Boulle responded on April 29, 1965 that "he truly did not like the Statue of Liberty ending, feeling that it cheapened the story as a whole, and served as the 'temptation from the Devil'...") Instead of the disaster coming through the pacifist humans refusing to fight against oppression, it instead occurs (in the film) as a result of humans fighting one another with nuclear weapons and destroying the cities of human civilization, giving the apes in jungles the opportunity to take over the planet. However, some parts of Pierre Boulle's original plot are resurrected in the sequels to the 1968 film, where the mechanism by which the apes take over the planet is the use of ape slaves who rebel.

The first film, in the script by Rod Serling, starts with three astronauts taking an 18-month (ship time) journey supposed to cover a distance of 320 light years in 2,000 earth years, at a velocity of 320/2000 = 0.16c. At 16% of light velocity, ship time passes at just (1 – 0.16²)^(1/2) ≈ 0.987 of the rate of earth time, so about 1,974 years of ship time would pass, not the 18 months claimed in the film. Deep sleep cubicles in the ship are used to keep the astronauts alive with the use of minimal resources during the journey. Serling changed the twist that Boulle used by having the ship hit an asteroid halfway into the trip, cracking the plastic cubicle of the female astronaut and causing her to prematurely age and die in her sleep. This causes the computer to automatically abort the mission and turn the ship back towards the earth, which in the screenplay by Serling is discovered when the computer tapes are read later (this episode was omitted from the film). The ship, returning to a grossly altered earth with no surviving runways, crash lands in a lake.
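The time-dilation check can be written out explicitly (a sketch of the standard special-relativity formula, using the film's stated figures):

```python
import math

def ship_years(distance_ly, earth_years):
    """Proper (ship) time for a journey of distance_ly light years
    taking earth_years in the earth frame, at constant velocity."""
    beta = distance_ly / earth_years          # v/c = 320/2000 = 0.16
    return earth_years * math.sqrt(1 - beta ** 2)

print(round(ship_years(320, 2000)))  # 1974 years of ship time, not 18 months
```

At only 0.16c the dilation factor is tiny, so the ship clock runs barely slower than earth clocks; an 18-month trip would instead require a velocity extremely close to c.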

The astronauts discover that on this planet the apes rule dumb, ignorant humans. In the final scene, the twist revealing that the planet is actually the earth (which should have been pretty obvious from the similar gravity, atmosphere, sun, moon, star positions in the sky, and so forth) is done by showing the Statue of Liberty half buried by beach sand. A nuclear war has apparently occurred during the 2,000 years that elapsed. The second film in the series, Beneath the Planet of the Apes, furthers this theme by having the surviving astronaut Taylor (Charlton Heston, appropriately nicknamed ‘Charlie Hero’ off-set by the Chimpanzee actor Roddy McDowall) and beautiful savage girl Nova discover an underground colony of surviving radiation-mutated humans worshipping a cobalt-cased ‘alpha-omega doomsday bomb’. Sublime political message: ‘the survivors in a nuclear war will have to live for thousands of years underground and will be mutants that envy the dead.’ Not exactly the truth about the harmlessness of slowly-decaying (i.e. low dose rate) cobalt fallout (which can simply be swept up and buried long before anyone gets a dangerous dose) compared to the survivable but more dangerous fast-decaying (i.e. high dose rate) fission products:

'Everybody's going to make it if there are enough shovels to go around...Dig a hole, cover it with a couple of doors and then throw three feet of dirt on top. It's the dirt that does it.'

- Thomas K. Jones, Deputy Under Secretary of Defense for Strategic and Theater Nuclear Forces, Research and Engineering, LA Times 16 January 1982.

The apes follow them underground and, after his girlfriend Nova is killed in the fighting, the bitter, love-cheated Charlie Hero decides to destroy the planet in anger, finally succeeding by falling on to the doomsday button which ends the story, just as in the earlier film of Pierre Boulle’s novel Bridge on the River Kwai, where the crazed hero falls on the detonator plunger when shot, blowing up the bridge. Fortunately the alpha-omega bomb - presumably because it's capable of destroying the whole planet - is the one bomb made which doesn’t have a permissive action link requiring authority codes and dual-key activation to arm, with the key holes too far apart for one person to turn both simultaneously. After all, you don't want to make such a dangerous bomb very hard to set off accidentally, do you, at least not if you're using it as the ending to a fine film?

This fictional tale, in lieu of the full facts on nuclear weapons effects, helped to cement the myth in popular culture that nuclear weapons are a danger to human civilization, rather than deterring world war.

Fraction of activity in local fallout

One of the interesting things about this 1958 book by Teller and Latter is that it gives details of how the atmospheric Nevada testing tried to minimise local fallout. E.g., on page 98, they claim that if the test is on a 'tower so tall that the fireball cannot touch the surface ... the amount of close-in fallout is reduced from eighty per cent to approximately five per cent.'

However, this figure is misleading! The actual percentage of the gamma activity in local fallout from 30 Nevada tower bursts at heights exceeding 100W^(1/3) feet, where W is the yield in kilotons (it did not decrease at greater scaled heights, owing to the contribution to local fallout from the condensed iron oxides produced by the fireball enveloping the tower material), was 20% of that of a surface burst, not 5%.

This 20% figure comes from Jack C. Greene, et al., Response to DCPA Questions on Fallout, Prepared by the Subcommittee on Fallout, Advisory Committee on Civil Defense of the U.S. National Academy of Sciences, U.S. Defense Civil Preparedness Agency, DCPA Research Report No. 20, November 1973. This report was written by a committee composed of top experts on fallout such as Dr Carl F. Miller who had collected the fallout at Castle and Plumbbob and developed the fallout model used by DCPA, and Dr R. Robert Rapp of RAND Corporation who had analyzed the effect of the toroidal distribution of activity in the mushroom clouds of Bravo and Zuni upon the fallout pattern.

The proportion of activity in local fallout depends on which nuclides you are considering, so it is a different number for gamma and beta activity, and for different times after burst. The percentage of unfractionated activities (like Zr-95) in local fallout is much larger than the percentage of the fractionated I-131, Cs-137, Sr-89 and Sr-90. Most of the fractionated nuclide decay chains have somewhat different volatilities, so they fractionate to different degrees. Therefore, there is no natural way to define the fraction of activity that comes down in local fallout. One artificial way is to take the gamma exposure rate, normalized to 1 hour after burst and integrated over the area of the local fallout pattern. This includes fractionation to the extent that it reduces the average gamma exposure rate at the reference time of one hour after burst.

On page 3 they note that the radiation level at a fixed time after burst from a unit mass of fallout per unit area increases as the particle size decreases, e.g. the radiation level for a given deposition density at a fixed time after burst actually increases as you move further downwind from ground zero:

‘This observation is consistent with the consensus that radiochemical fractionation causes this ratio to decrease with increasing particle size.’

In other words, the value of the ratio (R/hr at 1 hour)/(fission kiloton/square mile) is smaller for highly fractionated close-in fallout (which is depleted in volatile fission products) than it is for the unfractionated and enriched fallout deposited at great distances:

‘This problem has been customarily circumvented by using what amounts to an average of this ratio over the region of “local” fallout, where “local” was defined at the convenience of the author.’

They denote the average “local” fallout (R/hr at 1 hour)/(fission kiloton/square mile) ratio as K1, while the unfractionated fission product value is K0, so K1/K0 = fraction of activity in local fallout.

K1 is reduced by 25% owing to the instrument response to multidirectional gamma rays from fallout when the instrument is calibrated using point sources: the batteries partly shield the detector from gamma rays coming from certain directions, and the partial shielding of the instrument by the body of the person holding it is also important for fallout measurements. It is reduced by a further 25% owing to terrain shielding of direct gamma rays by fallout collecting in small hollows (microrelief) in the ground. Hence, the actual measured ratio is K2 = 0.75 × 0.75 K1 ≈ 0.56K1.

‘Local fallout’ has been defined in three different ways by different people, causing confusion over how to average K1. One way is to define local fallout as particles larger than a particular grain size, another is to define it as radiation levels greater than a particular dose rate at a given time after detonation, and a third is to define it as the fallout deposited within a certain period of time, such as 24 hours after detonation.

Page 4 states that the best surface burst data are for 0.5 kt Johnie Boy (K2 = 1170), 1.5 kt Buffalo-2 (980), 3.53 Mt Zuni (1150), 5.01 Mt Tewa (920), and 1.2 kt Sugar (1215), giving a mean of 1090 for K2 and 1930 for K1.

P. 8 states that the average K2 for 30 Nevada steel tower tests with scaled tower heights (scaled by the cube-root of yield to 1 kt) of 100 ft or more is 220 (R/hr at 1 hour)/(fission kiloton/square mile); because of the steel of the tower, the fallout did not diminish below this value at greater scaled heights. For 40 air bursts at similar scaled altitudes, the mean is K2 = 25 (R/hr at 1 hour)/(fission kiloton/square mile).

Hence, high tower shots produce 100*220/1090 = 20% of the local fallout gamma dose rates of surface bursts, while free air bursts at heights above the fireball radius produced only 100*25/1090 = 2.3% of the fallout of surface bursts.

The Trinity result of K2 = 690 for a steel tower burst at a scaled height of 37W^1/3 feet (W in kilotons) is 100*690/1090 = 63% of the fallout of a surface burst, and is equivalent to a 1 Mt detonation on a 30-storey steel framed building.
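The cube-root scaling and the percentage comparisons above can be sketched as follows (the function names are ours, and Trinity is taken as roughly a 100 ft steel tower shot of about 20 kt):

```python
def scaled_height(height_ft, yield_kt):
    """Burst height scaled to 1 kt by the cube-root of yield (feet)."""
    return height_ft / yield_kt ** (1.0 / 3.0)

def percent_of_surface_burst(k2, k2_surface=1090.0):
    """Local fallout gamma activity as a percentage of a true surface
    burst, using the mean surface burst K2 of about 1090."""
    return 100.0 * k2 / k2_surface

# Trinity: ~100 ft steel tower at ~20 kt, measured K2 = 690:
print(round(scaled_height(100.0, 20.0)))       # ~37 ft/kt^(1/3)
print(round(percent_of_surface_burst(690.0)))  # ~63% of a surface burst
```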

On p. 13, after investigating the local fallout fractions from Pacific surface bursts on coral islands, reefs and on the ocean water surface, they concluded that the type of surface did not have a substantial effect on the measured amount of local fallout produced by nuclear surface bursts.

On p. 17, after observing that iodine in fallout is highly fractionated, since it is volatile and condenses late in the fireball history on to the surfaces of the remaining small particles (i.e., it is depleted from the local close-in fallout), they explain that the Japanese fishermen exposed to Bravo fallout on 1 March 1954 just north of Rongelap Atoll were found to have 7 times as much external gamma radiation exposure as thyroid iodine exposure.

In the July 1962 104 kt Sedan test in Nevada, a man who was exposed in the open to the base surge without any protection received a thyroid gland dose only slightly higher than his external gamma exposure. Three air samplers determined that no more than 10% of the iodine in the Sedan fallout was present as a vapour during the cloud passage; i.e., 90% or more of the iodine was fixed in the silicate Sedan fallout and was unable to evaporate from the fallout particles to give a soluble vapour.

P. 19: ‘There is evidence that much if not all heavy fallout observed during atmospheric nuclear tests was visible as individual particles falling and striking objects, or as deposits ... the forehead will feel like sandpaper to the touch of the hand. The gritty sensation will also be felt on the hands and on bared arms. ... Probably you do not have a radiation-measuring instrument (if you do you can work outside until the instrument reads 0.5 R/hr), but heavy fallout can still be detected by one of these several clues: Seeing fallout particles, fine, soil-coloured, some fused, bouncing upon or hitting a solid object, particularly visible on shining surfaces such as the hood or top of a car or truck. ... Feeling particles striking the nose or forehead ... In the rain, after turning on the windshield wiper of your car, seeing fallout particles in raindrops slide downward on the glass and pile up at the edge of the wiper stroke, like dust or snow.’

P. 20: ‘Typical specific activities of fallout particles are 5 x 10^14 fissions/gram of fallout; thus for each R/hr at 1 hour exposure rate produced, 5 milligrams of particles would be deposited per sq ft of area.’ For a minimal sickness gamma dose of 150 R over a week outdoors, 50 R/hr at 1 hour would be needed, requiring 0.25 gram per square foot of fallout to be deposited at 1 hour, which is readily visible on surfaces.
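A quick sketch of the p. 20 arithmetic (the constant and function names are ours; the only input is the quoted 5 mg per square foot per R/hr at 1 hour):

```python
MG_PER_SQFT_PER_R_HR = 5.0  # 5 mg/sq ft per (R/hr at 1 hr), from p. 20

def deposit_g_per_sqft(exposure_rate_r_hr):
    """Fallout mass deposit (grams per square foot) implied by a given
    gamma exposure rate normalized to 1 hour after burst."""
    return exposure_rate_r_hr * MG_PER_SQFT_PER_R_HR / 1000.0

# 50 R/hr at 1 hour (roughly 150 R accumulated in a week outdoors):
print(deposit_g_per_sqft(50.0))  # 0.25 g/sq ft, readily visible
```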

P. 27: Dr Timothy Fohl and A. D. Ealay of Mt. Auburn Research Associates (MARA) used a buoyant vortex ring fireball model in their 1972 report Vortex Ring Model of Single and Multiple Cloud Rise, DNA-2945F, to simulate the effect of two simultaneous 13.5 Mt nuclear surface bursts. If the bursts are detonated within 5 fireball diameters of each other, they merge while rising into a single cloud which reaches only 66% of the altitude reached by an individual detonation.

Going back to the Teller and Latter book, their figure of 5% for high tower shots roughly applies to the fractionated I-131, Cs-137, Sr-90 and Sr-89 in local fallout, rather than to the mixture of unfractionated and fractionated activities which give rise to the total gamma radiation field from local fallout. On page 99 they state:

'In the case ... where the fireball almost touches the ground, the close-in fallout is also only about five percent [actually, as we saw above, for 40 free air bursts where the fireball did not touch the ground, it was only 2.3% of the fallout gamma activity of surface bursts]. This is a somewhat surprising fact since in this case photographs show large quantities of surface material being sucked up into the cloud, just as they are in a true surface explosion.

'This material certainly consists of large, heavy dirt particles which subsequently fall out of the cloud. Yet most of them somehow fail to come in contact with the radioactive fission products.

'This peculiar phenomenon can be understood by looking at the details of how the fireball rises. At first the central part of the fireball is much hotter than the outer part and thus it rises more rapidly. As it rises, however, it cools and falls back around the outer part, creating in this way a doughnut-shaped structure. The whole process is analogous to the formation of an ordinary smoke ring.

'In most of the photographs one sees, the doughnut is obscured by the cloud of water that forms, but sometimes when the weather is particularly dry, it becomes perfectly visible. During the rather orderly circulation of air through the hole, the bomb debris and the dirt that has been sucked up remain separated.'

Above: toroidal circulation in the 1953 Climax test: dust passes up through the middle of the toroid without mixing with the ring shaped fireball, then it cools as it hits cold air at the top, causing it to cascade back around the outside of the fireball. Result: harmless, non-radioactive fallout of dust which has never come into contact with the radioactive toroidal shaped fireball (a ring doughnut shape with a hollow in the middle).

Above: toroidal fireball in the 1953 Grable nuclear air burst.

Above: photos taken at 17, 27 and about 50 seconds after the French nuclear test Licorne (a 914 kt balloon suspended shot, at 500 m altitude on 3 July 1970). The fireball thermal radiation is initially shielded by the expanding Wilson condensation cloud, which forms in a humid atmosphere in the low pressure, cooling air of the negative pressure blast phase (some distance behind the ever expanding compressed shock front). Edward Teller and Albert Latter clearly describe the scientific phenomena of the white 'skirt' surrounding the mushroom stem for bursts in humid air, on page 84 of their 1958 book Our Nuclear Future:

'It is actually a cloud: a collection of droplets of water too small to turn into rain but big enough to reflect the white light of the sun. ... The white skirts (which are not always present) do not consist of any material that is falling out of the cloud. On the contrary, a moist layer of air is sucked up into the cloud from the side and the droplets which form in this layer give rise to a cloud-sheet with the appearance of a skirt.'

Above: the lethal global fallout fallacy started with the 1949 book by David Bradley, No Place to Hide, which grossly exaggerated the Crossroads-BAKER fallout.

The effects of small doses of plutonium were falsely claimed to be harmful using metaphysical linear extrapolation from high dose radium effects, in lieu of actual data for low doses. When eventually in the 1970s and 1980s the detailed dosimetry for thousands of early radium dial painters was done (by exhuming the corpses and actually measuring the radium in the bones), it was discovered that internal alpha radiation effects were a threshold effect requiring a minimum of 1,000 rads or 10 Gy, so the linear dose-effects theory was bunk:

‘Today we have a population of 2,383 [radium dial painter] cases for whom we have reliable body content measurements. . . . All 64 bone sarcoma [cancer] cases occurred in the 264 cases with more than 10 Gy [1,000 rads], while no sarcomas appeared in the 2,119 radium cases with less than 10 Gy.’

- Dr Robert Rowland, Director of the Center for Human Radiobiology, Bone Sarcoma in Humans Induced by Radium: A Threshold Response?, Proceedings of the 27th Annual Meeting, European Society for Radiation Biology, Radioprotection colloquies, Vol. 32CI (1997), pp. 331-8.

DCPA Attack Environment Manual -

Sunday, August 09, 2009

Blast Wave

Above: Figure 2-23 on p. 2-59 of Dolan's Capabilities of Nuclear Weapons, DNA-EM-1, 1972, showing the rapid decay of the peak overpressure with increasing distance from a 1 kt nuclear surface burst:

R (feet) - P (psi)

25 - 300,000
40 - 60,000
70 - 10,000
150 - 1,000
400 - 70
1,000 - 10
20,000 - 0.1

The curve, based on Brode's theoretical calculations with programs that include both hydrodynamic motion and radiation flow, can be represented by the simple equation:

P (psi) = (1.7 x 10^10 /R^3.4) + (7.0 x 10^6 /R^2) + (1,700 /R),

where R is distance in feet. The R^3.4 fall in pressure at the smallest distances differs from the simple theoretical R^3 prediction for the fall in overpressure due to dispersal of energy over the increasing mass of engulfed ambient air (this mass is proportional to R^3), because the shock front is losing energy by radiating thermal radiation at the highest overpressures, which causes an additional fall in peak overpressure with distance. Scaling to other explosion yields is done by multiplying the distances by the cube-root of the total kiloton yield.
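The fit and its cube-root yield scaling can be sketched as a short function (the function name is ours; the three-term coefficients are those quoted above):

```python
def overpressure_psi(r_ft, yield_kt=1.0):
    """Peak overpressure (psi) at distance r_ft feet from a surface burst
    of yield_kt kilotons, using the three-term 1 kt fit quoted above with
    cube-root yield scaling applied to the distance."""
    r = r_ft / yield_kt ** (1.0 / 3.0)  # equivalent 1 kt distance
    return 1.7e10 / r ** 3.4 + 7.0e6 / r ** 2 + 1700.0 / r

# Reproducing a point from Dolan's Figure 2-23 table above:
print(round(overpressure_psi(1000.0)))  # ~10 psi at 1,000 ft from 1 kt
```

For a 1 Mt surface burst, the same pressures simply occur at 10 times the distance, since 1000^(1/3) = 10.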

Dolan also gives a free air burst curve in Figure 2-2 on p. 2-7, which can be obtained by scaling the surface burst peak overpressure curve to a yield of about 0.565 kt, implying that surface bursts have an effective yield (due to reflection of blast wave energy into a hemispherical region) of 1.77 times the free air burst yield. Hence, the distance for any given pressure in a surface burst extends about 1.77^1/3 = 1.21 times as far as in a free air burst in sea level air. For a perfectly rigid surface, an effective yield increase factor of 2 would be expected since the same amount of blast energy for any radius would be concentrated in a hemisphere with only half the volume of the sphere for that distance. A reflection factor of 1.77 therefore implies that only 100(1 - 1.77/2) = 11.5% of the blast energy in a surface burst is permanently absorbed by the ground in the cratering, ground shock, and soil heating (fallout formation) processes. If the initial blast energy is 50% of the total yield in a free air burst, then in a surface burst it will be reduced to 44%. A discussion of blast theory and some test data is given in an earlier post linked here.


The history of the precursor is discussed in an earlier blog post about Glasstone and Dolan. The billowing of thermally-raised smoke and dust in the blast wave of the TRINITY test (100 feet over dark desert soil) in 1945 should have suggested a modification of the blast by dust loading of the air in that region, but the first film of the precursor shock wave was obtained on the DOG shot of TUMBLER-SNAPPER in Nevada in 1952. Dark coloured (brown) desert sand, consisting of crystals of silica, was exploded or 'popcorned' into hot dust by thermal radiation exposures of 11-19 cal/cm2 for yields of 35 kt to 1.4 Mt; a similar effect on lighter coloured (grey-white) coral sand required 15-27 cal/cm2. This formed a cloud of hot dust-laden air several metres high over the ground, which caused the blast wave to speed up and change in characteristics. The density of the dust added to the air increased the blast wind or dynamic pressure (which is directly proportional to the density), while the added momentum increased the duration of the blast winds, greatly increasing damage to structures and vehicles by the 'sandstorm effect' of the air-blasted dust cloud. The peak overpressure is somewhat reduced by the upward refraction of energy due to the temperature-height profile in the precursor region.

In 1953, the precursor effect was demonstrated by a comparison of damage from the ENCORE and GRABLE shots. The second test was at lower altitude so the thermal radiation was able to popcorn the desert effectively, creating far greater dynamic pressure effects than ENCORE at the same overpressures for drag effects on jeeps, trucks, and other dynamic-pressure sensitive targets. At subsequent tests in Nevada, selected areas around ground zero were flooded to form shallow lakes, while other areas were coated with asphalt, concrete, grass and other surfaces to investigate precursor development as a function of the reflective and physical nature of the surface. Precursors were noted at higher overpressures over coral sand, including surface bursts of over 30 kt yield (so that the fireball at thermal maximum is high enough to irradiate the ground with sufficient thermal energy to cause popcorning). Dolan's Capabilities of Nuclear Weapons, DNA-EM-1, 1972, p. 2-81, states that dust blast precursors will occur over dark city asphalt for burst altitudes below 800W^1/3 feet, for W kilotons total yield, and for bursts over dark desert sand precursors will occur for burst altitudes below 650W^1/3 feet. These formulae are valid for yields of 1-50 kt where observations are available (for other yields consideration must be given as to whether there is sufficient thermal exposure in the time before blast arrival for a dust layer to be produced).
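The DNA-EM-1 burst-height rules quoted above reduce to a one-line test (a sketch under the stated 1-50 kt validity limit; the function name and the ValueError handling are ours):

```python
def precursor_expected(burst_height_ft, yield_kt, surface="desert"):
    """True if a dust precursor is expected per DNA-EM-1 (1972, p. 2-81):
    burst heights below 800*W^(1/3) ft over dark city asphalt, or below
    650*W^(1/3) ft over dark desert sand, for W kilotons total yield."""
    if not 1.0 <= yield_kt <= 50.0:
        raise ValueError("rule validated only for 1-50 kt yields")
    coeff = {"asphalt": 800.0, "desert": 650.0}[surface]
    return burst_height_ft < coeff * yield_kt ** (1.0 / 3.0)

# A 15 kt burst at 1,000 ft over desert (threshold ~1,600 ft):
print(precursor_expected(1000.0, 15.0))  # True
```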

Above: some typical qualitative precursor blast waveforms for overpressure and dynamic pressure, taken from Dolan's DNA-EM-1, 1972, which on pages 2-81 to 2-89 includes a detailed predictive system to indicate the shape of the precursor waveforms as a function of yield, height of burst and distance from ground zero. This was later developed into a quantitative precursor waveform prediction system in the late 1990s. At very high overpressures, the blast arrival is so soon after the detonation that very little of the thermal radiation has been emitted by the fireball, so there has been little development of a precursor in the available time. Therefore, the precursor develops gradually as the shock travels outward into areas which have been irradiated for longer times after burst, where enough thermal radiation has been emitted to cause a hot dust layer ahead of the shock wave. At long distances, the blast wave runs out of the dust layer because it encounters a region where the thermal radiation exposure has simply not been strong enough to 'popcorn' the sand or to 'smoke' the asphalt or grass. When this happens, the precursor encounters cooler air which makes it slow down, allowing the main blast wave (still travelling through air warmed by the precursor) to catch up and merge with the precursor, forming an ideal shaped blast wave once again.

Friday, August 07, 2009

Thermal radiation pulse shape, thermal yield and transmission, and Russian nuclear weapons test effects on animals

(For a full discussion of these updates to EM-1, see the updated earlier post linked here.)

Above: Fig. 12 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). According to page 39 of the report, these are the air burst thermal yields radiated up to a time of 10 times the time of the final thermal maximum (10t(2nd max.)) as a function of weapon yield and burst altitude: “A general downward trend is noted with increasing yield.” This reduction of thermal yield fraction with increasing total weapon yield is the opposite of Harold L. Brode’s theoretical emission equation in his 1968 Annual Review of Nuclear Science article “Review of Nuclear Weapons Effects”. Brode’s incorrect conclusion (in simple radiative cooling models that ignore convective cooling and the engulfment of cold air) that the thermal yield fraction increases with increasing total yield, seemed to be justified by the simple fact that the concentration of nitrogen dioxide in the shock front that shields thermal emission from the hot fireball is dependent on overpressure.

This suggests that the distance for any given amount of nitrogen dioxide shielding should scale as the cube-root of the total yield, whereas the fireball radius at final thermal maximum scales as the two-fifths power of the total yield.

Consequently, as the total yield increases, there should be a reduction in the relative shielding by the nitrogen dioxide, which extends to an ever smaller fraction of the fireball radius at second maximum. Since the fireball radius at second thermal maximum grows relative to the nitrogen dioxide shielding layer radius as the total yield increases, the thermal yield fraction emitted up to that time should increase with total yield, because of the reduced shielding by nitrogen dioxide at higher yields.

However, this argument is only applicable during the period that nitrogen dioxide shock wave shielding of fireball core emission is important, i.e. only up to the final thermal maximum power, by which time about 20-30% of the thermal radiation is emitted. Since 70-80% of the thermal radiation is emitted after the time of the final thermal maximum power, the nitrogen dioxide shielding effect is not important in the late stages. Brode’s 1960s calculations of thermal radiation emission from the fireball omitted the effect of fireball cooling by engulfing cold air (in an air burst) and soil (in a surface burst) from the environment. Due to the inertia of air, these convection cooling effects take time to come into play and so are relatively more important in the case of megaton yields (which emit significant thermal radiation over a long period of many seconds) than kiloton yields, where most of the thermal radiation is radiated within a second, before efficient convection cooling starts. Hence, for higher yield nuclear weapons, convection cooling by the entrainment of cold air and (in the case of a surface burst) soil, quickly cools the fireball after the time of thermal maximum and reduces the fraction of the total yield emitted as thermal radiation in an air burst. The 10t(2nd max.) thermal yield fraction for a sea-level air density free air burst falls from 35.0% at 1 kt to 34.1% at 10 kt, 33.0% at 100 kt, 29.1% at 1 Mt and to 25.4% at 10 Mt (source: DNA-TR-84-388, AD-A176959, 1984, Table 6, page 42).

In a surface burst, the thermal yield trend as a function of total yield is the opposite to that in a free air burst, because the crater ejecta throw-out shields thermal radiation emission from the fireball more effectively at low yields than at high yields. The radius for any given degree of thermal radiation shielding by crater ejecta scales as the cube-root of yield at sub-kiloton total yields and typically as the quarter-power of total yield for the megaton yield range; thus it is always scaling as a weaker function of total yield than the fireball radius at final thermal maximum, which scales as the two-fifths power of yield. Hence, more of the fireball thermal radiation gets shielded by crater ejecta throw-out in low yield surface bursts than in high yield surface bursts. This makes the thermal yield fraction in a surface burst increase from 4.5% at 1 kt to 6.6% at 10 kt, 13% at 100 kt, 16% at 1 Mt, and 17% at 10 Mt (source: DNA-TR-84-388, AD-A176959, 1984, Table 6, page 42).
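The Table 6 values quoted above can be interpolated for intermediate yields; a minimal sketch, assuming interpolation linear in the logarithm of yield between the tabulated points (the dictionaries and function name are ours):

```python
import math

# Thermal yield fractions (%) up to 10 times the final thermal maximum,
# from DNA-TR-84-388 Table 6 as quoted above; keys are yields in kt.
AIR_BURST = {1: 35.0, 10: 34.1, 100: 33.0, 1000: 29.1, 10000: 25.4}
SURFACE_BURST = {1: 4.5, 10: 6.6, 100: 13.0, 1000: 16.0, 10000: 17.0}

def thermal_fraction(yield_kt, table):
    """Thermal yield fraction (%), interpolated linearly in log10(yield)
    between the tabulated points."""
    pts = sorted(table.items())
    for (w0, f0), (w1, f1) in zip(pts, pts[1:]):
        if w0 <= yield_kt <= w1:
            t = math.log10(yield_kt / w0) / math.log10(w1 / w0)
            return f0 + t * (f1 - f0)
    raise ValueError("yield outside the tabulated 1 kt - 10 Mt range")

print(round(thermal_fraction(1000, AIR_BURST), 1))      # 29.1 for 1 Mt air burst
print(round(thermal_fraction(1000, SURFACE_BURST), 1))  # 16.0 for 1 Mt surface burst
```

Note the opposite trends: the air burst fraction falls with yield (convective cooling), while the surface burst fraction rises (relatively less crater ejecta shielding).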

Above: Fig. 13 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). According to page 39 of the report, these are the air burst thermal yields radiated up to a time of 10 times the time of the final thermal maximum (10t(2nd max.)) as a function of weapon yield and burst altitude: “A general downward trend is noted with increasing yield.”

Above: Table 6 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). According to page 39 of the report, these are the thermal yields radiated up to a time of 10 times the time of the final thermal maximum (10t(2nd max.)). Notice that, as we have explained physically, the sea-level air burst thermal yield fraction decreases with increasing total yield because more and more of the cooling is done by convection mixing processes rather than by radiation in the longer thermal pulse of higher yields, while in a surface burst the thermal yield fraction increases with increasing total yield, because the crater ejecta throw-out radii which absorb much thermal radiation in a surface burst scale less rapidly (i.e., as the cube or fourth root) with total yield than does the fireball radius at final thermal peak power (i.e., the two-fifths power of yield).

Above: Fig. 14 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). This diagram shows the effect of burst altitude from sea level to 30 km upon the thermal pulse curve shape for a 100 kt air burst. The report notes that a surface burst thermal power curve is not identical to a sea level air burst, but on account of the extra opacity of the fireball due to the earth incorporated from the crater process, the surface burst thermal curve has a much smaller final thermal maximum radiating power. The surface burst fireball also takes a slightly longer time to reach the final peak thermal emission, than an equivalent yield sea level air density air burst.

Above: Fig. 15 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). This diagram is a linear version of the logarithmic plots in Fig. 14, showing how the shape of the standard thermal pulse curve depends on burst altitude for air bursts of 100 kt total yield.

Above: Fig. 21 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). This diagram shows how the different wavelengths in the thermal radiation spectrum, from 0.32 micron ultraviolet to 1.87 micron infrared, are transmitted through a standard Nevada desert atmosphere for an air burst 1 km above ground (or a megaton range surface burst fireball with a radius of over 2 km, so that the mean height of the radiating surface is 1 km above ground), with ground level taken as 1.28 km above sea level. The data come from Kaman Science Corporation’s TRAX Monte Carlo simulation code for atmospheric transmission. Notice that the 0.32 micron curve for ultraviolet shows rapid attenuation due to absorption by natural ozone in the atmosphere, and the 1.87 micron infrared curve shows absorption by water vapour and carbon dioxide; but the shapes of the transmission curves for ultraviolet and infrared are totally different (each departs from a straight-line exponential attenuation law by curving in a different direction, so that the average would be close to a straight line and thus a simple exponential attenuation law). Because the transmission fraction is plotted logarithmically while distance is plotted linearly, a straight line on this graph represents exponential attenuation and a curve represents a departure from exponential attenuation. The data for the wavelengths between the extremes, i.e. 0.55, 0.94, and 1.23 microns, all show much less attenuation as they are closer to (or within) the visible radiation band. The 0.55-micron curve shows a transmission of 70% to a horizontal range of 30 km.
If this is treated as an exponential attenuation with the typical Nevada desert visibility range of 80 km, then the Nevada nuclear test thermal radiation transmission, T = e^(-R/V) = e^(-30/80) = 0.69, is similar to the 0.55-micron wavelength transmission predictions. However, this simplified approach (used in the 1960s by Gibbons) is not fully justified, because it does not properly take account of the effect of the water vapour in air near sea level on the infrared radiation transmission.
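The simple Gibbons-style visibility approximation mentioned above is a one-liner (the function name is ours; it is a sketch, not a substitute for the TRAX spectral treatment):

```python
import math

def transmission(range_km, visibility_km):
    """Exponential attenuation approximation T = exp(-R/V), with V the
    atmospheric visibility range; ignores spectral water vapour effects."""
    return math.exp(-range_km / visibility_km)

# 30 km range in 80 km Nevada desert visibility:
print(round(transmission(30.0, 80.0), 2))  # 0.69, close to TRAX's 70%
```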

Above: Fig. 39 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). Transmission for a surface burst and a 1 km altitude air burst (or a high yield surface burst where the mean height of the hemispherical fireball radiating surface is 1 km high) for sandy soil ground, 300 m base altitude cloud cover, and 25 km atmospheric visibility (1.5 g/m3 of sea level water vapour concentration). This report proves that the effect of the fireball radiating temperature on changing the source spectra of the thermal radiation as a function of weapon yield and for ground interaction is negligible in comparison to the effect of the height of the fireball. The thermal transmission as a function of distance is similar for different yields if the effective fireball height above the ground is the same. It is also similar for a surface burst and a sea level air burst (although obviously the thermal yield will be different in each case) if the mean height of the fireball is the same. However, varying the height of the centre of the radiating surface of the fireball causes a large change in the thermal transmission curve, mainly as a result of the variation in the water vapour content of the air as a function of height. The cooler air at higher altitudes contains less water vapour and therefore allows more transmission of infrared radiation than sea level air.

Above: Fig. 40 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). Thermal transmission from a 100 kt surface burst with the fireball at ground level for a hypothetical dark, zero albedo ground, i.e. a totally radiation absorbing, non-reflective ground which does not reflect any of the thermal radiation, for no cloud cover (curve 1) and cloud cover with its base at altitudes of 300 m (curve 2), 1,500 m (curve 3) and 3,000 m (curve 4), with in each case 25 km atmospheric visibility (1.5 g/m3 of sea level water vapour concentration). Curve 1 therefore presents the case where the transmission is purely a function of the air characteristics, without any ground or cloud reflection effects.

Above: Fig. 41 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). Thermal transmission from a 100 kt surface burst with the fireball at ground level for a sandy soil, for no cloud cover (curve 1) and cloud cover with its base at altitudes of 300 m (curve 2), 1,500 m (curve 3) and 3,000 m (curve 4), with in each case 25 km atmospheric visibility (1.5 g/m3 of sea level water vapour concentration). Curve 1 therefore presents the case where the transmission is purely a function of the air characteristics and ground reflection, with no cloud reflection effects. We have added curve 5 (which is curve 1 from Fig. 40 already given, for a non-reflecting ground and no cloud cover) to show the small effect of the ground reflection on transmission. It is clear that when both ground reflection and cloud reflection occur, the surfaces act like a waveguide for thermal radiation energy, whose transmission is enhanced by “channelling” of thermal energy.

Above: Fig. 43 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). Thermal transmission for a sandy soil, for no cloud cover (curve 1) and cloud cover with its base at altitudes of 300 m (curve 2), 1,500 m (curve 3) and 3,000 m (curve 4), with in each case 25 km atmospheric visibility (1.5 g/m3 of sea level water vapour concentration). High level clouds above the nuclear explosion can enhance thermal transmission, by reflecting back to the ground some of the thermal radiation that would otherwise be lost to space. But when a nuclear explosion occurs in or above a cloud layer, or above a smoke screen, the opposite effect occurs and the thermal radiation is shielded and attenuated to a considerable extent before reaching a target. During Pacific nuclear tests of air and high altitude bursts in 1958 and 1962, cloud cover over ground zero was either required as a condition for firing, or alternatively was provided artificially by smoke screen generators, in order to prevent any risk of injury to the dark coloured terns. Similarly, for the very high altitude tests in 1962 where the fireballs would be above the horizon as viewed from the Hawaiian islands 1,300 km away, firing was only authorized when there was low-level local cloud cover over the Hawaiian islands to protect the public from any risk of retinal injury. Nevada tests in 1955 over smoke screens demonstrated the value of smoke clouds in attenuating thermal radiation from nuclear weapons. The 110 kt 1954 CASTLE-KOON test at Bikini Atoll was detonated in a rainstorm with very low visibility, and thermal radiation effects were undetectable at the measuring stations.

Above: Fig. 44 from John R. Keith and Anthony F. Portare, An Analysis of Army Thermal Transmissivity Calculations, Kaman Sciences Corp., Arlington, VA., report DNA-TR-84-388, AD-A176959 (1984). Thermal transmission in 6.5 km atmospheric visibility (10 g/m3 of sea level water vapour concentration) for a 10 kt surface burst with 300 m base cloud cover and three different ground surface reflections: zero reflection, dirt (sandy soil), and snow. Comparisons of these curves to those of the previous figure prove that the ground reflection characteristics are much less important in determining thermal radiation transmission than the atmospheric visibility, the fireball altitude, and the cloud cover situation. The curve for dirt (sandy soil) with 6.5 km visibility due to 10 g/m3 of sea level water vapour concentration in the air and cloud cover with its base at 300 m represents the mean transmission to be expected for thermal radiation in the U.K. and other areas of Northwest Europe, as shown by statistical data in DNA-TR-84-388, AD-A176959 (1984).

Above: Ernest Bauer's August 1990 Institute for Defense Analyses report, Physics of High-Temperature Air. Part. 2. Applications, ADA229778, contains a useful section summarising a little of the available nuclear testing data on the mass of fallout as a function of burst altitude for surface bursts, free air bursts, and tower burst nuclear weapons tests, as well as the family of computed curves above showing the transition from a single thermal pulse for a 1 Mt air burst at 50 km altitude to a double-pulse for a 1 Mt sea level air burst. The main reason for the transition is the weakening of the shock wave due to the lower air density at higher altitudes: the lower air density at high altitudes simply allows the X-rays (which comprise 75% of the primary energy emission from a typical 1 ton mass, 1 megaton yield detonation) to travel much larger distances before being absorbed by the air.

This means that the same amount of energy is spread over a larger volume of air in a high altitude burst, so the energy density (energy per unit volume) in the fireball is lower than in the tiny initial X-ray fireball at sea level; this lower energy density produces a smaller temperature rise, and thus a weaker blast wave. The weaker blast wave at high altitudes is unable to compress air to a high enough density to form the concentrations of nitrogen dioxide that shield thermal radiation after shock formation in a sea level detonation. In a sea level detonation, the nitrogen dioxide formed in the compressed hot air of the shock wave absorbs the thermal radiation from the fireball core, causing the minimum and thus the two pulses; in a high altitude burst the shock wave is not strong enough to produce nitrogen dioxide, so the thermal minimum gradually disappears as the burst height is increased, merging the two pulses into a single pulse for a 1 Mt detonation at 50 km altitude.

Another interesting report now online (23 MB PDF) is Dan H. Holland, et al., Physics of High-Altitude Nuclear Burst Effects, Mission Research Corp., Santa Barbara, CA., ADA068541, December 1977:

'This compendium presents a reasonably thorough summary of the physics and chemistry that is particularly relevant to the prediction of effects of high-altitude nuclear bursts on radar, optical, infrared, and communication systems. The various chapters have been written by experts on the particular subjects. Most of the presentations are on a fairly advanced level, but a serious attempt has been made to keep in mind the special needs of new workers in this field. It is assumed that the reader has a thorough general background in physics.'

A 57 MB March 2008 report by V. A. Logachev and L. A. Mikhalikhina, Animal Effects from Soviet Atmospheric Nuclear Tests, ITT Corp., Alexandria, VA., report ADA485845, is available online in PDF download format. There is another version available in more compressed PDF format here. The Soviet Union exposed 8,000 animals (sheep, horses, cattle, camels, etc.) to nuclear explosions in various structures and vehicles, and in open and shadowed positions, in order to assess the effects in different situations and from different combinations of the various effects of nuclear detonations. Instead of simply giving the straightforward data on effects from specific nuclear tests, the data is presented only as processed output, combined into three categories of yield range. However, it is still an important report. Table 11 on page 27 gives the following comparison of thermal energies needed for burns to bare skin, under light summer clothing, and under heavy winter clothing (as with most Russian nuclear weapons research, they seem to make every effort to cause confusion and ambiguity in the simplest presentations; here it is not stated which yields the data for burns under clothing apply to):

Above: American data for thermal energy needed for burns under clothing, from page 6.2b of the 1960 (change 2 pages revision) Capabilities of Atomic Weapons, TM 23-200, Confidential. It is interesting to compare this data to the Russian test results.

Update: some new thermal data from nuclear weapons tests is available:

Dr Abraham Broido, et al., “Operation Tumbler-Snapper, Project 8.3, Thermal Radiation from a Nuclear Detonation”, USNRDL, weapon test report WT-543, Secret – Security Information, March 1953, p. 3: “The data reported here indicate a decrease in thermal efficiency with increasing weapon yield, ranging from about 44 per cent at 1 kt to about 34 per cent at 30 kt.”

Teapot basic thermal measurements report (linked here).

Upshot Knothole report on smoke screen protection against thermal radiation (linked here).

Operation Redwing vital basic thermal measurements (report linked here).

Sunday, August 02, 2009

Glasstone and Dolan nuclear crater sizes exaggeration

ABOVE: The 1956 Australian-British Maralinga cratering nuclear surface burst, Buffalo-2, prepared for firing at the Marcoo site.  The middle of the weapon is carefully aligned with the height of the ground surface.  Notice, however, that the weapon, despite having a low yield of just 1.4 kt, is a massive implosion device with a diameter of 5 feet.  The heavy implosion system and carrying cradle added mass to the "case shock" of the weapon, increasing the energy carried by that dense case shock at the expense of the percentage of energy released from the weapon as X-ray emission; this altered partition of energy between bomb case kinetic energy and X-ray fireball energy dramatically increases the cratering and ground shock effects.

X-rays dispersed outside of the bomb have only a trivial effect on the ground, ablating a thin surface layer of the ground and heating up the air, contributing mostly to air blast, not direct ground shock or cratering (it does produce the "air slap" ground shock, but this is rapidly dissipated with depth into the ground).  The downward portion of the dense case shock, on the other hand, embeds itself deeply into the ground, and is the major source of cratering and direct ground shock, coupling about half of the case shock kinetic energy into the surface and producing essentially all of the cratering and close-in ground shock energy.

Air-slap from the blast produces only a trivial effect on the crater and ground shock because of its relatively low density (the transfer of energy from an air shock wave into the ground is trivial because of the mismatch of acoustic impedance between the two media, the ground being much denser than the air shock wave even at the greatest overpressures near ground zero). Heavy weapons with a relatively small yield-to-mass ratio are thus far more effective at cratering than modern lightweight designs.  (This bomb design effect on cratering was proved by H. L. Brode and R. L. Bjork in their 1960 RAND Corporation report RM-2600, Cratering from a Megaton Surface Burst, but was never even mentioned in Glasstone's Effects of Nuclear Weapons.)

Above: Google satellite photograph of Runit Island in Eniwetok Atoll, showing two nuclear weapon test craters. The 105 m diameter, 11 m deep crater from the 18 kt Hardtack-Cactus 6 May 1958 surface burst nuclear test crater on Runit Island was used as a convenient nuclear waste dump during the decontamination of the Atoll in 1979, and was topped with a concrete dome, which is visible in the Google satellite photograph. The 120 m diameter, 17 m deep water-filled crater in the reef seen in the photo above, just to the North-East of Runit Island, was formed by the 40 kt Redwing-Lacrosse nuclear test 17 feet over the reef on 5 May 1956.

The 15 megaton Castle-Bravo test of 1 March 1954 and a later smaller test produced the two large overlapping craters shown below in the reef near Namu Island to the North-West of Bikini Atoll:

Above: the world’s first nuclear explosion-created freshwater lake, Lake Chagan. It was produced on 15 January 1965 at the edge of the Semipalatinsk Test Site in Kazakhstan using a 140 kt (96% fusion, 4% fission) thermonuclear weapon with a 6 kt fission primary stage, detonated 178 m underground in saturated siltstone (12% water). About 80% of the radioactivity was trapped underground and only 20% escaped into the atmosphere. The crater is 408 m in diameter and 100 m deep. The dose rate on the crater lip at 30 years after detonation was reported as 2.6 mR/hr, i.e. about 260 times the Earth’s average natural background radiation level of 0.010 mR/hr, with the lake water in the crater containing just 300 pCi/litre. On 10 October 1965, the Soviets detonated a 1.1 kt nuclear bomb at 48 m depth in weak siltstone rock under the dry clay bed of the Sary-Uzen stream. The crater produced was initially 107 m in diameter and 31 m deep, but when flooded it slumped to 20 m depth and 124 m diameter. Some 96.5% of the fission products were trapped underground, and the crater lip had a dose rate of only about 2.5 R/hr at 5 days after detonation, decaying to 0.050 mR/hr (including natural background) 30 years later. (Data source: Milo D. Nordyke, The Soviet Program for Peaceful Uses of Nuclear Explosions, Lawrence Livermore National Lab., UCRL-ID-124410, July 1996, pp. 13-15.)

There are three reports now available online which throw light on the replacement to the Glasstone and Dolan Effects of Nuclear Weapons and its secret supplement Capabilities of Nuclear Weapons, Effects Manual EM-1:

1. Kenneth E. Gould, A Guide to Nuclear Weapons Phenomena and Effects Literature, Kaman Tempo, Santa Barbara, CA., Technical Report ADB094426, DASIAC Special Report DASIAC-SR-206, 31 October 1984 which usefully states on page 5:

'Capabilities of Nuclear Weapons, DNA EM-1 (Reference C-2), is the best single comprehensive reference on all aspects of nuclear weapon phenomena and effects. This classified two-volume set both complements and supplements The Effects of Nuclear Weapons. Volume 1 focuses on nuclear weapon phenomenology and Volume 2 covers nuclear weapon effects that are primarily of military interest. This major DNA handbook, often referred to by its report number "EM-1," was last published in 1972 (with minor revisions through 1981), but it is presently being completely revised. When updated, EM-1 will again serve its important role as a basic source document for the preparation of nuclear operational and employment manuals by the military services.'

2. John R. Murphy, et al., Nuclear Effects Data Management and Analysis System (NEDMAS), DSWA-TR-96-94, Defense Special Weapons Agency, 1997, which gives some details of the new computer database for the effects of tests, and

3. Ernest Bauer, Variabilities in the Natural and Nuclear Endoatmospheric Environment, Institute for Defense Analyses, Virginia, IDA Document D-1085, April 1992.

Appendix A of this third report consists of a document by A. A. Fredrickson called Revision of DNA Nuclear Crater Specifications, taken from the September 1991 issue of Nuclear Survivability:

DNA [Defense Nuclear Agency, which has since evolved into the DTRA] has recently completed an "end-to-end" cratering validation program that resulted in dramatic reduction of the crater size thought to result from the surface detonation of modern strategic weapons. Although a major field exploration and several underground nuclear tests conducted in this program occupied the spotlight, numerical simulations were in many ways more central to DNA's success. This article recounts the integrated role of the numerical simulations, re-interpretation of existing nuclear data, and additional field events in the evolution of DNA's view on nuclear cratering.

DNA developed a crater specification methodology for its 1972 Capabilities of Nuclear Weapons - Effects Manual Number 1 (EM-1) with the acknowledgment that the nuclear database was incomplete and probably inappropriate for application to strategic yield surface burst weapons. The cratering events conducted at the Nevada Test Site (NTS) employed low yield sources suspected to produce larger craters than modern weapons of strategic interest. Data from the several high yield cratering events conducted at the Pacific Proving Grounds (PPG) were considered flawed by the atoll reef geology that was highly dissimilar to sites of interest. The 1972 EM-1 methodology was an attempt to reconcile these shortcomings.

The strategic source surface burst crater specifications were based on high yield PPG data, calibrated to sites of interest by comparison of low yield nuclear and high explosive craters in various geologies. Figure A.1 depicts 1 Megaton crater profiles for two geology types as specified in 1972 EM-1. ...

The numerical simulations indicated that strategic yield sources would produce craters one-third to one-fifth the scaled size produced in these events, due to the inefficiency of the X-ray coupling process relative to hydrodynamic coupling. ...

Today, DNA relies on numerical cratering and ground shock simulations as key integral parts of its experimental program. They are the basis for cratering specifications for near-surface bursts in EM-1, 1991. Figure A.5 compares 1991 EM-1 craters on two geology types to the profiles perceived in 1972. This dramatic shift in perception is based on the compelling evidence obtained in the highly successful field program discussed in this article. The current DNA reliance on numerical simulations is a result of the recognition that they provided the motivation for this program, enabled the success of the field activities, and today provide the means to apply this test experience to specific strategic weapon and geology combinations of interest. ...

In the earlier blog post (recently updated) on Glasstone and Dolan, I pointed out that, along with most effects, the crater predictions given by Glasstone and Dolan are massive exaggerations for high yields. There is a transition from cube-root scaling of crater dimensions at yields up to a few kilotons, to fourth-root gravity scaling in the megaton range. The cube-root scaling law occurs because the energy needed to heat, shock and explosively disrupt air or soil is directly proportional to the mass of that air or soil. Thus, the volume of ambient air or soil subjected to a particular shock overpressure scales in direct proportion to the energy of the explosion; which means that the radius of that volume scales in proportion to the cube-root of the explosion energy.

For high yield surface bursts, however, there is another vitally important use of energy in excavating a big crater: the energy needed to do work against gravity in raising the mass of dirt from the hole and dumping it outside to form the 'lip' and 'ejecta' region around the crater (afterwind-lofted fallout dust is merely ~1% of the crater mass). This energy is simply E = mgh, where m is the cratered mass raised an average height h against gravitational acceleration g. For high yields, h and especially m both become very large, so the gravitational work energy needed to form a massive crater is immense and can exceed that of the explosive break-up of the soil. (For low yields, the gravitational work energy is trivial compared to that used to break up the soil explosively, because of the smaller mass and smaller crater depth.) If the crater diameter-to-depth ratio is constant, then the cratered mass m is proportional to the cube of the depth h, i.e. m = bh^3 where b is a constant, so E = mgh = bgh^4. Rearranging, h ~ E^(1/4) for high yields. There are other factors involved, of course, because the energy used for cratering is essentially the downward-directed case-shock energy of the bomb debris (the air blast doesn't have enough density, and thus enough momentum, to dig out the crater; it just causes some compression). This is a limited fraction of the explosion energy, so the energy used in explosively heating, compressing and breaking up the crater material limits the energy available for ejecting it against gravity. This energy balance will usually be accompanied by some change in the ratio of crater diameter to depth, so craters will not always exactly obey the fourth-root scaling law in the megaton range. Nevertheless, there is a massive exaggeration of high-yield crater sizes in the Effects of Nuclear Weapons 1977 and related documents.
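The contrast between the two regimes can be sketched numerically. This is a minimal illustration only: the 10 m crater dimension at 1 kt is an arbitrary normalisation, not a figure from EM-1 or any test data.

```python
# Illustrative comparison of cube-root ("strength regime") and fourth-root
# ("gravity regime") scaling of a crater's linear dimension with yield.
# The normalisation of 10 m at 1 kt is arbitrary, chosen only to show
# how far cube-root extrapolation over-predicts at high yields.
def scaled_dimension(yield_kt, exponent, dimension_at_1kt=10.0):
    """Crater linear dimension (m) under a pure power-law scaling."""
    return dimension_at_1kt * yield_kt ** exponent

for W in (1, 10, 100, 1000, 10000):  # yield in kilotons
    d3 = scaled_dimension(W, 1/3)    # cube-root (low-yield) scaling
    d4 = scaled_dimension(W, 1/4)    # fourth-root (gravity) scaling
    print(f"{W:>6} kt: W^(1/3) gives {d3:7.1f} m, W^(1/4) gives {d4:7.1f} m")
```

At 10 Mt the cube-root extrapolation gives a linear dimension more than twice that of the fourth-root law, which is the sense in which cube-root scaling exaggerates megaton-range craters.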

The earliest mainstream American statement that crater dimensions theoretically scale as W^(1/4) in the regime where gravity is important is by Dr Milo D. Nordyke of the Lawrence Radiation Laboratory in Cratering Experience with Chemical and Nuclear Explosives (published in Proceedings of the Third Plowshare Symposium, Engineering with Nuclear Explosives, April 21, 22, 23, 1964, U. S. Atomic Energy Commission report TID-7695, TID-4500 (UC-35), pages 51-53). Nordyke states there on page 52: "... the analysis that leads to W^(1/3) ignores the action of several factors such as gravity and the strength or internal frictional forces of the medium. ... one can show that their effect would be to lower the exponent and lead toward W^(1/4) scaling (reference: L. I. Sedov, Similarity and Dimensional Methods in Mechanics, Gostekhizdat Press, Moscow, 1954 and Academic Press, New York, 1959, page 251)."

However, Nordyke did not identify W^(1/4) scaling as applying to the gravity regime of large megaton range explosions, which excavate large masses of ejecta over large vertical distances against gravity, with the W^(1/3) law holding for relatively small craters in the sub-kiloton range. Instead, he argued for an interim value of the exponent between 1/3 and 1/4, of about W^(1/3.4) ~ W^0.3, from empirical data for Nevada desert alluvium. This fudge factor leads to inaccurate extrapolations from the Nevada test data.

Further reading:

‘Data on the coral craters are incorporated into empirical formulas used to predict the size and shape of nuclear craters. These formulas, we now believe, greatly overestimate surface burst effectiveness in typical continental geologies ... coral is saturated, highly porous, and permeable ... When the coral is dry, it transmits shocks poorly. The crushing and collapse of its pores attenuate the shock rapidly with distance ... Pores filled with water transmit the shock better than air-filled pores, so the shock travels with less attenuation and can damage large volumes of coral far from the source.’

– L.G. Margolin, et al., Computer Simulation of Nuclear Weapons Effects, Lawrence Livermore National Laboratory, UCRL-98438 Preprint, 25 March 1988, p. 5.

‘It is shown that the primary cause of cratering for such an explosion is not “airslap,” as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as [X-ray] radiation in the earliest phases of the explosion.’

– H. L. Brode and R. L. Bjork, Cratering from a Megaton Surface Burst, RAND Corp., RM-2600, 1960.

D. E. Burton, et al., Blast induced subsidence in the craters of nuclear tests over coral, Lawrence Livermore National Lab., UCRL-91639, 1985:
“The craters from high-yield nuclear tests at the Pacific Proving Grounds are very broad and shallow in comparison with the bowl-shaped craters formed in continental rock at the Nevada Test Site and elsewhere. Attempts to account for the differences quantitatively have been generally unsatisfactory. We have for the first time successfully modeled the Koa Event, a representative coral-atoll test. On the basis of plausible assumptions about the geology and about the constitutive relations for coral, we have shown that the size and shape of the Koa crater can be accounted for by subsidence and liquefaction phenomena. If future studies confirm these assumptions, it will mean that some scaling formulas based on data from the Pacific will have to be revised to avoid overestimating weapons effects in continental geology.”

Another source of information on the revision of crater dimensions is pages 136-139 of the 1993 book by Bruce G. Blair, The Logic of Accidental Nuclear War, published by the Brookings Institution, online here:

“Recently the U.S. Department of Defense reviewed the pertinent historical evidence gathered during nuclear tests and developed new models of the vulnerability of underground structures to nuclear explosions. These calculations differed substantially from those derived from earlier models. For example, the dimensions of a crater produced by a nuclear explosion were estimated to be considerably smaller than previously thought. To give a specific comparison, the radius of a crater produced by a one-megaton nuclear explosion on the surface of wet soil would be 651 feet according to the old formula, whereas the new formula estimated the radius to be 394 feet. ... Comparable differentials typically hold across the spectrum of weapon yields and soil varieties. ...

“... Under the new formula the pertinent calculations for this location’s geological composition (dry soft rock, according to U.S. analysts) indicate a crater radius of only 180 feet for a one-megaton weapon, or 262 feet for a nine-megaton weapon.”

Notice that an increase in crater radius from 180 to 262 feet for a yield increase from 1 to 9 megatons implies a crater scaling power law of W^0.171, i.e. 180 × (9 Mt / 1 Mt)^0.171 = 262 feet.
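This implied exponent is easy to verify; the following is simply an arithmetic check of the figures quoted from Blair:

```python
import math

# Crater radii from Blair's figures (new DNA formula, dry soft rock):
# 180 ft at 1 Mt and 262 ft at 9 Mt. Solving r = r1 * W^n for n:
r1, r9 = 180.0, 262.0
exponent = math.log(r9 / r1) / math.log(9.0 / 1.0)
print(f"implied exponent: W^{exponent:.3f}")  # ~W^0.17
print(f"check: 180 * 9^{exponent:.3f} = {r1 * 9.0 ** exponent:.0f} ft")
```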

Holsapple’s online crater scaling program

Keith A. Holsapple has a very nice and useful online crater scaling program which allows you to predict crater sizes for comet and asteroid impacts, chemical explosives, and also any yield and burst conditions for two types of nuclear weapon design:

(1) “high weight/energy ratio” inefficient low yield devices typical of early kiloton Nevada tests with a mass to yield ratio of 10 kg/kt, and

(2) “low weight/energy ratio” efficient high yield modern thermonuclear weapons with a mass to yield ratio of 0.42 kg/kt.

He also includes TNT, which of course has a massive weight to yield ratio of 1,000,000 kg/kt by definition. The dense explosion debris embeds itself deeply into the ground, delivering energy deeply into the ground far more efficiently than X-rays which just ablate a surface layer and cause a rapidly attenuated ground shock by the recoil from ablation. Efficient (high energy/weight ratio) nuclear weapons release most of their energy initially as X-rays not the kinetic energy of the dense debris shock wave, so they do not couple much energy deeply into the ground to cause a crater. As a result, high yield modern nuclear weapons use a much smaller proportion of their energy for cratering than inefficient low yield weapons or TNT chemical explosive.

Holsapple gives a sketchy account of the theory in his paper Theory and equations for “Craters from Impacts and Explosions” (online here), which for nuclear craters cites:

K. A. Holsapple and S. Peyton, The Scaling of Nuclear Weapons Effects for Near Surface Bursts, Defense Nuclear Agency report DNA 6543F (1987), and

R. M. Schmidt, K. R. Housen and K. A. Holsapple, Gravity Effects in Cratering, Defense Nuclear Agency report DNA-TR-86-182 (1988).

The approach Holsapple used is the scaling of experimental data through the use of dimensional analysis where the data is allowed to determine the scaling law at high yields, rather than fitting the data to determine coupling constants in a purely physical, mechanistic scaling model based on energy utilization. His equations 6 and 7 show that at low energy yields, the crater size is a stronger function of yield than at higher yields. The low yield limit is called the “strength regime” and crater sizes scale approximately as the cube-root of yield in this regime; at higher yields the scaling is in the “gravity regime” which is a weaker function of yield, closer to the fourth-root. Holsapple in equation 7 gives a “general form with those limits and that interpolates between these two regimes”.

One problem is that he gives no derivation of the interpolative formula 7; another is that the limits for the strength regime and gravity regime are not fixed theoretically as cube-root and fourth-root scaling in his formula, but are defined by experimental data. This is similar to modelling thermal radiation transmission through the atmosphere by simply modifying the inverse-square law to fit the data, e.g. taking the thermal radiation to fall as, say, 1/R^2.5, instead of adding an exponential term to allow for absorption and scattering by the atmosphere in addition to the inverse-square law.
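The distinction can be illustrated with a toy calculation. Both the absorption coefficient k and the fitted exponent 2.5 here are arbitrary illustrative values, not measured atmospheric data:

```python
import math

# Two ways to model thermal radiation attenuation with distance R:
# (a) physically, as the inverse-square law times an exponential
#     absorption term, exp(-k*R)/R^2; versus
# (b) empirically, by bending the inverse-square exponent, 1/R^n.
def physical(R, k=0.1):
    """Relative thermal fluence: inverse square with exponential absorption."""
    return math.exp(-k * R) / R**2

def empirical(R, n=2.5):
    """Relative thermal fluence: purely empirical modified power law."""
    return 1.0 / R**n

for R in (1.0, 2.0, 5.0, 10.0):  # arbitrary distance units
    print(f"R={R:4.1f}: physical={physical(R):.4f}, empirical={empirical(R):.4f}")
```

The two models can be made to agree over a limited range of R, but they diverge when extrapolated, which is the objection made above to fitting the scaling exponent itself rather than the physical constants.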

The problem is that we know from physical energy utilization principles that the laws of nature are due to fixed mechanisms and should be fixed theoretically. Variations in the experimental data should be used not to vary the scaling laws provided by the physical mechanism, but instead to determine the physical constants. At low yields, a unit amount of cratering energy excavates a unit mass of soil or rock. If the soil or rock has fixed density, the crater volume at low yields is then directly proportional to the energy yield. By geometry, the radius of a hemisphere is proportional to the cube-root of its volume. If the crater radius-to-depth ratio is a constant, this cube-root law applies to other shapes, too.

Hence, at low yields the cratering theory needs to be tied to the cube-root law. At high yields, the influence of gravity is that most of the available cratering energy must be used up in shifting soil or rock out of the massive crater. The energy needed to lift crater mass M through a vertical distance D to the lip, against the force of gravity F = Mg, is simply E = FD = MgD. This is just a physical fact of nature. Since, for a constant crater radius-to-depth ratio, the mass M is proportional to D^3, we get E ~ D^3 × D ~ D^4; hence at high yields the energy used to overcome gravity predominates. The full equation for the utilization of energy is:

E = AD^3 + BD^4,

where the first term on the right hand side (containing D^3) is the energy needed to hydrodynamically excavate the mass of the crater (this energy is directly proportional to the mass of the crater, i.e. to the cube of the crater dimensions), and the second term (containing D^4) is the energy needed to overcome gravity and dump the excavated soil on the surrounding ground to form the crater lip and ejecta zone. So this is the simplest way to model craters: just use the experimental data to determine the values of the constants A and B. Instead of this energy utilization and physical mechanism based approach, Holsapple allows the experimental data to vary the values of the powers, which in fact should not be allowed to vary unless the radius-to-depth ratio of the crater varies.
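A minimal sketch of this suggested approach, solving E = AD^3 + BD^4 for the crater dimension D by bisection. The values of A and B here are arbitrary illustrative constants; in a real application they would be fitted to crater data:

```python
# Energy-utilization crater model from the text: E = A*D^3 + B*D^4,
# where the D^3 term is the hydrodynamic break-up energy and the D^4
# term is the work done against gravity. A and B are illustrative only.
def crater_depth(E, A=1.0, B=0.01):
    """Solve E = A*D^3 + B*D^4 for D by bisection (f is monotonic in D)."""
    lo = 0.0
    hi = (E / B) ** 0.25 + (E / A) ** (1 / 3)  # upper bound bracketing the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if A * mid**3 + B * mid**4 < E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# At low E the D^3 term dominates (roughly cube-root growth); at high E
# the D^4 gravity term dominates (roughly fourth-root growth).
for E in (1e0, 1e3, 1e9, 1e12):
    print(f"E = {E:.0e}: D = {crater_depth(E):.4g}")
```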

The crater scaling formula that Holsapple gives allows the excavated volume of the crater to vary linearly with bomb yield at low yields (“strength regime”), correctly giving W^(1/3) scaling for linear dimensions, but at high yields it gives a scaling law of W^(μ/2), where μ should theoretically be equal to 1/2 from the physical laws of nature as they are known for constant radius-to-depth ratio craters, to give the W^(1/4) scaling law at high yields (gravity regime). By letting μ vary according to the explosive and impact cratering data in his database, however, Holsapple gets differing values for μ, from 0.41 for dry soil to 0.55 for wet soils, soft rock, hard soil and hard rock. For μ = 0.55, the scaling law for the high yield (gravity regime) asymptote will be W^(0.55/2) = W^0.275 instead of the physically defensible W^(1/4) which is needed for energy conservation! Moreover, the data in Holsapple’s database are mainly data for the strength regime; he doesn’t have strong evidence of a departure from the natural W^(1/4) scaling law. So we disagree with the scaling procedure unless μ is taken as 1/2 and the experimental data is just used to constrain the values of the other variables in the scaling procedure. The dimensional analysis formula used to interpolate in the all-important transition zone from strength to gravity scaling (a zone which spans the most important yield range for nuclear weapons) is also suspect, and we would prefer a simple physical prediction system based on the utilization of energy between the hydrodynamic break-up of the ground and the work against gravity in ejecting debris to form the lip and ejecta zone (as outlined in an earlier post, linked here).

However, online calculations using Holsapple's computer program do provide some interesting updates to our information. Holsapple's computer calculation for explosion cratering works by using an equation based on dimensional analysis and data to predict crater excavation volumes for four different kinds of soil: dry soil, wet soil, hard soil/soft rock, and hard rock. The crater excavation volume is constrained to be directly proportional to bomb yield for very low, sub-kiloton yields (the "strength regime", corresponding to cube-root scaling for linear dimensions such as depth and radius), but less than directly proportional to yield for very high yields such as in the megaton range (the "gravity regime", where we argue that the energy to excavate against gravity is E = mgh ~ [volume] × [volume]^(1/3) ~ [volume]^(4/3), so that [volume] ~ E^(3/4), which makes linear dimensions scale as E^(1/4)).

For example, for a TNT (not nuclear explosive) surface burst (with the charge half-buried in the ground, so that the centre of the explosion is at ground level), Holsapple's computer program shows that at TNT yields equal to or less than 100 kg, the excavated volume in dry soil is 39,570W_kt cubic metres, where W_kt is the yield in kilotons. But if you increase the yield to 1 kt of TNT explosive, you don't get a crater of 39,570 cubic metres, but only 16,600, because gravity effects are already starting to kick in. This figure is of course bigger than for a nuclear explosion.

Holsapple's model shows that the crater excavation volume for a low weight-to-energy ratio nuclear weapon (bomb mass of 0.4186W_kt kg) is actually 10 times smaller than for a high weight-to-energy ratio nuclear weapon (10W_kt kg), and 25 times smaller than the crater volume produced by actual TNT (1,000,000W_kt kg).

For example, the crater volume for a 1 kt low mass-to-energy ratio nuclear warhead surface burst on dry soil is 664 m3, compared to 6,640 m3 for a high mass-to-energy ratio nuclear warhead, and 16,600 m3 for actual TNT.
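A quick arithmetic check of the ratios implied by these quoted volumes:

```python
# Crater excavation volumes for a 1 kt surface burst on dry soil, as
# quoted in the text from Holsapple's program:
v_low_mass = 664     # m^3, modern low mass-to-yield nuclear warhead
v_high_mass = 6640   # m^3, heavy early (high mass-to-yield) device
v_tnt = 16600        # m^3, actual TNT charge

print(v_high_mass / v_low_mass)  # → 10.0 (heavy bomb vs lightweight bomb)
print(v_tnt / v_low_mass)        # → 25.0 (TNT vs lightweight bomb)
```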

The heavier the bomb mass, the more efficient is the cratering effect, because more energy is carried downwards by dense, high-momentum bomb debris that embeds itself into the ground and delivers energy efficiently, unlike the X-ray surface ablation and the blast wave reflection from the ground, which produce a ground shock but essentially no cratering. For the low mass-to-energy ratio nuclear warhead (which produces the smallest cratering action) on dry soil, Holsapple's program gives the following crater excavation volumes as a function of total yield:

1 kt: 664 m3
10 kt: 4,680 m3
100 kt: 32,200 m3
1 Mt: 220,000 m3
10 Mt: 1,490,000 m3
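From this table one can compute the effective volume-scaling exponent between successive decades of yield; this is simply a check on the figures quoted above:

```python
import math

# Dry-soil excavation volumes for the low mass-to-energy ratio warhead,
# as listed above (yield in kt -> volume in m^3). A volume directly
# proportional to yield would give an exponent of 1; the gravity regime
# pushes it lower.
volumes = {1: 664, 10: 4680, 100: 32200, 1000: 220000, 10000: 1490000}
yields = sorted(volumes)
exponents = [
    math.log(volumes[w2] / volumes[w1]) / math.log(w2 / w1)
    for w1, w2 in zip(yields, yields[1:])
]
for (w1, w2), n in zip(zip(yields, yields[1:]), exponents):
    print(f"{w1}-{w2} kt: volume ~ W^{n:.2f}, linear dimensions ~ W^{n/3:.2f}")
```

The result, with linear dimensions scaling as roughly W^0.28 throughout this range, is consistent with the W^0.275 gravity-regime asymptote of Holsapple's formula discussed above.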

Once the volume is calculated by the semi-empirical dimensional scaling equation, the program uses that volume to find the other crater parameters. Holsapple states that the density of dry soil is 1.7 grams/cm3, but calculates the mass of crater ejecta as 80% of that (1.36 grams/cm3) because not all of the crater volume is formed by ejecting material: there is also a soil compression effect below the bomb, and this accounts for the other 20% of the crater mass (which is compressed downward instead of being ejected out of the crater). This 80% figure is applied to all types of soil.

The density of dry soil is 1.70 grams/cm3, but as stated only 80% of the crater volume is ejected, so the mass of ejecta per unit volume is 0.8 × 1.70 = 1.36 grams/cm3. For wet soil, hard soil and soft rock, the density is 2.10 grams/cm3, and the ejected mass to volume ratio is 0.8 × 2.10 = 1.68 grams/cm3. For hard rock, the mass density is 3.20 grams/cm3, and the ejected mass to volume ratio is 0.8 × 3.20 = 2.56 grams/cm3.

In all cases of weapon type and soil type, Holsapple's model uses the following relationships to calculate crater dimensions from the crater excavation volume V:

Apparent crater radius, Ra = 1.10V^(1/3)
Rim radius, Rrim = 1.30Ra
Apparent depth, Da = 0.60V^(1/3)
Average lip height, Hlip = 0.17Da
Crater formation time, Tformation = 0.8V^(1/6)/g^(1/2), where the acceleration due to gravity g = 9.81 m/s^2 for the Earth. (Most of the crater volume is ejected within a couple of seconds for any nuclear explosion, since the time taken is a weak function of yield.)

Information on the distribution of crater ejecta velocities is also provided. For a low mass-to-energy ratio nuclear weapon surface burst on dry soil, 50% of the ejecta exceeds 13.5 m/s for 1 kt or 30.1 m/s for 1 Mt.
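Applying the quoted relationships to the 1 Mt dry-soil excavation volume listed earlier (V = 220,000 m3) gives a feel for the numbers; this is a straightforward evaluation of the formulas above:

```python
# Holsapple's crater dimension relations as quoted in the text, applied
# to the 1 Mt dry-soil excavation volume (low mass-to-energy warhead).
g = 9.81  # acceleration due to gravity, m/s^2

def crater_from_volume(V):
    """Crater dimensions from excavation volume V (m^3)."""
    cube_root = V ** (1 / 3)
    Ra = 1.10 * cube_root           # apparent crater radius
    Da = 0.60 * cube_root           # apparent depth
    return {
        "apparent radius (m)": Ra,
        "rim radius (m)": 1.30 * Ra,
        "apparent depth (m)": Da,
        "average lip height (m)": 0.17 * Da,
        "formation time (s)": 0.8 * V ** (1 / 6) / g ** 0.5,
    }

for name, value in crater_from_volume(220_000).items():
    print(f"{name}: {value:.1f}")
```

The formation time of about 2 seconds for 1 Mt is consistent with the statement above that most of the crater volume is ejected within a couple of seconds for any nuclear explosion.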

For extremely high yields, there is a transition to "complex" craters like lunar craters, having a wide shallow basin and a central peak. Complex craters have their shape because they are too large for all the excavated material to be dumped at the lip; the fallback of ejecta in the centre of the crater is then so substantial it causes a central peak in the middle of the crater.

On the Earth, craters in dry soil with a rim diameter exceeding 1.77 km would start to transition into "complex" craters. There is a difference here between the Earth and the Moon, because the Moon's lack of atmosphere means that an explosion caused by an impact event produces no significant afterwinds. On the Earth, a large explosion near surface level causes a toroidal fireball to form due to air drag on the rising sphere, and debris is then sucked into the toroidal circulation via a mushroom "stem" above ground zero. On the Moon, which has only one-sixth of the surface gravity of the Earth, the transition from simple to complex cratering occurs at a rim diameter of 8.5 km. These complex craters are more relevant to impact cratering like the K-T event 65.5 million years ago than to the relatively trivial energy releases of stockpiled nuclear explosives.

Peaceful cratering: the U.S. Atomic Energy Commission’s Project PLOWSHARE

After the fallout from the 14.8 megaton CASTLE-BRAVO Bikini Atoll hydrogen bomb test produced media hysteria over the effects of thermonuclear weapons, President Eisenhower responded by ordering the development and testing at Bikini Atoll of the 95% clean, 4.5 megaton REDWING-NAVAJO hydrogen bomb. Aside from averting collateral damage from fallout in a tactical nuclear war to defend Western Europe from invasion by the massive conventional Soviet bloc armies, this kind of cleaner weapon was also intended for peaceful civil engineering use as a cratering explosive. The first peaceful underground PLOWSHARE test in 1957 was such a success that it convinced America to move above-ground nuclear testing underground to avoid fallout radiation hazards.

John Lindsay-Poland’s book Emperors in the Jungle: The Hidden History of the U.S. in Panama (2003) gives, in Chapter 3, “The Nuclear Canal”, a critical but detailed account of another PLOWSHARE cratering project in the story of this peaceful use of the clean hydrogen bomb.

This plan was to set off 275 relatively clean hydrogen bombs to create a new sea-level Panama canal (without any locks) in the Darien region, near the border with Colombia, at an estimated cost of roughly one billion dollars, cheap compared to the cost of excavation with conventional explosives. The existing Panama canal has locks and isn’t at sea level, so ships take a long time to pass through it, making the crossing from the Atlantic to the Pacific Ocean slow and expensive.


Above: in order to proof-test nuclear cratering for a new Panama canal, the 30% fission, 104 kt total yield PLOWSHARE-SEDAN nuclear test was detonated at 635 feet depth (to optimize cratering efficiency and minimize fallout by trapping the radioactive case-shock debris in the crater ejecta) in the dry soil of the Nevada Test Site on July 6, 1962. Although SEDAN was a success, the soil around some of the proposed Panama canal routes was not dry soil but saturated clay, which creates a wider, shallower crater than dry soil. Hence, to get the depth required, higher yields than SEDAN would have been needed for a new Panama canal. This would have increased the distant blast wave refraction effects (downwind of the high-altitude winds), the ground shock (earthquake-type) effect, and the fallout (although it is easier to reduce the fission yield at very large total yields than at very small ones; for instance, the 50 Mt Soviet test was only 2-3% fission). Both the distant blast and fallout effects could have been averted by postponing detonation until the winds were blowing out to sea, but ground shock from the large number of simultaneous high yield underground nuclear detonations required might have damaged the nearby existing Panama Canal, depending on the exact route taken, the distance, and the distribution of bomb yields used. The project was finally cancelled in 1971 due to lying propaganda about alleged low-level radiation effects in the popular media.


Patteson, A. W., Physical Characteristics of Craters from Near-Surface Nuclear Detonations, report AD0360630, 1960.

Proceedings of the Third Plowshare Symposium: Engineering with Nuclear Explosives, Held in Davis, California, on April 21-23, 1964, University of California report ADA396463, 1964.