Thursday, March 30, 2006

Fires from nuclear explosions


Above: film of the Effects of Nuclear Weapons.

Hiroshima: it was not 'vaporised by 6,000 °C within an instant'. All the wood-frame buildings burned down in a firestorm that developed 30 minutes later, as a result of the blast knocking over cooking braziers amid paper screens and bamboo furniture. (Hiroshima was attacked at breakfast time, Nagasaki at lunch time.) The modern buildings tended to survive better, as shown, simply because brick and concrete do not burn; they also gave better protection against radiation and blast. People live in both cities today. Fewer than 1% of the victims died of cancer caused by radiation. The leukemia rate peaked in 1952 and has been declining ever since. There were no genetic effects above the normal rate in the offspring of even highly irradiated survivors, and the cancer risks were carefully studied:

'The Life Span Study (LSS) population consists of about 120,000 persons who were selected on the basis of data from the 1950 Japanese National Census. This population includes ... atomic-bomb survivors living in Hiroshima or Nagasaki and nonexposed controls. ... all persons in the Master Sample who were located less than 2,500 meters from the hypocenter ATB were included in the LSS sample, with about 28,000 persons exposed at less than 2,000 meters serving as the core. Equal numbers of persons who had been located 2,500-9,999 meters from hypocenter ... were selected to match the core group by age and sex. ... As of 1995, more than 50% of LSS cohort members are still alive. As of the end of 1990, almost 38,000 deaths have occurred in this group, including about 8,000 cancer deaths among the 87,000 survivors. Approximately 430 of these cancer deaths are estimated to be attributable to radiation.'

Nuclear tests were later conducted in 1953 and 1955 to see whether whitewashing wooden houses would reflect the heat flash and prevent ignition: it worked! Various types of full-scale houses were exposed on March 17, 1953 to the 16-kiloton Annie shot and on May 5, 1955 to the 29-kiloton Apple-2 shot in Nevada. The fronts of the houses were charred by the intense radiant heat flash, but none of them ignited, even where the blast was severe enough to physically demolish the house. Fences and huts exposed at the 1953 test had to be covered in rubbish before they would ignite: see this film.

Even ignoring the problem of starting a fire in a modern building after the EMP has taken out mains electricity, firestorms are impossible in most modern cities, judging by the criteria that had to be met when the R.A.F. and U.S.A.F. tried very hard to start firestorms with thousands of tons of incendiaries dropped over limited areas in World War II. Some experts were aware of this fact as early as 1979:

‘Some believe that firestorms in U.S. or Soviet cities are unlikely because the density of flammable materials (‘fuel loading’) is too low – the ignition of a firestorm is thought to require a fuel loading of at least 8 lb/ft² (Hamburg had 32), compared to a fuel loading of 2 lb/ft² in a typical U.S. suburb and 5 lb/ft² in a neighborhood of two-story brick row-houses.’ – U.S. Congress, Office of Technology Assessment, The Effects of Nuclear War, May 1979, page 22.
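The OTA figures above can be checked with a trivial fuel-loading comparison. This is only a sketch using the numbers quoted in the report; the names are mine:

```python
# Sketch: compare the fuel loadings quoted by the 1979 OTA report against
# the cited minimum firestorm ignition threshold of 8 lb/ft^2.
FIRESTORM_THRESHOLD = 8.0  # lb/ft^2 (OTA, The Effects of Nuclear War, 1979)

fuel_loadings = {  # lb/ft^2, figures quoted in the OTA passage above
    "Hamburg 1943 (firestorm occurred)": 32.0,
    "Typical U.S. suburb": 2.0,
    "Two-story brick row-houses": 5.0,
}

for area, loading in fuel_loadings.items():
    verdict = "possible" if loading >= FIRESTORM_THRESHOLD else "unlikely"
    print(f"{area}: {loading} lb/ft^2 -> firestorm {verdict}")
```

Only Hamburg exceeds the threshold; both modern examples fall well short of it.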

See also the report by Drs. Kenneth A. Lucas, Jane M. Orient, Arthur Robinson, Howard MacCabee, Paul Morris, Gerald Looney, and Max Klinghoffer, ‘Efficacy of Bomb Shelters: With Lessons From the Hamburg Firestorm’, Southern Medical Journal, vol. 83 (1990), No. 7, pp. 812-20:

‘Others who have recently tried to develop criteria for the development of a firestorm state that the requisite fuel loading appears to be about four times the value of 8 lb/sq ft cited earlier. ... A standard Soviet civil defense textbook states: "Fires do not occur in zones of complete destruction [overpressure greater than 7 psi]; flames due to thermal radiation are prevented, because rubble is scattered and covers the burning structures. As a result the rubble only smolders."’

The then-‘secret’ May 1947 U.S. Strategic Bombing Survey report on Nagasaki states (v. 1, p. 10): ‘… the raid alarm was not given ... until 7 minutes after the atomic bomb had exploded ... less than 400 persons were in the tunnel shelters which had capacities totalling approximately 70,000.’ This situation, with most people in the open watching the lone B-29 bombers, led to the severe thermal-radiation burns and flying-debris injuries in Hiroshima and Nagasaki. The May 1947 U.S. Strategic Bombing Survey report on Hiroshima states (pp. 4-6):

‘Six persons who had been in reinforced-concrete buildings within 3,200 feet [975 m] of air zero stated that black cotton black-out curtains were ignited by flash heat... A large proportion of over 1,000 persons questioned was, however, in agreement that a great majority of the original fires were started by debris falling on kitchen charcoal fires... There had been practically no rain in the city for about 3 weeks. The velocity of the wind ... was not more than 5 miles [8 km] per hour….

‘The fire wind, which blew always toward the burning area, reached a maximum velocity of 30 to 40 miles [48-64 km] per hour 2 to 3 hours after the explosion ... Hundreds of fires were reported to have started in the centre of the city within 10 minutes after the explosion... almost no effort was made to fight this conflagration within the outer perimeter which finally encompassed 4.4 square miles [11 square km]. Most of the fire had burned itself out or had been extinguished on the fringe by early evening ... There were no automatic sprinkler systems in building...’

Dr Ashley Oughterson and Dr Shields Warren noted a fire risk in Medical Effects of the Atomic Bomb in Japan (McGraw-Hill, New York, 1956, p. 17):

‘Conditions in Hiroshima were ideal for a conflagration. Thousands of wooden dwellings and shops were crowded together along narrow streets and were filled with combustible material.’

Dr Harold L. Brode and others have investigated the physics of firestorms in commendable depth; see, for example, this 1986 report, The Physics of Large Urban Fires: http://fermat.nap.edu/books/0309036925/html/73.html

That report lists the history of firestorms and gives some very interesting empirical and semi-empirical formulae. It contains computer simulations of firestorms which show the way Hiroshima and Hamburg must have burned, and these simulations might well apply to forests in the fall (with few green leaves left to shield the ground, and lots of dry leaves and branches on the ground to act as kindling, like the litter used to start the fence and house fires in the 1953 Nevada nuclear tests). However, modern cities would tend to be left as smouldering rubble. Dresden and Hamburg had medieval multistorey wooden buildings in the areas that burned, and people don't build cities like they used to!

The best equation in that 1986 report is Brode's formula for the thermal radiating power versus time curve of a low-altitude nuclear detonation: P(t) ~ P(max) · 2t²/(1 + t⁴), where t is the time measured in units of the time taken for the final peak thermal pulse to occur.
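Brode's normalized power curve is easy to evaluate; a minimal sketch (the function name is mine):

```python
def thermal_power_fraction(t):
    """Brode's normalized thermal power P/P_max at scaled time t,
    where t is in units of the time of the final thermal maximum."""
    return 2.0 * t**2 / (1.0 + t**4)

# The curve peaks at exactly t = 1 (the final thermal maximum):
print(thermal_power_fraction(1.0))  # 1.0
# Half-way to the peak, the power is still under half of maximum:
print(round(thermal_power_fraction(0.5), 3))  # 0.471
```

Note the curve falls off again after t = 1, since the t⁴ term in the denominator dominates.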

This useful equation adds to the vast collection of empirical formulae in his detailed, 50-odd-page mathematical analysis, 'Review of Nuclear Weapons Effects', in the Annual Review of Nuclear Science, vol. 18, 1968, and in many blast wave reports from the early 1980s. However, he has to quote the thermal time scaling law from Glasstone and Dolan, which is nonsense.

Glasstone and Dolan state that the time of peak thermal power from a low air burst is 0.0417W^0.44 seconds, where W is in kilotons. This is a fiddle, based on computer calculations which use an imperfect knowledge of the properties of hot air. Nuclear test data from all the American low altitude tests, 1945-62, shows that the empirical law is quite different: 0.036W^0.48 seconds. In a later post I'll collect the most important empirical formulae together with the test data from which they are derived, and describe the controversy which resulted when Dolan took over editorship of Capabilities of Nuclear Weapons (and The Effects of Nuclear Weapons) and moved most nuclear effects predictions from being based on test data to being based on computer calculations of physics from first principles.
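The two scaling laws diverge noticeably at high yields; a quick comparison using the coefficients quoted above (function names are mine):

```python
# Time of final thermal power maximum (seconds) for a low air burst of
# W kilotons: Glasstone & Dolan's 1977 computed law versus the empirical
# fit to 1945-62 U.S. test data quoted above.
def t_max_glasstone(W_kt):
    return 0.0417 * W_kt ** 0.44

def t_max_empirical(W_kt):
    return 0.036 * W_kt ** 0.48

for W in (1, 20, 1000, 10_000):  # kt; 10,000 kt = 10 Mt
    print(W, round(t_max_glasstone(W), 3), round(t_max_empirical(W), 3))
```

The empirical law starts below the Glasstone-Dolan law at 1 kt but overtakes it in the megaton range, because of its larger exponent.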

The reason for the false time of second thermal maximum must be to compensate for a change in the shape of the thermal curve at higher yields. Glasstone and Dolan show a thermal curve for low yield weapons in which 20% of the thermal radiation is emitted by the time of final peak thermal power. Dolan's DNA-EM-1, however, gives a computer-simulated curve showing that about 30% is emitted by that time for a high yield weapon (also an air burst in approximately sea-level air). Brode, in the Annual Review of Nuclear Science 1968, gives a formula which shows that the thermal yield increases from 40% of the initial fireball energy to 44% as the yield increases from, say, 1 kt to 10 Mt or so.

What physically happens is that the radius and time of final peak fireball radiation power scale as about W^(2/5) and W^0.48, respectively. However, the principal thermal minimum (before the final maximum brightness) is caused by nitrogen dioxide created in the shock front, and both the range and the duration of this shielding scale as W^(1/3). Hence, as the bomb yield increases, the shock-produced nitrogen dioxide shield covers less of the fireball and a smaller proportion of the thermal radiation curve.

This means that the percentage of the bomb energy which is radiated as thermal energy increases slightly, and the fraction radiated by the time of the final maximum also increases. Instead of presenting a lot of thermal emission curves for different yields, Glasstone and Dolan 1977 apparently used the standard 20 kt thermal power curve and changed the formula for the second maximum so that it was defined not as the true time of second maximum, but rather as the time by which 20% of the thermal energy was emitted, so that it was in reasonable agreement with the curves presented.
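The scaling argument can be sketched numerically: since the nitrogen dioxide shield scales as W^(1/3) while the fireball radius at final maximum scales as W^(2/5), their ratio falls as W^(-1/15). This is an illustrative sketch with an arbitrary normalisation constant, not a formula from the source:

```python
# Illustrative scaling only (the constant k is arbitrary, not from the source):
# NO2 shield range ~ W^(1/3); fireball radius at final maximum ~ W^(2/5).
def shield_to_fireball_ratio(W_kt, k=1.0):
    """Relative size of the NO2 shield compared to the fireball."""
    return k * W_kt ** (1.0 / 3.0 - 2.0 / 5.0)  # = k * W^(-1/15)

print(shield_to_fireball_ratio(1))                 # 1.0 (normalised at 1 kt)
print(round(shield_to_fireball_ratio(10_000), 3))  # 0.541: smaller at 10 Mt
```

The weak W^(-1/15) exponent matches the slow drift of the emitted-by-peak fraction from about 20% to 30% over four decades of yield.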

Many other fiddles exist in Glasstone & Dolan 1977. In January 1963, Dr Edward C. Freiling and Samuel C. Rainey of the U.S. Naval Radiological Defense Laboratory issued a 17-page draft report, Fractionation II: On Defining the Surface Density of Contamination, stating: "The section ‘Radiation Dose Over Contaminated Surfaces,’ in The Effects of Nuclear Weapons, is out of date with regard to the account it takes of fractionation effects. This report presents the technical basis for revising that section. It recommends that the exposure rate from fractionated debris be presented as the product of a contamination surface density with the sum of 3 terms. The 1st term is the exposure rate contribution of refractorily (unfractionated) behaving fission products. The 2nd term is for volatilely (fractionated) behaving fission product nuclides. The 3rd term expressed the contribution of the induced activities."

This criticism of The Effects of Nuclear Weapons was deleted in the final March 1963 version of the report (USNRDL-TR-631). However, criticism of Glasstone’s neglect of fractionation varying with distance in the fallout pattern continued with R. Robert Rapp of the RAND Corporation in 1966 authoring report RM-5164-PR, An Error in the Prediction of Fallout Radiation, and John W. Cane in 1967 authoring DASIAC Special Report 64, Fallout Phenomenology: Nuclear Weapons Effects Research Project at a Crossroads. (This was all ignored in the 1977 edition.)

Glasstone and Dolan state, for example, that water surface bursts deposit only 30% of their radioactivity as local fallout. This figure comes from inaccurate analysis during Operation Redwing in WT-1318, page 57, which says that the water surface bursts Tewa and Flathead deposited 28% and 29% of their fallout activity locally (within areas of 43,500 and 11,000 square miles, respectively). However, this is misleading, because the same report says that the water surface burst Navajo deposited 50% of its fallout activity locally over 10,490 square miles, while the land surface burst Zuni deposited 48% of its fallout activity locally over 13,400 square miles. On 9 July 1957, B. L. Tucker of the RAND Corporation showed, in his secret report Fraction of Redwing Activity in Local Fallout, that these percentages were based on a false conversion factor from dose rate to activity, and that with the correct conversion factor, the corrected Redwing data show all these tests deposited about 68-85% of their activity locally. Also, more accurate data from Operation Hardtack in 1958 show that water surface bursts deposit local fallout similar to that of ground surface bursts, although water burst fallout is less fractionated.

During the 1960s, the consequences of fission product fractionation for the ‘percentage’ of radioactivity deposited as local fallout, and for the radiation decay rate as a function of particle size and hence of downwind distance, were worked out in the Defense Atomic Support Agency of the U.S. Department of Defense. It was established that there is no single fixed percentage of radioactivity in early fallout, since the different fission products fractionate differently, so the percentage depends on the nuclides being considered. If attempts are made to add up the total radioactivity, it is found that the beta and gamma radioactivities of the different fission products differ, as does the average gamma ray energy of fallout fractionated to different degrees at the same time after detonation, so it is not scientific to give a single ‘average’ percentage of the total radioactivity in local fallout: each fission product must be considered separately. For example, after the Hardtack-Oak surface burst, 49% of the Cs-137 and 89% of the Mo-99 were deposited within 24 hours of burst, whereas Glasstone states that 60% of the activity is deposited within 24 hours. This makes the data on fallout effects in The Effects of Nuclear Weapons both misleading as scientific explanation and generally inaccurate for making any sort of numerical calculation of fallout.

In a previous post, it was mentioned that the official U.S. manual in 1957 exaggerated the ignition radius for shredded dry newspaper from a 10-Mt air burst on a clear day by a factor of two, and that fire areas were therefore exaggerated by a factor of four (circular area being π times the square of the radius).
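The factor-of-four claim is just circle geometry, illustrated here with the 56 km and 28 km newspaper-ignition radii cited later in this post:

```python
# Ignition area scales as the square of radius (A = pi * r^2), so the
# doubled 1957 radius implies a quadrupled fire area.
import math

def fire_area_km2(radius_km):
    return math.pi * radius_km ** 2

a_1957 = fire_area_km2(56.0)  # 1957 edition newspaper-ignition radius, 10 Mt
a_1977 = fire_area_km2(28.0)  # halved radius in the 1977 edition
print(round(a_1957 / a_1977, 1))  # 4.0
```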

It also exaggerated the range of blistered skin (second degree burns):

Dr Carl F. Miller, who worked for the U.S. Naval Radiological Defense Laboratory at nuclear tests, hit out in the February 1966 Scientist and Citizen: ‘Reliance on the Effects of Nuclear Weapons has its shortcomings... I was twenty miles from a detonation ... near ten megatons. The thermal flash did not produce the second-degree burn on the back of my neck, nor indeed any discomfort at all.’

The online NATO HANDBOOK ON THE MEDICAL ASPECTS OF NBC DEFENSIVE OPERATIONS, FM 8-9, 1996, calculates in Table 4-VI that second-degree skin burns even from 10 Mt air bursts (where the range is greater than from surface bursts) would only extend to a range of 14.5 km (9 miles) in a typical city with atmospheric visibility of 10 km.

There is plenty of evidence that the high mortality among thermal burns victims was due to combined thermal and nuclear radiation exposure, since the nuclear radiation doses to people in the open at thermal burns ranges were sufficient to depress the bone marrow which produces white blood cells. The maximum depression in the white blood cell count occurs a few weeks after exposure, by which time the thermal burns had generally become infected in the insanitary conditions the survivors had to endure. This combination of depressed infection-fighting capability and infected burn wounds often proved lethal at Hiroshima and Nagasaki, but it is essential to note that apparently severe burns were more superficial than most propaganda makes out; nobody was vaporized:

‘Persons exposed to nuclear explosions of low or intermediate yield may sustain very severe burns on their faces and hands or other exposed areas of the body as a result of the short pulse of directly absorbed thermal radiation. These burns may cause severe superficial damage similar to a third-degree burn, but the deeper layers of the skin may be uninjured. Such burns would heal rapidly [unless the person also receives a massive nuclear radiation dose], like mild second-degree burns.’ – Dr Samuel Glasstone and Philip J. Dolan, editors, The Effects of Nuclear Weapons, U.S. Department of Defense, 1977, p. 561.

The British Home Office Scientific Advisory Branch, whose 1950s and 1960s reports on civil defence aspects of nuclear weapons tests are available at the U.K. National Archives (references HO229, HO338, etc., also see DEFE16 files), immediately distrusted the American manual on several points. Physicists such as George R. Stanbury from the Scientific Advisory Branch had attended Operation Hurricane and other U.K. nuclear weapons tests to measure the effects!

However, when they published the British data in civil defence publications, they were quickly 'discredited' by physics academics using the American manual. The problem was that they could not reveal where their data came from, because of secrecy. This plagued civil defence science not only in Britain but also in America throughout the Cold War. The public distrusted all but the most exaggerated 'facts', led by propaganda from various sources (including Soviet-funded lobbies such as the U.S.S.R.-controlled 'World Peace Council') claiming that the only way to be safe was to surrender through unilateral nuclear disarmament. (Similarly, Japan was supposedly safe from nuclear attack in August 1945 because it had no nuclear weapons.)

Professor Freeman Dyson helpfully explained the paradox in his 1984 book Weapons and Hope: '[Civil defence measures] according to the orthodox doctrine of deterrence, are destabilizing insofar as they imply a serious intention to make a country invulnerable to attack.'

This political attitude meant simply that without civil defence, both sides need fewer weapons to deter each other. By this logic, it is more sensible to have no civil defence, which allows both sides to agree on a minimal number of weapons for deterrence. This thinking set in during the 1960s, and spread from passive civil defence to active defences, with the ABM treaty by which both the U.S.S.R. and the U.S. agreed to limit the number of ABM (anti-ballistic missile) systems.

This was, I believe, signed by people like President Nixon, who were regarded as slightly cynical by some. The problem with pure deterrence is that it doesn't help you if you have no civil defence and a terrorist attacks you, or there is a less than all-out war. (Even if you disarm your country of nuclear weapons entirely, you are obviously no safer from nuclear attack than Japan was in August 1945, when both Hiroshima and Nagasaki were blasted. So you still need civil defence, unless you start pretending - as many do - that people were magically vaporised in the hot but rapidly cooling, nitrogen dioxide-coloured 'fireball' of hot air, and falsely claim from this lie that duck and cover would not have helped the burned, battered and lacerated survivors.)

In America, nuclear age civil defence had begun with the crazy-sounding but actually sensible 'duck and cover' advice of 1950, just after the first Russian nuclear test. Effects like shattered glass, bodily displacement, and thermal flash burns covered the largest areas in Hiroshima and Nagasaki, and were the easiest to protect against as nuclear test data shows. The higher the overpressure, the more horizontally the glass fragments go, so you can avoid lacerations by ducking, which also stops burns and bodily displacement by the wind drag or dynamic pressure. The last big civil defence expenditure was President Kennedy's fallout program of 1961. Kennedy equipped all public building basements throughout America with food, radiation meters and water. (Something like two million radiation meters had to be made.)

In Britain the 'Civil Defence Corps', which had been honoured for its work during the Blitz in World War II, was finally abolished in March 1968, after ridicule. (Civil Defence Handbook No. 10, dated 1963, was held up in Parliament for public ridicule, mainly because of one sentence advising people driving cars to 'park alongside the kerb' if an explosion occurred. This was regarded as patronising and stupid by the British Government of 1968 - which was, of course, a different one from that of 1963, when the manual was published in response to public demand stemming from the Cuban missile crisis.)

Recently Dr Lynn Eden has written a book with input from firestorm modeller Dr Harold Brode and various other physicists, Whole World on Fire: Organizations, Knowledge, and Nuclear Weapons Devastation (Ithaca, N.Y.: Cornell University Press, 2004), which makes the case that the U.S. Defense Intelligence Agency's secret technical publication, Physical Vulnerability Handbook - Nuclear Weapons (1954-92), never included any predictions of fire damage! (Some reviewers of that book have falsely claimed that fire risks were covered up, which is absurd seeing that the non-secret 1957 U.S. Department of Defense book The Effects of Nuclear Weapons falsely shows that dry shredded newspaper would be ignited 56 km from a 10 megaton air burst, a figure reduced to half that, 28 km, in the 1977 edition. Dr Eden misses this completely and tends to poke fun at civil defence in her presentation of the effects of the 1953 Encore nuclear test thermal ignition experiments in Nevada. We'll look at the facts in a later post.)

However, the scientific reference is not that physical vulnerability handbook, but is the U.S. Department of Defense's secret manual Capabilities of Nuclear Weapons, which does contain fire ignition predictions. The tests and the science will be examined in later posts.

Photo credit: Hiroshima photo in colour was taken by the United States Air Force.

Wednesday, March 29, 2006

Outward pressure times area is outward force...

Image above is taken from Dr Samuel Glasstone's The Effects of Nuclear Weapons, 1957. The outward force of the blast always has an equal and opposite reaction (Newton's 3rd law of motion), in this case underpressure (suction), pulling instead of pushing. See the stand of trees in the middle of this video clip of the 15 kiloton Grable nuclear test. Close to ground zero, before the suction phase develops, the reaction is simply the symmetry of the blast (the reaction to the northwards-moving part of the blast is the southwards-moving part, while there is still high pressure connecting them; this breaks down when a partial vacuum forms near ground zero, and from then on the reactive force is the inward, or suction, blast phase). Likewise, in a sound wave, an outward pressure must be followed by an inward (underpressure) force. The relationship between force and pressure is: force equals pressure times the area acted upon.
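The force-pressure relation in the last sentence is easy to put numbers on. The figures below are illustrative assumptions of mine, not from the post:

```python
# Force = pressure x area. Example numbers are illustrative, not from the post.
PSI_TO_PA = 6894.76  # pascals per psi (1 Pa = 1 N/m^2)

def blast_force_newtons(overpressure_psi, area_m2):
    """Net outward force on a flat surface facing the blast."""
    return overpressure_psi * PSI_TO_PA * area_m2

# A modest 5 psi overpressure acting on a 2 m^2 door:
print(round(blast_force_newtons(5.0, 2.0)))  # 68948 N, roughly 7 tonnes-force
```

Even a 'survivable' overpressure thus exerts tonnes of force on ordinary structural surfaces, which is why the blast demolishes buildings that the heat flash fails to ignite.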

This whole approach to understanding sound waves, explosion blast waves, and consequently the big bang gravity mechanism, is suppressed. The logic that you get an inward force in an explosion (which by Newton's 3rd law balances the outward force) is also inherent in the implosion principle of nuclear weapons, as Glasstone explained:




If you don't have an equal and opposite reaction in a pressure wave, it isn't a sound wave.

The force you get against your eardrum isn't just a push, but a push followed by equal pull.

This mechanism explains the gauge boson inward push in the big bang, predicting gravity.

The outward force in any explosion always has an equal and opposite reaction (Newton's 3rd empirical law). If you just push air, the energy disperses without propagating as a 340 m/s oscillatory sound wave. Air must be oscillated to create sound: it delivers an oscillatory force, outward and then inward. Merely using wave equations does not explain the physical process, even where the maths happens to give a good fit to data. Deep down, sound waves are particulate - molecules carrying an oscillatory force.

This makes various predictions and contains no speculation whatsoever; it is a fact-based mechanism, employing Feynman's mechanism as exhibited in the Feynman diagrams - virtual photon exchange causing forces in quantum field theory. He noted that path integrals have a deeper underlying simplicity:

"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities." - Richard P. Feynman, Character of Physical Law, Penguin, 1992, pp 57-8.

(In the same book he discusses the problems with the LeSage gravity mechanism as per 1964.)

'Clean' nuclear weapon tests: Navajo and Zuni





Time magazine, Monday, Jul. 08, 1957
THE PRESIDENCY: The Clean Bomb


"Three of the nation's leading atomic scientists were ushered into the White House one morning last week by Atomic Energy Commission Chairman Lewis Strauss for a 45-minute conference with the President. The scientists: Edward Teller, credited with the theoretical discovery that led to a successful H-bomb, Ernest O. Lawrence, Nobel Prizewinning director of the University of California's radiation laboratory at Livermore, Calif., and Mark M. Mills, physicist and head of the lab's theoretical division. They brought a report of grave but potentially hopeful meaning. In the lab at Livermore, they told the President, scientists have found how to make H-bombs that will be 96% freer from radioactive fallout than the first models."


Above: clean-nuclear-weapons physicist Dr Mark M. Mills testifying before the Congressional Joint Atomic Energy Committee hearings on The Nature of Radioactive Fallout and Its Effects on Man, 1957 (the testimony is linked here in PDF format). Dr Mills was tragically killed in a helicopter accident during torrential rain on April 6, 1958, during test preparations at Eniwetok Atoll, so the successful secret 1956 very "clean" test of the 4.5 Mt, 95% fusion, 5% fission Redwing-Navajo device was never publicly demonstrated in the scheduled 1958 repeat, the Pinon test (for comprehensive technical details of the Pinon test, see Dr Gerald Johnson's 1958 Handbook for United Nations Observers, Pinon Test, Eniwetok, report UCRL-5367, PDF file linked here). After Dr Mills died, the public proof-testing of the clean bomb scheduled as Hardtack-Pinon was cancelled for weak reasons:

“[Lithium-7 deuteride and lithium-6 deuteride fusion fuel] costs can be estimated from the market price for lithium – with a lithium-6 content of 7.5 % - and with the advertised prices for heavy water [containing deuterium]. The latter sells for $28 per pound. ... the separation cost for lithium-6 ... should not be excessive since the isotopes Li-6 and Li-7 differ in mass by as much as 15 % and are therefore relatively easy to separate. My estimate for Li-6 D is $400 per pound. ... Making conservative assumptions about the fission yield in the [dirty U-238 fission] jacket, one concludes that a ton of TNT equivalent can be produced in the jacket for a fraction of one cent. ... It would undoubtedly be more expensive to construct a bomb without [a U-238 fission jacket]. And it would certainly be a much more difficult technical undertaking, since the success of Stage II [fusion of lithium deuteride] strongly depends upon the presence of the jacket. The neutron linkage and the cyclic nature of the multi-stage bomb make for a marriage between fission and fusion.”

- Dr Ralph E. Lapp, “The Humanitarian H-Bomb”, Bulletin of the Atomic Scientists, September 1956, p. 264.


Lapp's cynical complaint about the high cost of lithium-6 relative to U-238 in thermonuclear weapons ignores the fact that on average 50% of the yield of ordinary "dirty" stockpiled thermonuclear weapons comes from fusion anyway! By replacing the U-238 with lead, you make no difference to the cost of the weapon: you simply reduce the total yield and increase the percentage due to fusion by a factor of 10 or more! In addition, Lapp ignores the fact that you simply don't need lithium-6 deuteride in a thermonuclear bomb: you can use cheap natural lithium instead! In 1954, the highly efficient 15 Mt Castle-Bravo test used lithium enriched to only 40% lithium-6, while the successful 11 Mt Castle-Romeo test used natural lithium deuteride (7.5% lithium-6 and 92.5% lithium-7), with no lithium enrichment at all. Sure, the heat released in the fission of the U-238 pusher by fusion neutrons acts as a catalyst to boost the fusion stage efficiency, and you lose that boost when you remove the U-238 jacket. But the successful tests of clean weapons prove that this is not an insuperable objection. Dr Samuel Glasstone, author of the secret nuclear weapon design physics report WASH-1037/8, 1962, 1963, 1972a and 1972b, wrote in his 1985 Funk & Wagnalls Encyclopedia article on Nuclear Weapons (incorporated into Microsoft's Encarta 1998 multimedia encyclopedia):

"(E) Clean H Bombs

"On the average, about 50 percent of the power of an H-bomb results from thermonuclear-fusion reactions and the other 50 percent from fission that occurs in the A-bomb trigger and in the uranium jacket. A clean H-bomb is defined as one in which a significantly smaller proportion than 50 percent of the energy arises from fission. Because fusion does not produce any radioactive products directly, the fallout from a clean weapon is less than that from a normal or average H-bomb of the same total power. If an H-bomb were made with no uranium jacket but with a fission trigger, it would be relatively clean. Perhaps as little as 5 percent of the total explosive force might result from fission; the weapon would thus be 95 percent clean. The enhanced-radiation fusion bomb, also called the neutron bomb, which has been tested by the United States and other nuclear powers [1 kt, 500 metres altitude air burst] is considered a tactical weapon because it can do serious damage on the battlefield, penetrating tanks and other armored vehicles and causing death or serious injury to exposed individuals, without producing the radioactive fallout that endangers people or structures miles away."



Above: this newspaper article, "Clean H-Bomb Test Junked as U.S. Fears Mammoth Propaganda Dud", in The Deseret News, July 30, 1958, highlights the 1958 controversy over fusion neutrons escaping into the atmosphere and turning some nitrogen atoms into radioactive carbon-14, just as cosmic radiation does. But neutron-induced activities are a trivial hazard compared to the fission products from a "dirty" (high fission yield) surface burst. (See the declassified report USNRDL-TR-215, linked here, which contains accurate measurements of the very small ratios of atoms/fission for neutron-induced Co-60 and other radionuclides in the fallout from the 95% clean 1956 Navajo nuclear test.) Another claim was that communist countries had declined to attend the clean proof-test. But America could still have gone ahead and published the facts about clean nuclear weapons tests.


Above: neutron induced activities in atoms per fission depend upon bomb construction, particularly fission yield fraction ("cleanliness"). This table of data is based on the same source as Harold A. Knapp's 1960 table of accumulated doses from neutron induced activities in fallout (shown below), and is taken from the 1965 U.S. Naval Radiological Defense Laboratory report USNRDL-TR-1009 by Drs. Glenn R. Crocker and T. Turner. (This report is available as a 10 MB PDF download here. For Crocker's report on the fission product decay chains, see this link.)



Above: neutron induced activity gamma doses are smaller than fission product gamma doses, so "clean" nuclear weapons - despite releasing neutrons and creating some neutron induced activity - do eliminate much of the fallout problem.

Above: fallout from the 95% 'clean' bomb test Navajo, Bikini Atoll, 1956 (WT-1317). The surface burst was on a barge in the lagoon; the yield was 4.5 Mt, only 5% of it fission. Each square in this and the next map has a side of 20 minutes of latitude/longitude (= 20 nautical miles or 37 km). The radiation levels are relatively low: about 20 times lower than those from a full-fission weapon of similar yield.

Above is the best fallout pattern (from U.S. weapon test report WT-1317 by Drs. Terry Triffet and Philip D. LaRiviere) for the Zuni shot of 3.53 megatons, 15% fission, at Bikini Atoll in 1956. It combines all available data, unlike report DASA-1251, which gives separate, unjoined patterns for the lagoon and the ocean. The ocean data were obtained in three ways, since fallout sinks in water. First, ships lowered probes into the water and measured the rate at which the fallout sank with time. Second, ships took samples of water from various depths for analysis. Third, the low level of radiation over the ocean was measured by both ships and aircraft, correcting for altitude and for shielding of the geiger counter.

This particular test is unusual, as it was a surface burst on land (coral island), and was extensively studied; rockets were even fired through different parts of the cloud at 7 and 15 minutes after burst, carrying miniature radiation meters and radio transmitters, to map out the radioactivity distribution (it worked, showing a toroidal distribution!). Ships were located in the fallout area at various locations to determine the fallout arrival time, the build up rate (which was slow, because the huge mushroom cloud took time to pass overhead and diffused lengthways), the decay rate after fallout arrival, the mass and visibility of the fallout deposit, and the chemical abundances of the various nuclides in fallout at different locations. Near the burst, large fallout particles arrive which fall out of the fireball before the gaseous nuclides in the decay chains have decayed into solids and condensed, so the biggest fallout particles, near ground zero, have relatively little I-131, Cs-137, and Sr-90. Gaseous precursors like xenon and krypton prevent the Cs and Sr decay chains from condensing early, while iodine is volatile itself. Smaller fallout particles, while posing a smaller overall radiation hazard, carry relatively more of these internal hazards (I-131 concentrates in the thyroid gland if ingested, say by drinking milk, while Cs-137 goes into muscle and Sr-90 goes into bone, assuming it is in a soluble form, which is of course not the case if the ground burst is on silicate-based soil, because the radioactivity is then trapped inside glass spheroids).

Here is a report of Dr. Hans A. Bethe, working group chairman, originally 'Top Secret - Restricted Data', to the President's Science Advisory Committee, dated 28 March 1958, defending 'clean nuclear weapons tests', courtesy of Uncle Sam:

http://www.hss.energy.gov/healthsafety/ihs/marshall/collection/data/ihp1b/7374_.pdf

Pages 8-9 defend clean nuclear weapons! As stated, Zuni was only 15% fission, so it was 85% clean. The dose rates given on these fallout patterns are extrapolated back to 1 hour, before the fallout had completely arrived anywhere, so they are far higher than ever actually occurred anywhere! The true dose rates are lower due to decay during wind-carried transit. The dose rates also refer to the equivalent levels on land, which are about 535 times higher than over the ocean at 2 days after burst, because fallout landing on the ocean sinks steadily, and the water shields most of the radiation. The average decay rate of the fallout was t^-1.2 for all weapons tests. It is amazing how much secrecy there was during the cold war over the civil defence data in WT-1317. The point is, fallout is not as bad as some people think, just like blast and cratering.
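The t^-1.2 decay law quoted above is easy to work with numerically; for instance, it implies that a 7-fold increase in time after burst cuts the dose rate roughly 10-fold. A minimal sketch (the 100 R/hr reference figure is purely illustrative):

```python
# Fallout dose-rate decay following the approximate t^-1.2 law
# quoted above for weapon-test fallout mixtures.

def dose_rate(r1, t_hours):
    """Dose rate at t hours after burst, given the 1-hour
    reference rate r1, assuming R(t) = r1 * t**-1.2."""
    return r1 * t_hours ** -1.2

r1 = 100.0  # illustrative 1-hour reference dose rate (R/hr)

# A 7-fold time increase cuts the rate by 7**1.2, i.e. roughly 10-fold:
print(dose_rate(r1, 7) / dose_rate(r1, 49))  # ~10.3
```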

Co-60 bomb research

Wikipedia insert: Extensive residual radioactivity experiments and civil defence fallout studies were made during these tests. The Antler-1 test contained normal cobalt-59, which upon neutron capture was transmuted into radioactive cobalt-60. This provided a way to measure the neutron flux inside the weapon, although it was also of interest from the point of view of radiological warfare.

The then Science Editor of the New York Times, William L. Laurence, wrote in his 1959 book Men and Atoms (Simon & Schuster, New York, p. 195):

‘Because the cobalt bomb could be exploded from an unmanned barge in the middle of the ocean it could be made of any weight desired ... Professor Szilard has estimated that 400 one-ton deuterium-cobalt bombs would release enough radioactivity to extinguish all life on earth.’

The total amount of gamma ray energy emitted by cobalt-60 is only 2.82 MeV per atom, and this meagre energy release is spread over a statistical mean lifetime of 1.44 times the 5.3 years half life of cobalt-60. (The number 1.44 is approximately 1/ln 2, i.e., the reciprocal of the natural logarithm of 2, which is the conversion factor between half-life and mean lifetime for radioactive decay.) For comparison, every neutron used to fission an atom of U-235, Pu-239, or U-238 releases 200 MeV of energy, including 30 MeV of residual radioactivity.
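The 1.44 factor and the Co-60 versus fission energy comparison can be checked in a few lines (a sketch using the figures in the text):

```python
import math

# Mean lifetime = half-life / ln 2 = 1.443 * half-life:
half_life_years = 5.27                       # Co-60 half-life
mean_life = half_life_years / math.log(2)
print(round(mean_life, 2))                   # ~7.6 years

co60_energy = 2.82    # MeV of gamma energy per Co-60 decay (text figure)
fission_energy = 200  # MeV per fission, ~30 MeV of it residual radioactivity
print(round(fission_energy / co60_energy))   # ~71x more energy per neutron used
```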

Hence, fission is by far the most efficient way to create radioactive contamination. The dose rate from Co-60 in the Antler-1 fallout was insignificant until most of the fission products had decayed, and only a few large pellets of Co-60 were found afterwards. The overall contribution of Co-60 to the fallout radiation was trivial compared to fission products and shorter-lived neutron induced activities in the bomb materials.

A study of the penetration of the fallout gamma radiation from the Antler tests was made at Maralinga by British Home Office and Atomic Weapons Research Establishment scientists A. M. Western and H. H. Collin. The results in their AWRE paper Operation Antler: the attenuation of residual radiation by structures, published in Fission Fragments No. 10, June 1967, showed that the long-term integrated fallout gamma radiation doses were reduced by a factor of 5 by a mass shielding of 312.4 kg per square metre, which is equivalent to a thickness of 15 cm of earth. A mass shielding of 781.1 kg per square metre stopped 96.6% of the gamma rays, equivalent to a protection factor of more than 29 from a shield of 38 cm of earth. America also performed studies which showed how fallout problems can be avoided.
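The Western and Collin percentages convert to protection factors simply as PF = 1/(fraction transmitted); a sketch using their two data points:

```python
def protection_factor(fraction_stopped):
    """Protection factor = incident dose / transmitted dose."""
    return 1.0 / (1.0 - fraction_stopped)

# 312.4 kg/m^2 (about 15 cm of earth) reduced the dose by a factor of 5,
# i.e. it stopped 80% of the gamma rays:
print(round(protection_factor(0.80), 3))    # 5.0

# 781.1 kg/m^2 (about 38 cm of earth) stopped 96.6%:
print(round(protection_factor(0.966), 1))   # ~29.4
```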

Additional information:

NEUTRON CAPTURE-INDUCED NUCLIDES IN FALLOUT

Dr Terry Triffet and Philip D. LaRiviere, Operation Redwing, Project 2.63, Characterization of Fallout, U.S. Naval Radiological Defense Laboratory, 1961, Secret – Restricted Data, weapon test report WT-1317, Table B.22: some 21 radioactive isotopes of 19 different radioactive decay chains from neutron induced activity are reported for the megaton range tests Navajo (lead pusher, 5% fission), Zuni (lead pusher, 15% fission) and Tewa (U-238 pusher, 87% fission). Summing the abundances (in radioactive capture atoms per fission) of all 19 separate decay chains gives:

15.6 atoms/fission for Navajo (5 % fission),

7.03 atoms/fission for Zuni (15 % fission), and

1.25 atoms/fission for Tewa (87 % fission).

(These data are computed from a full table which includes some nuclides not listed in WT-1317. I'll give that complete listing later. At present data tables do not seem to format properly on this blog site.)
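As a cross-check, the quoted totals can be reproduced by summing the per-shot abundances tabulated later in this post (a quick Python sketch; the numbers are the WT-1317 Table B.22 / USNRDL-466 values as given here, so any transcription error in them propagates):

```python
# Per-shot neutron-capture abundances, in capture atoms per fission,
# for the three Redwing shots: (Navajo, Zuni, Tewa).
abundances = {
    "Na-24":    (0.0314, 0.0109, 0.00284),
    "Cr-51":    (0.0120, 0.00173, 0.000297),
    "Mn-54":    (0.10, 0.011, 0.00053),
    "Mn-56":    (0.094, 0.010, 0.00053),
    "Fe-55":    (14.9, 6.05, 0.573),
    "Fe-59":    (0.0033, 0.00041, 0.000167),
    "Co-57":    (0.00224, 0.0031, 0.000182),
    "Co-58":    (0.00193, 0.0036, 0.000289),
    "Co-60":    (0.0087, 0.00264, 0.00081),
    "Cu-64":    (0.0278, 0.0090, 0.00228),
    "Zn-65":    (0.00435, 0.00720, 0.0000489),
    "Sb-122":   (0.0, 0.219, 0.0),
    "Sb-124":   (0.0, 0.073, 0.0),
    "Ta-180":   (0.038, 0.0411, 0.01),
    "Ta-182":   (0.038, 0.0194, 0.01),
    "Pb-203":   (0.0993, 0.050, 0.0000178),
    "U-237":    (0.09, 0.20, 0.20),
    "U/Np-239": (0.04, 0.31, 0.36),
    "U/Np-240": (0.09, 0.005, 0.09),
}

# Column sums match the quoted ~15.6, 7.03 and 1.25 atoms/fission:
totals = [round(sum(v[i] for v in abundances.values()), 2) for i in range(3)]
print(totals)  # [15.58, 7.03, 1.25]
```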

But even for a very 'clean' bomb like Navajo, fission products dominate the fallout radiation dose. The sodium isotope Na-24 (15 hours half life) is generally considered the most important environmental form of neutron induced activity, yet the abundance of Na-24 was only 0.0314 atom/fission in Navajo, 0.0109 atom/fission in Zuni, and 0.00284 atom/fission in Tewa. (These tests all involved large quantities of sea water being irradiated: Navajo and Tewa were water surface bursts, and Zuni was on a small island surrounded by ocean.)

Far more important were U/Np-239, U/Np-240 and U-237 (the latter created by a reaction in which one neutron capture in U-238 results in two neutrons being emitted). The capture atoms/fission for Navajo, Zuni and Tewa were respectively 0.04, 0.31 and 0.36 for U/Np-239; 0.09, 0.005 and 0.09 for U/Np-240; and 0.09, 0.20 and 0.20 for U-237. (See also USNRDL-466.) These can emit as much radiation as the fission products during the critical early period of 20 hours to 2 weeks after detonation. They emit very low energy gamma rays, so the average energy of fallout gamma rays for a bomb containing a lot of U-238 is low during the sheltering period, 0.2-0.6 MeV, and this allows far more efficient shielding than implied by most civil defence calculations (which are based on gamma radiation from fission products with a mean gamma energy of 0.7-1 MeV).

This was first pointed out, on the basis of British nuclear test fallout data (for Operation Totem and other tests), by George R. Stanbury in a Restricted U.K. Home Office Scientific Advisory Branch report in 1959, The contribution of U239 and Np239 to the radiation from fallout (although this paper originally contained a few calculation errors, the point that the average fallout gamma ray energy is lower than for fission products stands). You get much better shielding in a building than American calculations show, due to their incorrect use of a 0.7-1 MeV mean gamma ray energy. The mean gamma ray energy at 8 days after the Castle tests was only 0.34 MeV (WT-934 page 56, and WT-915 page 145; see also WT-917 pages 114-116, and of course WT-1317).

When tritium fuses with deuterium to produce helium-4 plus a neutron, the neutron’s mass is 20% of the total product mass, so the complete fusion of a 1 kg mixture of deuterium and tritium yields 0.2 kg of free neutrons, which – if all could be captured by cobalt-59 – would create 12 kg of Co-60. This was Professor Szilard’s basis for a ‘doomsday’ device.
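The mass bookkeeping behind Szilard's estimate can be sketched as follows (approximate atomic masses; the 0.2 kg and 12 kg figures are the text's):

```python
# D + T -> He-4 + n: the neutron carries ~1/5 of the product mass.
m_D, m_T, m_n = 2.014, 3.016, 1.009   # approximate atomic masses in u

neutron_fraction = m_n / (m_D + m_T)
print(round(neutron_fraction, 2))     # ~0.2: 1 kg of D+T fuel -> ~0.2 kg of neutrons

# Each neutron captured by Co-59 (59 u) yields one Co-60 atom (60 u),
# so 0.2 kg of neutrons corresponds to about 12 kg of Co-60:
co60_mass = 0.2 * 60 / m_n
print(round(co60_mass))               # ~12 kg
```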

However, Dr Gordon M. Dunning (b. 1910) of the U.S. Atomic Energy Commission, who was responsible for radiological safety during 1950s American tests, published calculations for such ‘cobalt-60 bombs’ (Health Physics, Vol. 4, 1960, p. 52). These show that a 100 megaton bomb with a thick cobalt-59 case, burst at a latitude of 45 degrees North, would produce an average Co-60 infinite-time gamma radiation exposure outdoors of 17 Roentgens in the band between 30 and 60 degrees North, around the earth. This ignores weathering of fallout, and assumes a uniform deposition.

The maximum rate at which this exposure would be received (outdoors) is 0.00025 Roentgens per hour, only 12 times greater than natural background radiation. Choosing a longer half-life reduces the intensity by increasing the time lapse between particle emissions; so the longer the half-life, the lower the intensity. If the fallout is decaying rapidly, you can shelter while it decays. If the half-life is long, you can decontaminate the area before receiving a significant dose. No problem!
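Dunning's 0.00025 R/hr maximum rate is just his 17 R infinite-time exposure spread over the mean life of Co-60, as a quick check shows (a sketch using the text's figures):

```python
import math

# Co-60 half-life in hours (5.3 years; 8766 hours per year):
half_life_hours = 5.3 * 8766
mean_life_hours = half_life_hours / math.log(2)   # mean life = half-life / ln 2

# Initial (maximum) exposure rate if 17 R is delivered over the mean life:
initial_rate = 17.0 / mean_life_hours
print(round(initial_rate, 5))   # ~0.00025 R/hr, as Dunning stated
```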

Creating Co-60 inside a weapon uses up precious neutrons, without releasing any prompt energy to help the nuclear fusion process, unlike U-238 fission, which releases both prompt energy and neutrons. Every neutron captured by Co-59 to produce radioactive Co-60 leads to the release of only 2.82 MeV of radiation energy (one beta decay and two gamma rays). However, every neutron induced fission of uranium-238 releases about 200 MeV of energy, including more residual radiation energy than is released by Co-60. Therefore, fission gives a greater hazard than Co-60 and other neutron capture activities.

All of the escaping neutrons in an underwater or underground burst are captured in the water or soil, but only about 50% are captured by the water or soil in a surface burst. The amounts of neutron induced activity from the environment generally have a small effect, the highest activity being due to Na-24. In bombs containing U-238, the major neutron capture nuclides are Np-239 and U-237, which give off low energy gamma rays for the first few days and weeks. Shielding this radiation is easy.

The use of tungsten (W) carbide ‘pushers’ for clean nuclear weapons led to the discovery of W-185 (74 days half-life) in fallout from the 330 kt Yellowwood water surface burst at Eniwetok, 26 May 1958. It emits very low energy (0.055 MeV) gamma rays. Yellowwood produced 0.32 atoms of W-185 per fission, based on the ratio of W-185 to Zr-95 (assuming 0.048 atoms of Zr-95 per fission) in the crater sludge at 10 days after burst. (Frank G. Lowman, et al., U.S. Atomic Energy Commission report UWFL-57, 1959, p. 21.) W-185 was discovered on plankton and plant leaves, but was not taken up by the sea or land food chains. In fallout from the 104 kt, 30% fission Sedan shot at Nevada on 6 July 1962, W-187 (24 hours half-life) gave 55% of the gamma dose rate at 24 hours after burst, compared with 2% from Na-24 due to neutron capture in soil.

The ocean food-chain concentrates the neutron-capture nuclides iron (Fe) and zinc (Zn) to the extent that Fe-55 and Zn-65 constituted the only significant radioactivity dangers in clams, fish and birds which ate the fish after nuclear tests at Bikini and Eniwetok Atolls, during the 1950s. However, these nuclides are not concentrated in land vegetation, where the fission products cesium (which is similar to potassium) and strontium (which is similar to calcium) are of major importance. This is caused by the difference between the chemical composition of sea water and land. (Where necessary chemical elements are abundant, uptake of the chemically similar radioactive nuclide is greatly reduced by dilution.)

Fish caught at Eniwetok Atoll, a month after the 1.69 Mt Nectar shot in 1954, had undetectably low levels of fission products, but high levels of Fe-55 (95% of activity), Zn-65 (3.1%), and cobalt isotopes. In terns (sea birds) at Bikini Atoll, Zn-65 contributed almost all of the radioactivity after both the 1954 and 1956 tests. Fe-55 gave off 73.5% of the radioactivity of a clam kidney collected in 1956 at Eniwetok, 74 days after the 1.85 Mt Apache shot; cobalt-57, -58, and –60 contributed 9.6, 9.2, and 1.8%, while all of the fission products only contributed 3.5%.

Fish collected at Bikini Atoll two months after the 1956 Redwing series which included Zuni, Navajo and Tewa, had undetectably low levels of fission products, but Zn-65 contributed 35-58% of the activity, Fe-55 contributed 15-56%, and cobalt gave the remainder. (Frank G. Lowman, et al., U.S. Atomic Energy Commission report UWFL-51, 1957.)

In 1958, W. J. Heiman of the U.S. Naval Radiological Defense Laboratory released data on the sodium-24 activity induced in sea water by an underwater nuclear explosion, for a case in which 50% of the gamma radiation at 4 days after burst is due to Np-239. He found that Na-24 contributed a maximum of 7.11% of the gamma radiation, at about 24 hours after burst (Journal of Colloid Science, Vol. 13, 1958, pp. 329-36).

Hence even in a water burst, Np-239 radiation is far more important than Na-24.

Perhaps the most important modification in the April 1962 edition of The Effects of Nuclear Weapons was the disclosure that the radioactive fallout from nuclear weapons contains substantial amounts of radioactive nuclides from neutron capture in U-238. This had been pointed out by scientist George Stanbury (who worked with data from nuclear tests, and had attended British nuclear tests to study the effects) of the British Home Office Scientific Advisory Branch in report A12/SA/RM 75, The Contribution of U239 and Np239 to the Radiation from Fallout, November 1959, Confidential (declassified only in June 1988). Both Mr Stanbury and The Effects of Nuclear Weapons 1962 found 40% of the gamma radiation dose rate from fallout is the typical peak contribution due to Neptunium-239 and other capture nuclides (e.g., U-237, which is formed by an important reaction whereby 1 neutron capture in U-238 is followed by 2 neutrons being released), which all emit very low energy gamma radiation, and are important between a few hours and a few weeks after burst, i.e., in the critical period for fallout sheltering.

Because of the low energy of the gamma rays from such neutron-capture elements, which are present in large quantities in both Trinity-type fission bombs (with U-238 tampers) and thermonuclear bombs like Mike and Bravo, the fallout is much easier to protect against than pure fission products (average gamma energy 0.7 MeV). However, The Effects of Nuclear Weapons, while admitting that up to 40% of the gamma radiation is from such nuclides, did not point out the effect on the gamma energy and radiation shielding issue, unlike Stanbury’s Confidential civil defence report. This discovery greatly stimulated the “Protect and Survive” civil defence advice given out in Britain for many years, although it was kept secret because the exact abundances of these bomb nuclides in fallout were dependent on the precise bomb designs, which were Top Secret for decades.


NEUTRON CAPTURE-INDUCED NUCLIDES IN FALLOUT


Nuclides formed by neutron capture in the thermonuclear bomb, the 189 metric ton steel barge (Navajo and Tewa tests), and the surrounding sea water: measured Bikini Atoll test data for thermonuclear weapon designs of various fission yields, and two types of fusion charge 'pusher'*.

Shots: Redwing-Navajo (4.50 Mt, 5% fission, lead pusher, bomb mass 6.80 metric tons); Redwing-Zuni (3.53 Mt, 15% fission, lead pusher, bomb mass 5.51 metric tons); Redwing-Tewa (5.01 Mt, 87% fission, U-238 pusher, bomb mass 7.14 metric tons). The exposure rate column gives (R/hr)/(kt/mi^2) at 1 hour after detonation per capture atom/fission, at 3 ft height, ideal theory (Triffet, 1961). The three right-hand columns give the abundance of each neutron induced nuclide in the total fallout, in atoms per fission.

Nuclide  | Half-life    | Exposure rate     | Navajo  | Zuni    | Tewa
Na-24    | 15.0 hours   | 1284.7            | 0.0314  | 0.0109  | 0.00284
Cr-51    | 27.7 days    | 0.280             | 0.0120  | 0.00173 | 0.000297
Mn-54    | 312 days     | 0.614             | 0.10    | 0.011   | 0.00053
Mn-56    | 2.58 hours   | 2668              | 0.094   | 0.010   | 0.00053
Fe-55    | 2.73 years   | 0.00416           | 14.9    | 6.05    | 0.573
Fe-59    | 44.5 days    | 6.19              | 0.0033  | 0.00041 | 0.000167
Co-57    | 271 days     | 0.113             | 0.00224 | 0.0031  | 0.000182
Co-58    | 70.9 days    | 3.11              | 0.00193 | 0.0036  | 0.000289
Co-60    | 5.27 years   | 0.299             | 0.0087  | 0.00264 | 0.00081
Cu-64    | 12.7 hours   | 89.5              | 0.0278  | 0.0090  | 0.00228
Zn-65    | 244 days     | 0.531             | 0.00435 | 0.00720 | 0.0000489
Sb-122** | 2.71 days    | 38.4              | 0       | 0.219   | 0
Sb-124** | 60.2 days    | 6.92              | 0       | 0.073   | 0
Ta-180   | 8.15 hours   | 35.9              | 0.038   | 0.0411  | 0.01
Ta-182   | 115 days     | 2.67              | 0.038   | 0.0194  | 0.01
Pb-203   | 2.17 days    | 26.0              | 0.0993  | 0.050   | 0.0000178
U-237    | 6.75 days    | 6.50              | 0.09    | 0.20    | 0.20
U-239    | 23.5 minutes | 173               | 0.04    | 0.31    | 0.36
Np-239   | 2.35 days    | 14.9***           | (same U-239 -> Np-239 -> Pu-239 chain abundances as U-239)
U-240    | 14.1 hours   | 0 (no gamma rays) | 0.09    | 0.005   | 0.09
Np-240   | 7.22 minutes | 150               | (same U-240 -> Np-240 -> Pu-240 chain abundances as U-240)

Total amount of neutron induced activity (capture atoms per fission): 15.6 for Navajo, 7.03 for Zuni, and 1.25 for Tewa.
* Compiled from the data in: Dr Terry Triffet and Philip D. LaRiviere, Operation Redwing, Project 2.63, Characterization of Fallout, U.S. Naval Radiological Defense Laboratory, 1961, originally Secret – Restricted Data (now unclassified), weapon test report WT-1317, Table B.22 and Dr Carl F. Miller, U.S. Naval Radiological Defense Laboratory report USNRDL-466, 1961, Table 11 on page 41, originally Secret – Restricted Data (now unclassified). The ‘pusher’ absorbs initial x-ray energy and implodes, compressing the fusion charge. Data for Fe-55 is based on the ratios of Fe-55 to Fe-59 reported by Frank G. Lowman, et al., U.S. Atomic Energy Commission report UWFL-51 (1957), and H.G. Hicks, Lawrence Livermore National Laboratory report UCRL-53505 (1984), assuming that the neutron capture ratios in iron were similar for shots Apache and Tewa. Data for Zn-65 is based on the ratios of Zn-65 to Mn-54 reported by F.D. Jennings, Operation Redwing, Project 2.62a, Fallout Studies by Oceanographic Methods, report WT-1316, Secret – Restricted Data, 1961, pages 115 and 120.

** The Zuni device contained antimony (Sb), which boils at 1,750 °C and was fractionated in the fallout. This is the only fractionated neutron capture nuclide. The data shown are for unfractionated cloud samples: for the close-in fallout at Bikini Lagoon the abundances for Sb-122 and Sb-124 are 8.7 times smaller.


***Note that this is not the maximum exposure rate from Np-239 (at 1 hour after detonation it is still increasing because it is the decay product of U-239).



“The first objection to battlefield ER weapons is that they potentially lower the nuclear threshold because of their tactical utility. In the kind of potential strategic use suggested where these warheads would be held back as an ultimate countervalue weapon only to be employed when exchange had degenerated to the general level, this argument loses its force: the threshold would long since have been crossed before use of ER weapons is even contemplated. In the strategic context, it is rather possible to argue that such weapons raise the threshold by reinforcing the awful human consequences of nuclear exchange: the hostages recognize they are still (or once again) prisoners and, thus, certain victims.”


- Dr Donald M. Snow (Associate Professor of Political Science and Director of International Studies, University of Alabama), “Strategic Implications of Enhanced Radiation Weapons”, Air University Review, July-August 1979 issue (online version linked here).


“You published an article ‘Armour defuses the neutron bomb’ by John Harris and Andre Gsponer (13 March, p 44). To support their contention that the neutron bomb is of no military value against tanks, the authors make a number of statements about the effects of nuclear weapons. Most of these statements are false ... Do the authors not realise that at 280 metres the thermal fluence is about 20 calories per square centimetre – a level which would leave a good proportion of infantrymen, dressed for NBC conditions, fit to fight on? ... Perhaps they are unaware of the fact that a tank exposed to a nuclear burst with 30 times the blast output of their weapon, and at a range about 30 per cent greater than their 280 metres, was only moderately damaged, and was usable straight afterwards. ... we find that Harris and Gsponer’s conclusion that the ‘special effectiveness of the neutron bomb against tanks is illusory’ does not even stand up to this rather cursory scrutiny. They appear to be ignorant of the nature and effects of the blast and heat outputs of nuclear weapons, and unaware of the constraints under which the tank designer must operate.”


- C. S. Grace, Royal Military College of Science, Shrivenham, Wiltshire, New Scientist, 12 June 1986, p. 62.

Physical understanding of the blast wave and cratering

(This post is being revised, corrected and updated as of 8 August 2009. Greek symbols for density, Pi, etc., will just appear as p in some browsers which do not support the character sets. The page displays correctly in Internet Explorer 7.)

ABOVE: peak overpressures in psi (pounds/sq. inch; 1 psi = 6.9 kilopascals, kPa) with distances scaled by the cube-root of yield to apply to a standard reference total yield of 1 kiloton. All tests shown are surface bursts, 1 kt to 14.8 Mt, which have an effective blast yield of about 1.68 times that of a free air burst (an air burst in sea level air well away from any solid reflective surface). Data are from WT-934 (1959), page 29, and have been scaled to 1 atmosphere ambient air pressure and 20 °C ambient air temperature.
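The cube-root scaling used for this graph, and the psi/kPa conversion, can be sketched in a couple of lines (the 1000 m and 8 kt figures are illustrative):

```python
# Cube-root blast scaling: a distance R1 at which a given peak overpressure
# occurs for 1 kt corresponds to R = R1 * W**(1/3) for a yield of W kt.
def scale_distance(r_1kt_m, yield_kt):
    return r_1kt_m * yield_kt ** (1.0 / 3.0)

print(round(scale_distance(1000, 8)))  # 2000 m: 8x the yield doubles the range

# Unit conversion used throughout: 1 psi = 6.9 kPa.
print(round(10 * 6.9, 1))              # 10 psi = 69.0 kPa
```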

A shock wave is caused by the rapid release of either compressed fluid or energy, explosively heating and compressing fluid. A 'blast wave' is a shock wave in air: a compressed shock front accompanied by a blast of outward wind pressure. The shock front has an abrupt pressure rise because the air at the shock front is travelling into cold air, which reduces its speed, while the hot air inside the shock front moves out faster, catching up with it to converge in the wall of compressed air. Within this overpressure region at the shock front, wind travels outward from the explosion; but within the inner area of low pressure, wind blows in the opposite direction, towards ground zero, allowing the return of air into the partial vacuum in the middle. At any fixed location, the blast first blows outward during the overpressure phase, and then reverses and blows inwards, at a lower speed but for a longer duration, during the 'suction' phase. Overpressure, p, acts in all directions within the shock front and is defined as the excess pressure above the normal atmospheric pressure (which is on average 101 kPa or 14.7 pounds per square inch at sea level). Dynamic pressure, q, acts only in the direction of the outward or reversed blast winds accompanying the shock wave, and is the wind pressure, exactly equivalent to a gust of wind of the same velocity and duration.
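The text defines q but does not relate it to p. For ideal sea-level air with a specific heat ratio of 1.4, the standard peak relation (given in The Effects of Nuclear Weapons) is q = 2.5p^2/(7P0 + p); a sketch:

```python
def peak_dynamic_pressure(p, p0=101.0):
    """Peak dynamic pressure q (kPa) from peak overpressure p (kPa),
    for ideal gamma = 1.4 sea-level air: q = 2.5 * p**2 / (7*p0 + p)."""
    return 2.5 * p * p / (7.0 * p0 + p)

# At low overpressures q is much smaller than p; e.g. at 5 psi (34.5 kPa)
# overpressure the peak dynamic pressure is under 1 psi:
print(round(peak_dynamic_pressure(34.5), 1))  # ~4.0 kPa (~0.6 psi)
```

The two pressures only become equal at very high overpressure, near 4.7 atmospheres, which is why wind drag dominates damage to drag-sensitive targets only close to the burst.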

The blast wave must engulf, heat and compress all of the air that it encounters as a result of its supersonic spherically divergent expansion. Consequently, its energy is continuously being distributed over a larger mass of air that rapidly reduces the energy available per kilogram of air, so the overpressure drops rapidly. Some energy is lost in surface bursts in forming a crater and melting a thin layer of surface sand by conduction and radiation. Initially the shock front is also losing energy by the emission of thermal radiation to large distances. When the blast wave hits an object, the compressed shock front exerts an all-round crushing-type overpressure, while the outward blast wind contributes a hammer blow that adds to the overpressure, followed by a wind drag to roof materials, vehicles, and standing people. The total force exerted by the blast is equal to pressure multiplied by exposed surface area, but if the object is sufficiently rigid to actually stop and reflect the shock wave, then it collides with itself while being reflected, reducing its duration but increasing its peak pressure. P. H. Hugoniot in 1887 derived the basic equations governing the properties of a gaseous shock wave in a piston, relationships between density, pressure and velocity. Lord Kelvin later introduced the concept ‘impulse’ (the time-integrated pressure of a fluid disturbance), when he was working on vortex atom theory.

The peak pressure in the air blast wave has 4 contributions: the ambient pressure, the isothermal sphere, the shock front and the sonic wave. These are represented by terms including the factors P0, 1/R^3, 1/R^2, and 1/R, respectively, where P0 is the ambient (normal) air pressure at the altitude of interest and R is the distance from the explosion. The equation of state for air gives the base equation for the total pressure, P = (γ - 1)E/V, where γ = 1.4 is the ratio of specific heat capacities of air (at high temperatures it can drop to 1.2 owing to the vibration energy of molecules, while molecular dissociation into atoms increases it towards 1.67, the monatomic gas value; these two offsetting effects keep it near 1.4), E is the total blast energy and V is the blast wave volume. Dimensional analysis now gives a generalised summation which automatically includes all four of the separate blast wave terms already discussed:

P = Σ {[(γ - 1)E/V]^(n/3) P0^(1 - n/3)}, where the summation is over n = 0, 1, 2, and 3.

For a free air burst, V = (4/3)πR^3, so for γ = 1.4, R in km, and blast yield X kilotons:

P = P0 + (0.737X^(1/3)P0^(2/3)/R) + (0.543X^(2/3)P0^(1/3)/R^2) + (0.400X/R^3) kilopascals (kPa).

For high altitude bursts, the air pressure at altitude H km is P0 = 101e^(-H/6.9) kPa. For sea level air, P0 = 101 kPa, so the peak overpressure, p = P - P0, is:

p = (16.0X^(1/3)/R) + (2.53X^(2/3)/R^2) + (0.400X/R^3) kPa
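This sea-level free air burst fit, p = 16.0X^(1/3)/R + 2.53X^(2/3)/R^2 + 0.400X/R^3 (kPa, R in km, X in kt of blast yield), is straightforward to evaluate numerically (a sketch; the distances and yield are illustrative):

```python
def overpressure_kpa(r_km, x_kt):
    """Peak overpressure (kPa) at r_km from a sea-level free air burst
    of blast yield x_kt, using the three-term fit quoted in the text."""
    return (16.0 * x_kt ** (1 / 3) / r_km
            + 2.53 * x_kt ** (2 / 3) / r_km ** 2
            + 0.400 * x_kt / r_km ** 3)

# 1 kt blast yield at 200 m, 500 m and 1 km; the 1/R^3 and 1/R^2 terms
# dominate close in, the 1/R sonic term at long range:
for r in (0.2, 0.5, 1.0):
    print(r, round(overpressure_kpa(r, 1.0), 1))
```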

For direct comparison, the peak overpressure graph for American sea level free air bursts (DNA-EM-1, 1981, and The Effects of Nuclear Weapons, 1977, Fig. 3.72) implies:

p = (3.55W^(1/3)/R) + (2.00W^(2/3)/R^2) + (0.387W/R^3) kPa,

where W is the total weapon yield in kilotons. In deriving this formula, we produced fits to both the surface burst and free air burst curves, and averaged them to find an effective yield ratio of 1.68 for surface bursts to free air bursts (due to reflection by the ground in a surface burst, which results in a hemispherical blast with nearly double the energy density of a free air burst, with some energy loss due to surface interaction effects like melting the surface layer of sand into fused fallout particles, ground shock and cratering). This comparison of theory and measurement shows close agreement for the 1/R2 and 1/R3 high overpressure terms, where the exact blast yield fractions are 0.703 and 0.968, respectively. The fraction of the explosion energy in blast is highest at high overpressures where the shock front has not lost much energy by radiation or degradation; but for the weak or sonic blast wave (1/R term) the fraction is only 0.0109 owing to these losses. The American book, The Effects of Nuclear Weapons (1957-77 editions) gives a specific figure of 50% for the sea level blast yield, but this time-independent generalisation is a totally misleading fiction. It is obtained by the editors of the American book by subtracting the final thermal and nuclear radiation yields from 100%, neglecting blast energy that is dissipated with time for crater excavation, fallout particle melting, and the massive cloud formation. Initially, almost all of the internal energy of the fireball goes into the blast wave, but after the thermal radiation pulse, the blast or sonic wave eventually contains only 1.09% of the energy.

Note that the revised EM-1 manual and its summary by John A. Northrop, Handbook of Nuclear Weapons Effects (DSWA, 1996, p. 9), suggest a formula for free air bursts which differs from that given above. Northrop's compilation and Charles J. Bridgman's Introduction to the Physics of Nuclear Weapons Effects (DTRA, 2001, p. 285) give for a free air burst:

p = (0.304W/R^3) + (1.13W^(2/3)/R^2) + (1.00W^(1/3)/[RA]) kPa,

where W is the total weapon yield in kilotons, R is in km, and A = {ln[(R/445.52) + 3exp(-(R/445.52)^(1/2)/3)]}^(1/2). Bridgman gives a graph of peak overpressures (Fig. 7-6 on p. 285) showing 500 kPa peak overpressure at 100 m, 30 kPa at 400 m, and 8 kPa at 1 km from a 1 kt (total yield) nuclear free air burst. [1 psi = 6.9 kPa.] He also reproduces the curves for dynamic pressure and overpressure positive phase durations, Mach stem height, etc., from chapter 2 of Dolan's DNA-EM-1.

An alternative, simpler equation summarizing data on free air burst peak overpressures was presented in 1957 by the U.K. Home Office Scientific Advisory Branch physicist Frank H. Pavry in his paper 'Blast from Nuclear Weapons', in U.K. National Archives document HO 228/21 Report of a course given to university physics lecturers at the Civil Defence Staff College 8-11 July 1957:

P = (2640/R)*(1 + 500/R)^2.4 psi,

where R is distance in feet (notice that 2640 feet is half a statute mile, 5280 feet). The numerical constants in this formula were only approximate in 1957, but it may be possible to update them with modern data.
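Taking the 2.4 as an exponent (the superscript appears to have been flattened in transcription, so treat that reading as an assumption), Pavry's fit evaluates directly:

```python
def pavry_overpressure_psi(r_feet):
    """Pavry (1957) free-air-burst fit: P = (2640/R) * (1 + 500/R)**2.4,
    with R in feet. Reading the 2.4 as an exponent is an assumption
    about the garbled source text."""
    return (2640.0 / r_feet) * (1.0 + 500.0 / r_feet) ** 2.4

# Overpressure falls monotonically with distance; at half a statute mile
# (2640 ft) the leading 2640/R factor is exactly 1:
print(round(pavry_overpressure_psi(2640), 2))  # ~1.5 psi
```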

The low energy in the blast wave at long ranges is consistent with the physically accurate cloud rise model in Hillyer G. Norment's report DELFIC: Department of Defense Fallout Prediction System, Volume I - Fundamentals, Atmospheric Sciences Associates, DNA-5159F-1 (1979), which finds that the observed mushroom cloud expansion requires 45% of the bomb yield to end up as hot air in and around the fireball (dumped from the back of the blast wave), driving the convective mushroom cloud phenomena. This 45% is mainly blast wave energy left behind in the air outside the visibly glowing fireball region. If the blast wave energy remained in the shock front indefinitely, there would be no mushroom cloud, because the vast amount of energy needed to cause it would not be available! That doesn't happen: the blast wave irreversibly heats the air it engulfs, and continually dumps warmed air from its rear, which moves back towards the partial vacuum at ground zero, causing the reversed wind direction (suction) phase while the shock front is still moving outwards. The energy of the heated air forming these afterwinds is the main contributor to the mushroom cloud rise energy.

In a land surface burst, the blast volume at any radius is only half that of a free air burst, because the blast is confined to a hemisphere rather than a sphere. Over an ideal, rigid, perfectly reflecting surface, the blast would therefore be identical to that from a free air burst of twice the energy, 2W. The effective yield of a surface burst on land or water, as determined from 70 accurate measurements at 7 American tests conducted in Nevada and at Eniwetok and Bikini Atolls during 1951-4 (Sugar, Mike, Bravo, Romeo, Union, Yankee and Nectar), at scaled distances equivalent to 55-300 m from a 1 kiloton burst, is actually only 1.68W. Hence, about 16% of the energy of a surface burst goes into the ground/water shock wave, cratering, and melting fallout or vaporising seawater: if a sea level air burst has an effective blast yield of 50%, a surface burst has a blast yield of only 50*(1.68/2) = 42%.

Close to the detonation, the blast arrival time is theoretically proportional to r^{5/2}, where r is radius; but at great distances the blast arrival time equals (r/c) – (R/c), where R is the head start (the thickness of the blast wave) and c is the sound velocity (this incorporates the boost the blast wave gets early on, while it is supersonic). Using data from 1959 weapon test report WT-934 on the Sugar, Mike, and Operation Castle surface burst nuclear tests, with cube-root scaling of both arrival times and distances [i.e., scaling as (yield)^{1/3}] down to 1 kt, we combine both rules to obtain a generalised blast arrival time formula for 1-kt surface bursts:

t = r / [0.340 + (0.0350/r^{3/2}) + (0.0622/r)] seconds,

where r is in km and the term 0.340 is the speed of sound in km/s. To use this equation for other yields (or for air bursts) it is just necessary to scale both the time and distance down to a 1-kt surface burst blast equivalent using the cube-root scaling law.
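
The scaling recipe in the last sentence can be written out explicitly; a short sketch (the function name is mine):

```python
def arrival_time_s(r_km, W_kt=1.0):
    # Blast arrival time for a W-kiloton surface burst at ground range r_km:
    # scale the range down to 1 kt, apply the 1-kt fit, scale the time back up.
    s = W_kt ** (1.0 / 3.0)                     # cube-root scale factor
    r1 = r_km / s                               # equivalent 1-kt range (km)
    t1 = r1 / (0.340 + 0.0350 / r1**1.5 + 0.0622 / r1)
    return t1 * s

print(arrival_time_s(1.0))        # 1 km from 1 kt: ~2.3 s
```

At long ranges this tends to r/0.340, i.e. plain sound speed, as the text requires.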

When a nuclear weapon is air burst, the blast wave along the ground is modified by the surface reflection (Nevada desert terrain reflects 68% of the blast energy), in which the reflected blast moves through air already heated by the direct blast, so moving faster and merging with it. The total energy of this merged blast wave will therefore be 1 + 0.68 = 1.68 times that in a free air burst at a similar distance in infinite air. Because the range of any blast pressure is proportional to the cube-root of the energy, the ranges of the merged blast wave (Mach stem) will be (1.68)^{1/3} = 1.19 times greater than for a free air burst. This increase was observed in ordinary TNT bursts; but in nuclear explosions there are two further factors of importance, first seen at the 1945 Trinity test. First, nuclear bursts emit thermal radiation that heats the surface material, in turn heating the surface air by convection and allowing the blast wave to travel faster along the ground at higher overpressure. Hence, British nuclear test measurements of overpressures made with sensors on the tops of towers gave lower readings than American instruments close to the ground. Second, thermal radiation explodes the silicate sand crystals on a desert surface, like exploding popcorn, creating a very hot cloud of dust about 2-3 m high, called the 'thermal layer'.

The blast ‘precursor’ which was filmed around the fireball in the 1945 Trinity nuclear test was caused by thermal radiation pop-corning the desert sand into a cloud of hot gas through which the blast wave then moved faster than through cold air (because hot air adds more energy to the blast than cold air). The density of the dust loading in the precursor increased the fluid (air) inertia, reducing the peak overpressure but increasing wind (dynamic) pressure (which is proportional to density). American measurements on the precursor blast in Nevada tests Met, Priscilla, and Hood allowed development of a mathematical model in 1995 which includes thermal pop-corning (blow-off) of the desert surface, thermal layer growth, blast modification and the prediction of precursor effects on the waveforms of overpressure and dynamic pressure. This model was produced in secret for section 2 of Chapter 2 in Capabilities of Nuclear Weapons, EM-1: ‘Air Blast over Real (non-ideal) Surfaces’.

When the blast travels through this layer it billows upwards to 30 m in height, and the peak overpressure is actually reduced to 67% of normal, because the mass of dust loading increases the air's inertia. But the dynamic/drag pressure is increased several times, because it is proportional to the new higher air density (including dust), and this dramatically increases the ranges of destruction for wind-drag sensitive targets! This occurs in surface bursts of over 30 kt yield and in air bursts within 240W^{1/3} m of silicate or coral sand, where W is yield in kt; precursors occurred over coral islands in the 14.8 Mt Bravo test of 1954. The maximum ground range at which precursors are observed in bursts over sandy ground is 350W^{1/3} m. No precursor has been observed over water or ground covered in white smoke. Concrete, ice, snow, wet ground, and cities would generally reflect the thermal flash and not produce a thermal precursor. The precursor is most important at high overpressures, where the thermal heating effect is greatest: no precursor or blast pressure change occurs below 40 kPa peak overpressure. A precursor will reduce a predicted 70 kPa peak overpressure to 84% of that value, a predicted 85 kPa to 80%, 140 kPa to 75%, and predicted 210-3,500 kPa to 67% (Philip J. Dolan, 'Capabilities of Nuclear Weapons', Pentagon, DNA-EM-1, Fig. 2-21, 1981).
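
The Fig. 2-21 corrections quoted above can be tabulated; a sketch with hypothetical linear interpolation between the quoted points (the interpolation scheme is my assumption, not Dolan's actual curve):

```python
# Precursor correction points quoted from Dolan's Fig. 2-21:
# (ideal peak overpressure in kPa, multiplying factor)
_POINTS = [(40.0, 1.00), (70.0, 0.84), (85.0, 0.80), (140.0, 0.75), (210.0, 0.67)]

def precursor_factor(ideal_kpa):
    # Multiplier applied to an ideal-surface peak overpressure prediction
    # when a thermal precursor forms; linear interpolation is assumed.
    if ideal_kpa <= 40.0:
        return 1.00          # no precursor effect below 40 kPa
    if ideal_kpa >= 210.0:
        return 0.67          # quoted as constant from 210 to 3,500 kPa
    for (p0, f0), (p1, f1) in zip(_POINTS, _POINTS[1:]):
        if p0 <= ideal_kpa <= p1:
            return f0 + (f1 - f0) * (ideal_kpa - p0) / (p1 - p0)

print(precursor_factor(70.0) * 70.0)   # 70 kPa ideal -> ~59 kPa with precursor
```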

In 1953, interest focussed on the increased drag damage to vehicles and wind sensitive targets exposed to the precursor from the Grable test. In 1955 it was discovered at the Teapot Nevada tests that the temperature of the precursor dust cloud reached 250 °C at 40 milliseconds after the arrival of the blast wave (U.S. weapon test report WT-1218). The hot precursor dust burned the skin of animals in an open shelter (which protected against thermal radiation) at 320 m from a 30-kt tower burst (report WT-1179). Japanese working in open tunnel shelters 90 m from ground zero at Nagasaki reported skin burns from the blast wind, although their overhead earth cover shielded out the radiation. At 250 °C, skin needs exposure for 0.75 second to produce reddening, 1.5 seconds to produce blistering, and 2.3 seconds to cause charring (at 480 °C, these exposure times are reduced by a factor of 10).

Small rain or mist droplets (0.25 cm/hour rainfall rate) and fog droplets are evaporated by the warm blast wave, reducing the peak overpressure and the overpressure duration each by about 5%. This was observed in TNT bomb tests in 1944 (Los Alamos report LA-217). Large droplets in heavy rainfall (1.3 cm/hour) are broken up by the blast before evaporating, which causes a 20% reduction in peak overpressure. This was observed when heavy rainfall occurred over part of Bikini Atoll during the 110 kt Koon nuclear test in 1954; comparison of peak overpressures on each side of ground zero indicated a 20% reduction due to the localised heavy rain (report WT-905).

Dr William Penney, who measured blasts at the early American nuclear tests and was test director for the many Australian-British tests at Monte Bello, Emu Field and Maralinga, published the results in 1970 (Phil. Trans. Roy. Soc. London, v. 266A, pp. 358-424): 'nuclear explosives cause the air near the ground to be warmed by heating through the heat flash.' This has two important implications that are ignored by the American publications on blast. First, since the heat flash scales more rapidly with yield than the cube root used for blast, the thermal enhancement increases out of step (so test data from 30-kt bursts show more thermal enhancement than 1-kt tests). Second, Penney had blast gauges both at ground level and on poles 3 m above the ground at Maralinga, where the red desert soil readily absorbed the heat flash. The peak overpressures at ground level were significantly higher than at 3-m height. The average pressure loading on a 10-m high building, which causes the force and the damage, is therefore less than that measured at ground level.

At 408 m from a 1-kt burst at 250-m altitude, Penney points out that his scaled data for a marked thermal layer effect (red desert soil) give 58 kPa, whereas the American government manual gave 77 kPa for 'nearly ideal' conditions, over 30% higher. Penney's data for no thermal effect gave 71 kPa, indicating that the American test data had been scaled down from a higher yield than the British test, where thermal heating was greater. Ignoring thermal flash absorption over the short ranges of interest, the thermal energy received scales as W/r^2, where W is yield and r is distance, while blast ranges scale as W^{1/3}, so the thermal energy received at any given scaled blast range varies as W/(W^{1/3})^2 = W^{1/3}. Therefore, when serious thermal heating occurs, the peak overpressures scale up with yield in addition to distances. There is little effect in a surface burst (unless the fireball is very large), because the thermal radiation is then emitted parallel to the ground and is not absorbed by it, and American high yield tests occurred over transparent water which did not heat up at the surface. A 10-Mt air burst over dark coloured ground would deposit 10 times as much thermal energy on the ground at the scaled blast ranges measured in 10-kt tests in America and Australia, so there would be much greater thermal enhancement of the blast ranges.
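
The W^{1/3} thermal enhancement argument is a one-line calculation; a sketch confirming the factor of 10 for a 10 kt to 10 Mt comparison (the function name is mine):

```python
def thermal_ratio_at_scaled_blast_range(W1_kt, W2_kt):
    # Thermal fluence at a fixed *scaled blast* range varies as W^(1/3):
    # fluence ~ W/r^2 with r scaled as W^(1/3) gives W/(W^(1/3))^2 = W^(1/3).
    return (W2_kt / W1_kt) ** (1.0 / 3.0)

print(thermal_ratio_at_scaled_blast_range(10.0, 10_000.0))   # 10 kt -> 10 Mt: ~10x
```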

In addition to this point about blast data analysis from nuclear tests, Penney makes another. The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave; the American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage to large objects that had been simply crushed by the blast overpressure or bent by the blast wind pressure. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, the crushing of a blueprint container indicated that the peak overpressure was down by 30% compared to desert test data, due to the damage caused en route. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: 'clear evidence that the blast was less than it would have been from an explosion over an open site.'

A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m, as shown by empty petrol can damage: 'clear evidence of reduction of blast by the damage caused…' If each house destroyed in a radial line uses 1% of the blast energy, then after 200 houses are destroyed the blast will be down to just 0.99^{200} = 0.13 of what it was before, so 87% of the blast energy will have been lost in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific Ocean test. You can't 'have your cake and eat it': either vast areas are affected with little damage, or the energy is used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air burst were fires set off when the blast overturned paper screens, bamboo furniture and the like onto the charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at about 8:15 am. The heat flash can't set wood alight directly, as proved in Nevada tests: it just scorches wood (unless it is painted white). You need intermediaries like paper litter and trash in a line-of-sight from the fireball before you can get direct ignition, as proved by the clarity of the 'shadowing' remaining afterwards (such as scorch protection of tarmac, and of dark paint, by people who were flash burned). In general, each building will absorb a roughly constant fraction of the blast energy incident on it (about 1% for wood frame houses to about 5% for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures, while at low pressures the building just vibrates slightly. So the percentage of the incident blast energy absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure.
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
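
Penney's depletion argument is simple compounding; a sketch (the function name is mine, and the 5% brick figure is from the text):

```python
def surviving_blast_fraction(n_buildings, loss_fraction=0.01):
    # Each building in a radial line absorbs a fixed fraction of the
    # incident blast energy, so the depletion compounds exponentially.
    return (1.0 - loss_fraction) ** n_buildings

print(surviving_blast_fraction(200))        # wood frame, 1% each: ~0.13
print(surviving_blast_fraction(200, 0.05))  # brick/masonry, 5% each: far smaller
```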

Mathematical representation of ideal pressure-time curves

In general, Dr Brode's empirical and semi-empirical formulae are extremely useful, but there are problems with the pressure-time form factors. Brode uses the sum of three exponential terms to represent the family of pressure-time curves in the positive (compression) phase for a location receiving any particular peak overpressure. The issue we have with Brode is that the analytically correct physical theory gives a much simpler formula, which illustrates the difference between reliance on computers and reliance on physical understanding. The time-form graphs given by Brode in his 1968 article do not agree with the formulae he provides or with Glasstone and Dolan 1977, although they do agree with Glasstone 1962/4.

The general form of Brode's formula is P_t = P_max (1 – t/D_p+)(xe^{-at} + ye^{-bt} + ze^{-ct}). The decay constants a, b and c are themselves functions of the peak overpressure, so it is very complex. P_t is the time-varying overpressure, P_max is the peak overpressure, t is time measured from blast arrival time (not from detonation time!), and D_p+ is the positive phase overpressure duration.

Now consider the actual physics. The time decay of overpressure at a fixed location as the blast wave passes in a shock-tube (a long, uniform, air filled cylinder, where the blast cannot diverge sideways as it propagates) is P_t = P_max (1 – t/D_p+)e^{-at}. In a real air burst, however, the pressure additionally decays by divergence with time, since the air has another dimension in which to spread (sideways). This transverse dimension is the circumference C, proportional to the blast radius r by the simple formula C = 2πr. In other words, as the blast sphere grows, the pressure falls everywhere because there is a greater volume for the air to fill. We are interested in times rather than radii or circumferences, but blast radius is approximately proportional to the time after detonation. Hence, we can adapt the shock-tube decay formula for the additional fall caused by sideways divergence of the expanding blast by dividing it by a normalised function of time and pressure (unity is added in the denominator because t is time after blast arrival, not time after explosion):

P_t = P_max [(1 – t/D_p+)e^{-at}] / [1 + 1.6(P_max/P_o)(t/D_p+)]

This formula appears to model the pressure-time curves accurately for all peak overpressures (a ~ 0 if just considering the positive or compression phase; P_o is ambient pressure). The fall of the wind (dynamic) pressure, q, is related to this overpressure decay rate by standard relationships discussed by Glasstone and Dolan for the case γ = 1.4: q_t = q (P_t/P_max)^2 [(P_max + 7P_o)/(P_t + 7P_o)].
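
Both decay laws can be sketched together; a minimal Python version (the function names and the sea-level default ambient pressure are mine; a = 0 for the positive phase, as noted):

```python
import math

def overpressure_t(t, P_max, D_pos, P0=101.325, a=0.0):
    # Shock-tube decay (1 - t/D+)e^(-at), divided by the divergence
    # correction 1 + 1.6(P_max/P0)(t/D+), per the formula in the text.
    lin = 1.0 - t / D_pos
    return P_max * lin * math.exp(-a * t) / (1.0 + 1.6 * (P_max / P0) * (t / D_pos))

def dynamic_pressure_t(t, P_max, q_max, D_pos, P0=101.325):
    # Standard gamma = 1.4 relation linking q(t) to the overpressure decay.
    P_t = overpressure_t(t, P_max, D_pos, P0)
    return q_max * (P_t / P_max) ** 2 * (P_max + 7.0 * P0) / (P_t + 7.0 * P0)

# 100 kPa peak, 1 s positive phase: overpressure at start, middle and end
for t in (0.0, 0.5, 1.0):
    print(f"t = {t}: {overpressure_t(t, 100.0, 1.0):.1f} kPa")
```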

Cratering problems

From an earlier post:

‘Data on the coral craters are incorporated into empirical formulas used to predict the size and shape of nuclear craters. These formulas, we now believe, greatly overestimate surface burst effectiveness in typical continental geologies ... coral is saturated, highly porous, and permeable ... When the coral is dry, it transmits shocks poorly. The crushing and collapse of its pores attenuate the shock rapidly with distance ... Pores filled with water transmit the shock better than air-filled pores, so the shock travels with less attenuation and can damage large volumes of coral far from the source.’ – L.G. Margolin, et al., Computer Simulation of Nuclear Weapons Effects, Lawrence Livermore National Laboratory, UCRL-98438 Preprint, 25 March 1988, p. 5.

The latest crater scaling laws are described in the report:
R. M. Schmidt, K. R. Housen and K.A. Holsapple, Gravity Effects in Cratering, DNA-TR-86-182, Defense Nuclear Agency, Washington D.C., 1988.

In the range of 1 kt – 10 Mt there is a transition from cube-root to fourth-root scaling, and the average scaling law suggested by Nevada soil and Pacific coral atoll data, W^{0.3} (used by Glasstone and Dolan), was shown to be wrong in 1987, because the empirical data were too limited (the biggest Nevada cratering test was Sedan, 104 kt) and the W^{0.3} empirical law ignored energy conservation at high yields, where gravity effects kick in and curtail the sizes predicted by hydrodynamic cratering physics.

The W^{0.3} scaling law used in Glasstone and Dolan 1977 is false because it violates the conservation of energy used by the explosion in ejecting massive amounts of debris from the crater against gravity. The yield-dependent scaling for crater dimensions (radius and depth) transitions from the cube-root of yield at low yields (below 1 kt) to the fourth-root at high yields, because of gravity. At low yields, the fraction of the bomb energy used to physically lift ejecta out of the crater against gravity (to produce the surrounding lip and debris) is trivial compared to the hydrodynamic energy used to physically break up the soil. But at higher yields the crater is deep, so a significant amount of bomb energy must be employed to do work excavating earth against gravity.

Consider the energy utilisation in cratering. The total energy used in cratering is the sum of the hydrodynamic energy and the gravitational work energy. The hydrodynamic term is proportional to the cube of the crater radius or depth, as shown by the reliability of cube-root scaling at subkiloton yields: the energy needed to hydrodynamically excavate a unit volume of soil is a constant, so the energy required for hydrodynamic pulverisation of crater mass m is E = mX, where X is the number of Joules needed for the hydrodynamic excavation of 1 kg of soil.

But where the crater is deep, in bigger explosions, the gravitational work energy E = mgh needed to eject crater mass m a vertical distance h upwards out of the hole to the lip, against gravitational acceleration g (9.8 m/s^2), becomes larger than the hydrodynamic energy needed merely to break up the matter, so the gravity work effect then governs the crater scaling law. The total energy used in crater formation is the sum of the two terms, hydrodynamic and gravitational: E = (mX) + (mgh).

The (mX) term is proportional to the cube of the crater depth (because m is the product of volume and density, and volume is proportional to depth cubed if the crater radius/depth ratio is constant), while the (mgh) term is proportional to the fourth power of the crater depth, because m is proportional to density times depth cubed and h is directly proportional to depth (h is roughly half the crater depth), so the product mgh is proportional to depth cubed times depth, i.e., to the fourth power of crater depth. So for bigger craters and bigger bomb yields, a larger fraction of the total cratering energy is used to overcome gravity, causing the gravity term to predominate and the crater size to scale at most as W^{1/4} at high yields. This makes the crater size scaling law transition from cube-root (W^{1/3}) at low yields to fourth-root (W^{1/4}) at higher yields!
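
This transition can be demonstrated numerically by solving E = mX + mgh (with h = D/2 and a paraboloid crater of radius/depth ratio 1.88, as derived later in this post) by bisection. The breakup energy X and the cratering energy fraction used below are purely illustrative assumptions, chosen only to display the change of scaling exponent:

```python
import math

RHO = 1700.0      # dry soil density, kg/m^3 (from the text)
RATIO = 1.88      # crater radius/depth ratio for dry soil (from the text)
X = 100.0         # assumed hydrodynamic breakup energy, J/kg (illustrative)
G = 9.8           # m/s^2

def cratering_energy(D):
    # E = mX + (1/2)mgD for a paraboloid crater of depth D metres.
    m = RHO * (math.pi / 2.0) * (RATIO * D) ** 2 * D
    return m * (X + 0.5 * G * D)

def crater_depth(E):
    # Invert cratering_energy(D) = E by bisection.
    lo, hi = 1e-3, 1e5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cratering_energy(mid) < E:
            lo = mid
        else:
            hi = mid
    return lo

# Effective local scaling exponent n in D ~ W^n, found by doubling the yield:
for W_kt in (1e-3, 1.0, 1e4):
    E = 0.03 * W_kt * 4.18e12        # assume 3% of yield used in cratering
    n = math.log(crater_depth(2 * E) / crater_depth(E)) / math.log(2.0)
    print(f"W = {W_kt:8g} kt: n = {n:.2f}")
```

The printed exponent drifts from near 1/3 at sub-kiloton yields down towards 1/4 at very high yields, which is the whole point of the argument above.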

It’s fascinating that, despite the best scientific brains working on nuclear weapons effects for many decades - the Manhattan Project focussed a large amount of effort on the problem, and utilised the top physicists who had developed quantum mechanics and nuclear physics, and people like Bethe were still writing secret papers on fireball effects into the 1960s - such fundamental physical effects were simply ignored for decades. This was due to the restricted number of people working on the problem due to secrecy, and maybe some kind of ‘groupthink’ (psychological peer-pressure): not to upset colleagues by ‘rocking the boat’ with too much freethinking, radical questions, innovative ideas.

The equation E = mgh isn't a speculative theory requiring nuclear tests to confirm it; it's a basic physical fact that can be experimentally proved in any physics laboratory (the electric energy supplied to a motor while it winches up a standard 1 kg mass is a simple example of the kind of physical fact involved). In trying to analyse the effects of nuclear weapons, false approximations were sometimes used, which then became embedded as a doctrine or faith about the 'correct' way to approach or analyse a particular problem. People questioned about a fundamental belief in such analysis are then tempted to respond dogmatically by simply referring to the 'consensus', as if accepted dogmatic religious-style authority were somehow a substitute for science, which is of course the unceasing need to keep asking probing questions, checking factual details for errors, omissions and misunderstandings, and forever searching for a deeper understanding of nature.

For example, in the case of a 10 Mt surface burst on dry soil, the 1957, 1962, and 1964 editions of Glasstone's Effects of Nuclear Weapons predicted a crater radius of 414 metres (the 10.4 Mt Mike test in 1952 produced a crater radius over twice that size, but that was due to the water-saturated porous coral of the island and surrounding reef, which is crushed very easily by the shock wave at high overpressures). This was reduced to 295 metres in Glasstone and Dolan, 1977, when the scaling law was changed from the cube-root to the 0.3 power of yield. The 1981 revision of Dolan's DNA-EM-1 brings it down to 145 metres, because of the tiny amount of energy which goes into the bomb case shock for a modern, efficient 10 Mt class thermonuclear warhead (Brode and Bjork discovered this bomb design effect on cratering in 1960; high-yield efficient weapons release over 80% of their yield as X-rays, which are inefficient at cratering because they just cause ablation of the soil below the bomb, creating a shock wave and some compression, but far less cratering action than the dense bomb case shock produces in soil). Then in 1987, the introduction of gravity effects reduced the crater radius for a 10 Mt surface burst on dry soil to just 92 metres, only 22% of the figure believed up to 1964!

‘It is shown that the primary cause of cratering for such an explosion is not “airslap,” as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. [Hence the immense crater size from the 10 Mt liquid-deuterium Mike test in 1952 with its massive 82 ton steel casing shock is irrelevant to compact modern warheads which have lighter casings and are more efficient and produce smaller case shocks and thus smaller craters.]’ - H. L. Brode and R. L. Bjork, Cratering from a Megaton Surface Burst, RAND Corp., RM-2600, 1960.



As L.G. Margolin states (above), improved understanding of crater data from the 1952-8 nuclear tests at Bikini and Eniwetok Atolls led to a reduction in predicted crater sizes for land surface bursts. The massive crater from the 10.4 Mt Mike shot at Eniwetok in 1952, 950 m in radius and 50 m deep under water (53 m deep as measured from the original bomb position), occurred in the wet coral reef surrounding an island, because fragile water-saturated coral is pulverised to sand by the shock wave pressure. Revised editions of the U.S. Department of Defense book The Effects of Nuclear Weapons and of the secret manual Capabilities of Nuclear Weapons diminished the predicted crater radius for a surface burst on dry soil:


In the 1957-64 editions, the crater radius was scaled by the well-proved TNT cratering 'cube-root law', W^{1/3} (now known to be valid only where the work done excavating against gravity is trivial compared to the work done breaking up material). In the 1977 edition, the crater radius was scaled by less than the cube-root law, in fact by the 0.3 power of yield, W^{0.3}, in an effort to fit the American nuclear test data. Unfortunately, as shown in the following table, the American nuclear test data are too patchy for a proper extrapolation to dry soil surface bursts, because the one high yield (104-kt Sedan) Nevada explosive-type cratering burst was buried at a depth of 194 m. This changes two sensitive variables at the same time, preventing reliable extrapolation.

*These bombs were at the bottom of the water tank, with 3 m of water above and around to increase the case-shock effect by X-ray absorption in water.

**650 kg device mass. The Cactus crater was in 1979 used to inter (under a concrete dome) some 84,100 m3 of contaminated topsoil and World War II munitions debris on Eniwetok Atoll in the American clean-up and decontamination work. The initial average height of the lip of this crater was 3.35 m.


During World War II, experiments showed that W kt of TNT detonated on dry soil produces a crater with a radius of 30W^{1/3} m. The radius of the W kt TNT charge itself is 5.4W^{1/3} m, or 18% of the dry soil crater radius. The crater from a nuclear weapon is almost entirely due to the 'case shock', not the X-ray emission. This was discovered in the Koa and Seminole experiments, where the bombs were fired in water tanks to increase X-ray coupling to the ground (see table above). Surface burst nuclear weapons with yields below 2 kt (high mass-to-yield ratio, low X-ray energy emission) produce craters similar to those from 23% of their TNT equivalent, while surface burst high-yield nuclear weapons (low mass-to-yield ratio, high X-ray energy emission) produce craters similar to those from only 2.9% of their TNT equivalent.

*These sizes apply to low yield-to-mass ratio nuclear warheads with low X-ray energy emission. These produce the greatest craters, because most of the energy is initially in the case-shock of the bomb, rather than in X-rays (see below). These radii should be corrected for X-ray emission and total yield by the multiplying factor 1.41(fW)^{1/3} (1 + 1.82W^{1/4})^{-1/3}; see below for the derivation, including the gravitational effect at high yields. This factor is 1 when the case-shock energy fraction f and total yield W (kilotons) are both equal to 1. For pure fission warheads, f = 1. For a 1-megaton modern thermonuclear warhead, f = 1/8, because of the lower case-shock energy and higher proportion of energy in X-rays.
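
Numerically, the correction factor can be combined with the low-yield TNT comparison. In this sketch I take the 23% TNT-equivalent low-yield nuclear radius, 30(0.23)^{1/3} ≈ 18.4 m at 1 kt, as the baseline that the factor multiplies; that choice is my assumption, since the table itself is not reproduced here:

```python
def nuclear_crater_radius_m(W_kt, f_case):
    # 1-kt baseline: low-yield nuclear bursts crater like 23% of their TNT
    # equivalent, and W kt of TNT craters to 30 W^(1/3) m (assumed baseline).
    r_1kt = 30.0 * 0.23 ** (1.0 / 3.0)
    # Correction factor from the text, equal to 1 at f = W = 1 kt:
    factor = (1.41 * (f_case * W_kt) ** (1.0 / 3.0)
              / (1.0 + 1.82 * W_kt ** 0.25) ** (1.0 / 3.0))
    return r_1kt * factor

print(nuclear_crater_radius_m(1.0, 1.0))          # ~18 m for 1 kt fission
print(nuclear_crater_radius_m(10_000.0, 0.125))   # ~104 m for 10 Mt, f = 1/8
```

With these assumptions the 10 Mt, f = 1/8 case comes out near 104 m, in the same ballpark as the 92 m post-1987 figure quoted earlier in this post.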

**These sizes apply to a different mechanism of cratering; namely the crushing of porous coral by the shock wave, so simple ‘cube-root’ scaling applies here.

About 72% of the energy entering the ground from a TNT explosion is used in cratering, while 28% produces ground shock. The main ground shock from a surface burst nuclear explosion is derived from the 7.5% of the total X-ray emission which is absorbed by the ground within a radius of 3W^{1/3} m. The downward recoil of the ground in response to the explosive ablation of surface sand initiates a ground shock wave within a microsecond. The case shock of a nuclear weapon delivers 50% of its energy downward, all of which is absorbed by the ground on account of its high density, and this is the principal crater mechanism. As debris is ejected from the crater in a cone shape, it absorbs some of the thermal radiation from the fireball within, and is melted, later becoming contaminated and deposited as fallout. When nuclear weapons are detonated underground, the true TNT equivalent for a similar crater is 30% of the nuclear yield, because the X-rays cannot escape into the air, although much energy is then wasted in melting and heating soil underground.

The long delay in nuclear effects people understanding crater scaling laws properly has an interesting history. Although Galileo identified craters on the moon using his telescope in 1609, it was only when a couple of astronauts from Apollo 14 visited an allegedly ‘volcanic lava crater’ (crater Fra Mauro) on the moon that they discovered the ejecta from a shallow explosion crater, without any volcanic lava. The idea of explosive cratering had been falsely discounted because physicists had observed very few craters on the earth and many on the moon. They had falsely assumed that the reason for this was strong volcanism on the moon, when it is really due to impact craters having been mostly eroded by geological processes on earth, and mostly preserved on the moon!

Early theoretical studies of crater formation, even those using powerful computer simulations, employed explosion dynamics that ignored gravitation. Almost all public-domain books on the 'effects of nuclear weapons' therefore give nonsense for megaton surface bursts. Only in 1986 was a full study made of the effects of gravity in reducing crater sizes in the megaton range: R. M. Schmidt, K. A. Holsapple, and K. R. Housen, 'Gravity effects in cratering', U.S. Department of Defense, Defense Nuclear Agency, report DNA-TR-86-182. In addition to secrecy over the details, the complexity of the unclassified portions of the new scaling procedures obscures the mechanisms, so here is a simpler and clearer analytical explanation:

If the energy used in cratering is E, the cratered mass M, and the explosive energy needed to physically break up a unit mass of the soil under consideration is X, then the old equation E = MX (which implies that crater volume is directly proportional to bomb yield and hence crater depth and diameter scale as the cube-root of yield) is completely false, as it omits gravitational work energy needed to shift soil from the crater to the surrounding ground.

This gravitational work energy is easy to estimate as ½MgD, where M is the mass excavated, g is gravitational acceleration (9.8 m/s²), D is crater depth, and ½ is a rough approximation of the average proportion of the crater depth through which displaced soil is moved vertically against gravity in forming the crater.

Hence the correct cratering energy is not E = MX but rather E = MX + ½MgD. For yields well below 1 kt, the second term on the right-hand side of this expression, ½MgD, is insignificant compared to MX, so the volume excavated scales directly with yield; and since the volume is proportional to the cube of the average linear dimension, this means that the radius and depth both scale with the cube-root of yield for low yields.

But for very large yields, the second term, ½MgD, becomes more important, and this use of energy to overcome gravity in excavation limits the energy available for explosive digging, so the linear dimensions then scale as only the fourth-root (or quarter-power) of yield. Surface burst craters are paraboloid in shape, so they have a volume of πR²D/2 = (π/2)(R/D)²D³, where the ratio R/D is about 1.88 for a surface burst on dry soil. The mass of crater material is this volume multiplied by the density, ρ, of the soil material: M = ρπ(R/D)²D³/2.

Hence, the total cratering energy is: E = MX + ½MgD = ρ(π/2)R²D(X + ½gD).
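The energy balance above is easy to evaluate numerically. A minimal sketch, using the 1-kt dry-soil crater dimensions quoted later in this post; the pulverisation energy X = 500 J/kg is an assumed illustrative value, not a figure from this post:

```python
import math

def cratering_energy(R, D, rho, X, g=9.8):
    """Total cratering energy E = M*X + 0.5*M*g*D for a paraboloid crater
    of radius R and depth D (metres) in soil of density rho (kg/m^3),
    where X (J/kg) is the energy needed to pulverise unit mass of soil.
    Returns (E, fraction of E spent working against gravity)."""
    M = rho * math.pi * R**2 * D / 2.0   # paraboloid volume times density
    explosive_term = M * X               # shock pulverisation energy
    gravity_term = 0.5 * M * g * D       # work done lifting soil out of the crater
    E = explosive_term + gravity_term
    return E, gravity_term / E

# 1-kt surface burst on dry soil (R = 18.37 m, D = 9.784 m, rho = 1,700 kg/m^3):
E, gravity_fraction = cratering_energy(R=18.37, D=9.784, rho=1700.0, X=500.0)
```

With these inputs the gravity term is still under a tenth of the total, illustrating the claim that cube-root scaling holds at low yields.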

The density of hard rock, soft rock and hard soil (for example granite, sandstone or basalt) is typically 2.65 kg/litre (2,650 kg per cubic metre), wet soil is around 2.10 kg/litre, water saturated coral reef is 2.02 kg/litre, typical dry soil is 1.70 kg/litre, Nevada desert is 1.60 kg/litre, lunar soil is 1.50 kg/litre (for analysis of the craters on the moon, where gravity is 6 times smaller than at the earth’s surface), and ice is 0.93 kg/litre.

The changeover from cube-root to quarter-root scaling with increasing yield means that old crater size estimates (for example, those in the well-known 1977 book by Glasstone and Dolan, The Effects of Nuclear Weapons, U.S. Department of Defense, 1977) are far too big in the megaton range, and need to be multiplied by a correction factor.

The correction factor is easy to find. The purely explosive cratering energy efficiency, f, falls as gravity takes more energy, and is simply f = MX/(MX + ½MgD) = (1 + ½gD/X)^{-1}.

Because gravity effects are small in the sub-kiloton and low-kiloton range, the crater radius for small explosions indeed scales hydrodynamically, as R ~ E^{1/3}, so the 1-kt crater sizes in Glasstone and Dolan should be scaled by the correct factor R ~ W^{1/3}(1 + ½gD/X)^{-1/3} instead of by the empirical factor of R ~ W^{0.3} given by Glasstone and Dolan for Nevada explosion data of 1-100 kt. Glasstone and Dolan overestimate crater sizes by a large factor for megaton-yield bursts. (The Americans had been misled by data from coral craters, since coral is porous and is simply crushed to sand by the shock wave, instead of being excavated explosively like other media.)

In megaton surface bursts on wet soft rock, the depth D increases only as W^{1/4}, the 'fourth root' or 'one-quarter power' of yield scaling. Obviously for small craters, D scales as the cube-root of yield, but the correction factor (1 + ½gD/X)^{-1/3} is only significant for the megaton range anyway, so a good approximation is to take D as proportional to the fourth-root of yield in this correction factor formula. The value of X for any soil material is a constant which may be easily calculated from the published crater sizes for a 1-kt surface burst, where gravity is not of importance (X is the energy used in cratering divided by the cratered mass, the former being determined by an energy balance for the explosion effects).
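The corrected scaling can be sketched in a few lines. This uses the 1-kt dry-soil crater dimensions quoted later in this post as the reference; the value X = 500 J/kg is again an assumed illustrative figure, so the outputs show the shape of the correction, not official crater predictions:

```python
def corrected_radius(W_kt, R1=18.37, D1=9.784, X=500.0, g=9.8):
    """Crater radius (m) for yield W_kt kilotons: scale the 1-kt radius R1
    by W^(1/3), then apply the gravity correction (1 + g*D/(2X))^(-1/3),
    taking D proportional to W^(1/4) inside the correction, as in the text.
    X (J/kg) is an assumed pulverisation energy, for illustration only."""
    D = D1 * W_kt ** 0.25                          # fourth-root depth in the correction
    f = (1.0 + 0.5 * g * D / X) ** (-1.0 / 3.0)    # gravity correction factor
    return R1 * W_kt ** (1.0 / 3.0) * f

r_1kt = corrected_radius(1.0)       # barely below the uncorrected 18.37 m
r_9mt = corrected_radius(9000.0)    # well below plain cube-root scaling
```

The correction factor shrinks as yield rises, which is exactly the changeover from cube-root towards quarter-root scaling described above.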

The crater is made by two processes: the shock wave pulverisation of the soil (the energy required to do this is approximately proportional to the mass of soil pulverised) and the upward recoil of pulverised soil in reaction (by Newton’s 3rd law) to the downward push of the explosion (the energy required to do this excavation depends on gravitation, since it takes energy MgD to raise mass M a distance D upward against gravity acceleration g).

Russian near surface burst nuclear test cratering data (update of 13 May 2007):

The crater depth is defined as the final pit depth measured not from the top of the crater lip, but from the undisturbed surrounding ground. Likewise the crater radius is defined not as the radius to the top of the lip, but as the radius measured at the level of the undisturbed ground. For the Australian-British 1.5-kt Buffalo-2 nuclear surface burst in dry soil at Maralinga in 1956, the crater lip height was 0.2D where D was crater depth, the radius of the crater lip crest was 1.25R where R was the crater radius, and the radius of the ground rupture zone was 1.4R (these data are taken from U.K. test report AWRE-T37/57, 1957).

The following table contains crater data for three near surface bursts of low yield. These fission weapons, with yields of 0.5-1.5 kilotons, were all of low X-ray emission, which means they produced twice the crater radii and depth that would occur if they had the usual X-ray emission of large warheads (which is about 80%). The lower the X-ray emission, the greater is the energy retained by the bomb casing. The case shock has high density, so it ploughs itself deeply into the ground and efficiently delivers kinetic energy for crater formation (X-rays merely heat up the surface, and any physical push is created by the recoil from surface ablation, which is feeble for crater production, as is the recoil due to the reflection of the air blast wave).


These data when corrected for burst height to a true surface burst and corrected by the cube-root law to 1-kiloton yield (cube-root scaling is valid below 2-kt), suggest that for such low X-ray weapons, the crater size for a 1-kt surface burst on dry soil is R = 18.37 m, D = 9.784 m.

(Note added 13 May 2007: attention should be given to including Russian nuclear test data for surface bursts - see table already given earlier in this post - in this analysis, to increase accuracy.)

It would be useful to have some exact figure showing how much energy is used to produce the crater in these tests. Careful measurements were made of blast and thermal radiation at surface bursts, and these give approximate figures. The blast wave and thermal radiation energy is reduced significantly in low-yield surface bursts. In the Australian-British nuclear tests at Maralinga in 1956 (Operation Buffalo), the first shot (a 15-kt tower burst which produced an insignificant crater effect) had a measured blast yield of 7.7 kt of TNT equivalent, or 51% of the total yield, but the second shot (a 1.5-kt surface burst which produced a deep crater) had a measured blast yield of 0.46 kt of TNT equivalent, or 31% of the total yield. The difference is smaller for higher yield detonations. Computer simulations of crater formation indicated that in the 0.50-kt 1962 Nevada surface burst, Johnnie Boy, some 30% of the total kinetic energy of the explosion must have been used in crater formation and ground shock, as compared to only 3.75% in megaton surface bursts. For comparison, 67% of the energy of an iron meteor striking dry soil at 20 km/s and normal incidence (90 degrees) becomes ground shock and crater formation.


In the case of the 9-Mt missile warheads stockpiled in America to destroy Moscow's bunkers in a nuclear war, in the mid 1980s it was suddenly realised that their cratering radius was only a small fraction of what had previously been believed. President Reagan's official political response was to cover this up, keeping news of it from leaking to Moscow, and to press on with arms reduction talks. The Soviet Union collapsed before it was aware of the impotence of American power for destroying the Soviet command centres in a nuclear war! (Soviet evaluation of nuclear test effects was even worse than American efforts! The Soviets could not even work out how to make a camera photograph the EMP on an oscilloscope without the dot saturating the film, which the Americans did by a circuit to keep the dot off-screen until just before detonation. Soviet 1962 'measurements' of EMP thus relied on the distance sparks would jump, the rating of the fuses blown by current surges, and electric fires in power stations! As far as cratering goes, all of the Russian surface bursts were of kiloton-range yield; not a single one had a megaton yield. At least America had some data for megaton shots on coral. The big Russian tests, up to 50 megatons, were air bursts and produced no craters.)

Oleg Penkovskiy, the famed spy, betrayed the Russian secret underground command centre in the Ural Mountain range to America, but that is built under tundra. With missile delivery times falling and the chance of a sudden war increasing, the Russians also had a World War II shelter under a location near Kuybyshev, and there is a later one at Ramenki, but the leaders would not have had time to reach such shelters from Moscow. So they then dug a very deep shelter with linked tunnels under the Kremlin in Moscow. When it was completed in 1982, the project manager (Chernenko, later general secretary) was awarded the Lenin Prize! The shelter is 200-300 metres underground with the well protected floors at the lowest levels, and accommodates up to 10,000 key personnel. A 9-megaton surface burst causes severe underground destruction at 1.5 crater radii; for the 'wet soft rock' geological environment of the Moscow basin, this is 1.5 x 120 = 180 metres. You can see the problem! Even the biggest American warheads, 9 megatons, carried by the tremendous Titan missiles, could not seriously threaten the Russian leadership in a war, because the Russian shelters were simply too deep. Nuclear horror tales are just bunk. The duration and penetrating power of the heat flash and fallout radiation are also media-exaggerated.

Severe damage to missile silos occurs at 1.25 crater radii (rupture); severe damage to underground shelters occurs at 1.5 crater radii (collapse).


The effects from nuclear weapons that are 'scary' – in that they cover the widest areas – are all easily mitigated effects, like flying glass (don't watch the fireball from behind a window), heat flash (again, look away, or better, 'duck and cover' under a table, or just lie face down facing away, to avert burns to exposed face and hands as well as glass fragments; dark clothes take time to ignite, and someone lying down can put out any ignition after the flash simply by rolling over), and fallout (intense fission product radiation is due to fast decay, so it doesn't last long: the mixture decays faster than 1/time, and at 2 days it is on average just 1% of the level at 1 hour; most of it is stopped by brick buildings). As the secret photos of fallout-covered trays from the 3.53-megaton 1956 Zuni test at Bikini Atoll show (see Dr Terry Triffet and Philip D. LaRiviere, Characterisation of Fallout, WT-1317, 1961, long classified 'Secret – Restricted Data', but now available), the fallout in significant danger areas is a clearly visible deposit of fused sand, not a mysterious death-ray gas: you get hundreds of sand-like grains per square centimetre in lethal fallout areas where cover is necessary, but it is not so heavy that you'll see the Statue of Liberty half covered by fallout, as in 'Planet of the Apes'. It is true that a thunderstorm after an air burst can produce rainout, but that just goes down the drain, carrying the tiny air burst particles with it, and drains are deep enough to shield the gamma radiation! Triffet and LaRiviere also point out that a dirty bomb with U-238 in its casing produces a lot of Np-239 and related neutron capture products, which predominate over most fission products for a week or two but emit very easily shielded, low-energy gamma rays. Therefore you don't need sophisticated shelters to screen most of the radiation. The sand-like fallout doesn't diffuse like a gas, either. G. G. Stokes found that for a spherical particle of radius r moving at speed v through air of viscosity μ, the drag force is F = 6πμrv, which allows fallout descent times to be calculated.
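Stokes' drag law gives a quick estimate of fallout descent times: at terminal velocity, the drag 6πμrv balances the particle's weight (4/3)πr³ρg. A minimal sketch, where the particle density for fused sand and the air viscosity are assumed round figures, not values from this post:

```python
def settling_time(r, h, rho_p=2600.0, mu=1.8e-5, g=9.8):
    """Time (seconds) for a spherical fallout particle of radius r (m) and
    density rho_p (kg/m^3) to fall height h (m) at its Stokes terminal
    velocity, where 6*pi*mu*r*v = (4/3)*pi*r^3*rho_p*g gives
    v = 2*rho_p*g*r^2/(9*mu). Valid only at low Reynolds number
    (i.e. for small particles)."""
    v = 2.0 * rho_p * g * r**2 / (9.0 * mu)   # terminal velocity, m/s
    return h / v

# e.g. a 20-micrometre-radius fused-sand grain falling from 10 km altitude
# takes on the order of a day to reach the ground:
t = settling_time(r=20e-6, h=10e3)
```

This is why the visible, sand-like grains described above arrive within hours while still intensely radioactive, whereas much finer particles stay aloft long enough for most of their activity to decay.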

The ‘Force of sound’

The sound wave is longitudinal and has pressure variations. Half a cycle is compression (overpressure) and the other half cycle of a sound wave is underpressure (below ambient pressure). When a spherical sound wave goes outward, it exerts outward pressure which pushes on your eardrum to make the noises you hear. Therefore the sound wave has outward force F = PA, where P is the sound wave pressure and A is the area it acts on. When you read Rayleigh's textbook on 'sound physics' (or whatever dubious title it has), you see the fool fits a wave equation from transverse water waves to longitudinal waves, without noting that he is creating particle-wave duality by using a wave equation to describe the gross behaviour of air molecules (particles). Classical physics thus has even more wrong with it because of mathematical fudges than modern physics, but the point I'm making here is that sound has an outward force and an equal and opposite inward force following this. It is this oscillation which allows the sound wave to propagate instead of just dispersing like air blown out of your mouth.

Note the outward force and equal and opposite inward force. This is Newton's 3rd law. The same happens in explosions, except the outward force is then a short tall spike (due to air piling up against the discontinuity and going supersonic), while the inward force is a longer but lower pressure. A nuclear implosion bomb relies upon Newton's 3rd law for TNT surrounding a plutonium core to compress the plutonium. The same effect in the Higgs field surrounding outward going quarks produces an inward force which gives gravity, including the compression of the earth's radius (1/3)GM/c² = 1.5 mm (the contraction term effect in general relativity).
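As a quick arithmetic check on the contraction figure quoted above, (1/3)GM/c² for the Earth can be evaluated directly from the standard constants:

```python
# General-relativity contraction term for the Earth's radius, (1/3)*G*M/c^2.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s

contraction = G * M / (3.0 * c**2)   # metres; comes out near 1.5 mm
```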

Why not fit a wave equation to the group behaviour of particles (molecules in air) and talk of sound waves? Far easier than dealing with the fact that the sound wave has an outward pressure phase followed by an equal under-pressure phase, giving an outward force and an equal-and-opposite inward reaction which allows music to propagate. Nobody hears any music, so why should they worry about the physics? Certainly they can't hear any explosions, where the outward force has an equal and opposite reaction too, which in the case of the big bang tells us something about gravity.



UPDATE: copy of a comment to

http://backreaction.blogspot.com/2009/06/this-and-that.html

Thanks for this post! It always amazes me to see how waves interact. You'd intuitively expect two waves colliding to destroy each other, but instead they add together briefly while they superimpose, then emerge from the interaction as if nothing has happened.

Dr Dave S. Walton tried it with logic signals (TEM - transverse electromagnetic - waves) carried by a power transmission line like a piece of flex. Logic signals were sent in opposite directions through the same transmission line.

They behaved just like water surface waves. What's interesting is that when they overlapped, there was no electric drift current because there was (during the overlap) no gradient of electric field to cause electrons to drift. As a result, the average resistance decreased! (Resistance only occurs when you are having to do work by accelerating electrons against resistance from collisions with atoms.)

Another example is the reflection of a weak shock wave when it hits a surface. The reflected pressure is double the incident pressure, because the leading edge of the shock wave collides with itself at the instant it begins to reflect, thus doubling the pressure like the superposition of two similar waves travelling in opposite directions as they pass through one another. With strong shock waves, you get more than a doubling of pressure, because there is significant dynamic or wind pressure in strong shocks (q = ½ρu², where ρ is density and u is the particle velocity in the shock wave), and this gets stopped by a reflecting surface, the energy being converted into additional reflected overpressure.
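The transition from doubling (weak shocks) to much larger reflection factors (strong shocks) can be sketched with the classical normal-reflection relation for an ideal gas with γ = 1.4, as given in standard blast-wave treatments such as Glasstone and Dolan; the numerical inputs here are illustrative:

```python
def reflected_overpressure(dp, p0=101.325):
    """Normally reflected overpressure for an ideal-gas (gamma = 1.4) shock:
    dp_r = 2*dp*(7*p0 + 4*dp)/(7*p0 + dp), with dp (incident overpressure)
    and p0 (ambient pressure) in the same units, kPa here.
    Tends to 2*dp for weak shocks and 8*dp for very strong shocks."""
    return 2.0 * dp * (7.0 * p0 + 4.0 * dp) / (7.0 * p0 + dp)

weak = reflected_overpressure(10.0)      # weak shock: factor just over 2
strong = reflected_overpressure(7000.0)  # strong shock: factor approaching 8
```

The weak-shock limit is exactly the superposition doubling described above; the extra reflection at high overpressure is the stopped dynamic pressure q showing up as additional static pressure.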
