Peace through practical, proved civil defence for credible war deterrence
  • Credible nuclear deterrence of invasions and conventional wars reduces the risk of large nuclear wars occurring through the escalation of conventional wars. Contrary to irrational, pseudo-scientific propaganda, the number of nuclear weapons is far smaller than the millions of conventional weapons used in large wars, and the correct scaling shows that the overall effects are similar, not massively different as often claimed for political propaganda by enemies of peace. Furthermore, the greater time delay of effects from nuclear weapons over the damaged area increases the efficiency of cheap civil defence countermeasures, as compared to conventional weapons. In conclusion, credible nuclear deterrence of conventional war offers a beautiful opportunity to create a peaceful world, free from fear-peddling, ranting dictators. The only opposition you will meet comes from authoritarian-obsessed, fear-peddling myth makers. If they can't tell the truth and face the facts, why listen to them? Please see our post on the need to deter not only direct threats of nuclear attack but also the conventional wars and invasions that can escalate into nuclear wars (as proved by the use of nuclear weapons in WWII, for example, which were developed during the war itself and did not trigger or provoke it), linked here, here, here, and here, here, here, and the true scaling-law equivalence between a few thousand nuclear weapons and the several million tons of small conventional weapons in a non-nuclear world war, as proved by our post summarising key points in Herman Kahn's much-abused call for credible deterrence, On Thermonuclear War, linked here. Peace comes through tested, proved and practical declassified countermeasures against the effects of nuclear weapons, chemical weapons and conventional weapons. 
Credible deterrence to end invasions and wars comes through simple, effective defences against invasion, such as low-yield tactical weapons and walls, together with civil defence against collateral damage. Peace comes through discussion of the facts, as opposed to the inaccurate, misleading lies of the "disarm or be annihilated" variety of political dogma, which are designed to exploit fear in order to close down criticism of errors in mainstream orthodoxy. In particular, please see the post linked here on the EMP results from an actual Russian 300 kt test at 290 km altitude over unwarned civilian infrastructure in Kazakhstan on 22 October 1962, which caused no injuries or deaths whatsoever (contrary to all Jeremy Corbyn and CND style lying propaganda that any use of nuclear weapons near civilians would automatically kill millions), but which shut down communications and power supply lines! This is not secret, but it does not make newspaper headlines to debunk CND-style dogmas on the alleged incredibility of nuclear deterrence.

  • Hiroshima's air raid shelters were unoccupied because Japanese Army officers were having breakfast when B29s were detected far away, says Yoshie Oka, the operator of the Hiroshima air raid sirens on 6 August 1945...

  • In a sample of 1,881 burns cases in Hiroshima, only 17 (or 0.9 percent) were due to ignited clothing and 15 (or 0.7%) were due to the firestorm flames...


  • Wednesday, March 29, 2006

    Physical understanding of the blast wave and cratering

    (This post is being revised, corrected and updated as of 8 August 2009. Greek symbols for density, Pi, etc., will just appear as p in some browsers which do not support the character sets. The page displays correctly in Internet Explorer 7.)

    ABOVE: peak overpressures in psi (pounds/sq. inch; 1 psi = 6.9 kilopascals, kPa) with distances scaled by the cube-root of yield to a standard reference total yield of 1 kiloton. All tests shown are surface bursts, 1 kt to 14.8 Mt, which have an effective blast yield of about 1.68 times that of a free air burst (an air burst in sea-level air well away from any solid reflecting surface). Data are from WT-934 (1959), page 29, and have been scaled to 1 atmosphere ambient air pressure and 20 C ambient air temperature.

    A shock wave is caused by the rapid release of either compressed fluid or energy, which explosively heats and compresses fluid. A 'blast wave' is a shock wave in air: a compressed shock front accompanied by a blast of outward wind pressure. The shock front has an abrupt pressure rise because the air at the front is travelling into cold air, which reduces its speed, while the hot air inside the shock front moves out faster, catching up with it to converge in a wall of compressed air. Within this overpressure region (the shock front), wind travels outward from the explosion, but within the inner area of low pressure the wind blows in the opposite direction, towards ground zero, allowing air to return to the partial vacuum in the middle. At any fixed location, the blast first blows outward during the overpressure phase, and then reverses and blows inward at a lower speed but for a longer duration during the 'suction' phase. Overpressure, p, acts in all directions within the shock front and is defined as the excess pressure above normal atmospheric pressure (on average 101 kPa or 14.7 pounds per square inch at sea level). Dynamic pressure, q, acts only in the direction of the outward or reversed blast winds accompanying the shock wave, and is the wind pressure, exactly equivalent to a gust of wind with the same velocity and duration.

    The blast wave must engulf, heat and compress all of the air that it encounters as a result of its supersonic, spherically divergent expansion. Consequently, its energy is continuously being distributed over a larger mass of air, which rapidly reduces the energy available per kilogram of air, so the overpressure drops rapidly. Some energy is lost in surface bursts in forming a crater and melting a thin layer of surface sand by conduction and radiation. Initially the shock front also loses energy by the emission of thermal radiation to large distances. When the blast wave hits an object, the compressed shock front exerts an all-round, crushing-type overpressure, while the outward blast wind contributes a hammer blow that adds to the overpressure, followed by wind drag on roof materials, vehicles, and standing people. The total force exerted by the blast is equal to pressure multiplied by exposed surface area, but if the object is sufficiently rigid to actually stop and reflect the shock wave, the wave collides with itself while being reflected, reducing its duration but increasing its peak pressure. P. H. Hugoniot in 1887 derived the basic equations governing the properties of a gaseous shock wave driven by a piston: the relationships between density, pressure and velocity across the front. Lord Kelvin later introduced the concept of 'impulse' (the time-integrated pressure of a fluid disturbance) while working on vortex atom theory.

    The peak pressure in the air blast wave has 4 contributions: the ambient pressure, the isothermal sphere, the shock front and the sonic wave. These are represented by terms including the factors P0, 1/R^3, 1/R^2, and 1/R, respectively, where P0 is the ambient (normal) air pressure at the altitude of interest and R is the distance from the explosion. The equation of state for air gives the base equation for the total pressure, P = (γ − 1)E/V, where γ = 1.4 is the ratio of specific heat capacities of air (at high temperatures it can drop to 1.2 owing to the vibration energy of molecules, while molecular dissociation into atoms increases it towards 1.67, the monatomic gas value; these two offsetting effects keep it near 1.4), E is the total blast energy and V is the blast wave volume. Dimensional analysis then gives a generalised summation which automatically includes all four of the separate blast wave terms just discussed:

    P = Σ {[(γ − 1)E/V]^{n/3} P0^{1 − (n/3)}}, where the summation is over n = 0, 1, 2, and 3.

    For a free air burst, V = (4/3)πR^3, so for γ = 1.4, R in km, and blast yield X kilotons:

    P = P0 + (0.737X^{1/3}P0^{2/3}/R) + (0.543X^{2/3}P0^{1/3}/R^2) + (0.400X/R^3) kilopascals (kPa).

    For high altitude bursts, the air pressure at altitude H km is P0 = 101e^{−H/6.9} kPa. For sea level air, P0 = 101 kPa, so the peak overpressure, p = P − P0, is:

    p = (16.0X^{1/3}/R) + (2.53X^{2/3}/R^2) + (0.400X/R^3) kPa
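    The sea level free air burst formula above can be checked numerically; the sketch below (function names are mine, not from any manual) also includes the exponential-atmosphere ambient pressure and demonstrates the cube-root scaling law built into the formula.

```python
import math

def ambient_pressure_kpa(altitude_km):
    """Ambient air pressure at altitude H km: P0 = 101*exp(-H/6.9) kPa."""
    return 101.0 * math.exp(-altitude_km / 6.9)

def free_air_peak_overpressure_kpa(x_kt, r_km):
    """Sea-level free air burst peak overpressure for blast yield X kt at R km:
    p = 16.0 X^(1/3)/R + 2.53 X^(2/3)/R^2 + 0.400 X/R^3 kPa."""
    return (16.0 * x_kt ** (1 / 3) / r_km
            + 2.53 * x_kt ** (2 / 3) / r_km ** 2
            + 0.400 * x_kt / r_km ** 3)
```

    Because each term scales as a power of X^{1/3}/R, multiplying the yield by 8 while doubling the distance leaves the predicted overpressure unchanged, which is the cube-root scaling law.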

    For direct comparison, the peak overpressure graph for American sea level free air bursts (DNA-EM-1, 1981, and The Effects of Nuclear Weapons, 1977, Fig. 3.72) implies:

    p = (3.55W^{1/3}/R) + (2.00W^{2/3}/R^2) + (0.387W/R^3) kPa,

    where W is the total weapon yield in kilotons. In deriving this formula, we produced fits to both the surface burst and free air burst curves, and averaged them to find an effective yield ratio of 1.68 for surface bursts relative to free air bursts (due to reflection by the ground in a surface burst, which results in a hemispherical blast with nearly double the energy density of a free air burst, less some energy lost to surface interaction effects such as melting the surface layer of sand into fused fallout particles, ground shock and cratering). This comparison of theory and measurement shows close agreement for the 1/R^2 and 1/R^3 (high overpressure) terms, where the implied blast yield fractions are 0.703 and 0.968, respectively. The fraction of the explosion energy in blast is highest at high overpressures, where the shock front has not yet lost much energy by radiation or degradation; but for the weak or sonic blast wave (the 1/R term) the fraction is only 0.0109, owing to these losses. The American book, The Effects of Nuclear Weapons (1957-77 editions), gives a specific figure of 50% for the sea level blast yield, but this time-independent generalisation is a totally misleading fiction. It is obtained by the editors of the American book by subtracting the final thermal and nuclear radiation yields from 100%, neglecting blast energy that is dissipated with time in crater excavation, fallout particle melting, and the massive cloud formation. Initially, almost all of the internal energy of the fireball goes into the blast wave, but after the thermal radiation pulse the blast or sonic wave eventually contains only 1.09% of the energy.
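    The quoted blast-yield fractions can be recovered directly from the two sets of coefficients. The n-th term's coefficient scales as (blast yield)^{n/3}, so a fit expressed in total yield W relates to one expressed in blast yield X by a factor f^{n/3}, where f is the blast-yield fraction; hence f = (c_W/c_X)^{3/n}. A minimal sketch (variable names are mine):

```python
# Each term of the free air burst formula scales as (blast yield)^(n/3), so
# comparing a coefficient fitted to total yield W against the corresponding
# coefficient fitted to blast yield X gives the blast-yield fraction:
#   f = (c_total / c_blast) ** (3/n).
def blast_fraction(c_total_yield, c_blast_yield, n):
    return (c_total_yield / c_blast_yield) ** (3 / n)

f_sonic  = blast_fraction(3.55, 16.0, n=1)    # 1/R (weak/sonic wave) term
f_mid    = blast_fraction(2.00, 2.53, n=2)    # 1/R^2 (shock front) term
f_strong = blast_fraction(0.387, 0.400, n=3)  # 1/R^3 (high overpressure) term
```

    These evaluate to approximately 0.0109, 0.703 and 0.968, matching the fractions quoted in the paragraph above.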

    Note that the revised EM-1 manual and its summary by John A. Northrop, Handbook of Nuclear Weapons Effects (DSWA, 1996, p. 9), give a formula for free air bursts which differs from that given above. Northrop's compilation and Charles J. Bridgman's Introduction to the Physics of Nuclear Weapons Effects (DTRA, 2001, p. 285) give, for a free air burst:

    p = (0.304W/R^3) + (1.13W^{2/3}/R^2) + (1.00W^{1/3}/[RA]) kPa,

    where W is the total weapon yield in kilotons, R is in km, and A = {ln[(R/445.52) + 3 exp(−(R/445.52)^{1/2}/3)]}^{1/2}. Bridgman gives a graph of peak overpressures (Fig. 7-6 on p. 285) showing 500 kPa peak overpressure at 100 m, 30 kPa at 400 m, and 8 kPa at 1 km from a 1 kt (total yield) nuclear free air burst. [1 psi = 6.9 kPa.] He also reproduces the curves for dynamic and overpressure positive phase durations, Mach stem height, etc., from chapter 2 of Dolan's DNA-EM-1.
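    The Northrop/Bridgman fit can be transcribed directly for comparison with the other formulas (the function name is mine, and the formula is implemented exactly as printed, with R in km as stated; this is a sketch for experimentation, not a definitive implementation):

```python
import math

def northrop_free_air_overpressure_kpa(w_kt, r_km):
    """Peak overpressure from the Northrop/Bridgman free air burst fit,
    transcribed as printed: W in kt (total yield), R in km."""
    u = r_km / 445.52
    a = math.sqrt(math.log(u + 3 * math.exp(-math.sqrt(u) / 3)))
    return (0.304 * w_kt / r_km ** 3
            + 1.13 * w_kt ** (2 / 3) / r_km ** 2
            + 1.00 * w_kt ** (1 / 3) / (r_km * a))
```

    Near the burst the 1/R^3 term dominates, while at long range the 1/(RA) term takes over, with A close to unity for the distances of practical interest.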

    An alternative, simpler equation summarising data on free air burst peak overpressures was presented in 1957 by the U.K. Home Office Scientific Advisory Branch physicist Frank H. Pavry in his paper 'Blast from Nuclear Weapons', in U.K. National Archives document HO 228/21, Report of a course given to university physics lecturers at the Civil Defence Staff College, 8-11 July 1957:

    P = (2640/R)(1 + 500/R)^{2.4} psi,

    where R is distance in feet (notice that 2640 feet is half a statute mile, 5280 feet). The numerical constants in this formula were only approximate in 1957, but it may be possible to update them with modern data.
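    As a quick check of its limiting behaviour, the Pavry fit can be transcribed directly (the function name is mine, and the exponent 2.4 is taken as printed). At large R the bracket tends to 1, leaving a purely sonic 2640/R psi decay:

```python
def pavry_overpressure_psi(r_ft):
    """Pavry (1957) free air burst peak overpressure fit, as printed:
    P = (2640/R) * (1 + 500/R)**2.4 psi, with R in feet."""
    return (2640.0 / r_ft) * (1.0 + 500.0 / r_ft) ** 2.4
```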

    The low energy in the blast wave at long ranges is consistent with the physically accurate cloud rise model in Hillyer G. Norment's report DELFIC: Department of Defense Fallout Prediction System, Volume I - Fundamentals, Atmospheric Sciences Associates, DNA-5159F-1 (1979), which finds that to account for the observed mushroom cloud expansion, 45% of the bomb yield must end up as hot air in and surrounding the fireball (dumped from the back of the blast wave), producing the convective mushroom cloud phenomena. This 45% figure is mainly blast wave energy left behind by the blast wave in the air outside the visibly glowing fireball region. If the blast wave energy remained in the shock front indefinitely, there would be no mushroom cloud, because the vast amount of energy needed to cause it wouldn't be available! That doesn't happen: the blast wave irreversibly heats the air it engulfs, and continually dumps warmed air from the back of the blast wave, which moves back towards ground zero into the near vacuum, causing the reversed wind direction (suction) phase while the shock front is still moving outwards. The energy of the heated air forming these afterwinds is the main contributor to the mushroom cloud rise energy.

    In a land surface burst, the blast volume for any radius is only half that of a free air burst, because the blast is confined to a hemisphere rather than a sphere. Therefore, over an ideal, rigid, perfectly reflecting surface, the blast would be identical to that from a free air burst of twice the energy, 2W. The effective yield of a surface burst on land or water, as determined from 70 accurate measurements at 7 American tests conducted in Nevada and at Eniwetok and Bikini Atolls from 1951-4 (Sugar, Mike, Bravo, Romeo, Union, Yankee and Nectar), for scaled distances equivalent to 55-300 m from a 1 kiloton burst, is actually only 1.68W. Hence, about 16% of the energy of a surface burst goes into the ground/water shock wave, the crater, and melting fallout or vaporising seawater: if a sea level air burst has an effective blast yield of 50%, a surface burst has a blast yield of only 50 × (1.68/2) = 42%.

    Close to the detonation, the fireball arrival time is theoretically proportional to (radius, r)^{5/2}, but at great distances the blast arrival time is equal to (r/c) − (R/c), where R is the thickness of the blast wave or head start and c is the sound velocity (this incorporates the boost the blast wave gets early on while it is supersonic). Using 1959 weapon test report WT-934 data from the Sugar, Mike and Operation Castle surface burst nuclear tests, with cube-root scaling of both the arrival times and distances [cube-root scaling is as (yield)^{1/3}] to 1 kt, we combine both rules to obtain a generalised, universal blast arrival time formula for 1-kt surface bursts:

    t = r / [0.340 + (0.0350/r^{3/2}) + (0.0622/r)] seconds,

    where r is in km and the term 0.340 is the speed of sound in km/s. To use this equation for other yields (or for air bursts) it is just necessary to scale both the time and distance down to a 1-kt surface burst blast equivalent using the cube-root scaling law.
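    The arrival time fit and the cube-root scaling rule just described combine into one short function (a sketch; the function name is mine):

```python
def arrival_time_s(r_km, w_kt=1.0):
    """Blast arrival time for a surface burst, from the fit
    t = r/[0.340 + 0.0350/r^1.5 + 0.0622/r] seconds for 1 kt,
    with cube-root scaling of both time and distance for other yields."""
    s = w_kt ** (1 / 3)       # cube-root scale factor
    r = r_km / s              # scale distance down to 1-kt equivalent
    t = r / (0.340 + 0.0350 / r ** 1.5 + 0.0622 / r)
    return t * s              # scale the time back up
```

    Note that r/t, the average shock speed, is far above the 0.340 km/s sound speed close in, and tends towards it at long range.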

    When a nuclear weapon is air burst, the blast wave along the ground is modified by the surface reflection (Nevada desert terrain reflects 68% of the blast energy), in which the reflected blast moves through air already heated by the direct blast, so it moves faster and merges with it. The total energy of this merged blast wave will therefore be 1 + 0.68 = 1.68 times that in a free air burst at a similar distance in infinite air. Because the range of any blast pressure is proportional to the cube-root of the energy, this means that the ranges of the merged blast wave (Mach stem) will be (1.68)^{1/3} = 1.19 times greater than for a free air burst. This increase was observed in ordinary TNT bursts; but in nuclear explosions there are two further factors of importance, first seen at the 1945 Trinity test. First, nuclear bursts emit thermal radiation that heats the surface material, in turn heating the surface air by convection and allowing the blast wave to travel faster along the ground at higher overpressure. Hence, British nuclear test measurements of overpressures made with sensors on the tops of towers gave lower readings than American instruments close to the ground. Second, thermal radiation explodes the silicate sand crystals on a desert surface, like exploding popcorn, creating a very hot cloud of dust about 2-3 m high, called the 'thermal layer'.

    The blast ‘precursor’ which was filmed around the fireball in the 1945 Trinity nuclear test was caused by thermal radiation pop-corning the desert sand into a cloud of hot gas through which the blast wave then moved faster than through cold air (because hot air adds more energy to the blast than cold air). The density of the dust loading in the precursor increased the fluid (air) inertia, reducing the peak overpressure but increasing wind (dynamic) pressure (which is proportional to density). American measurements on the precursor blast in Nevada tests Met, Priscilla, and Hood allowed development of a mathematical model in 1995 which includes thermal pop-corning (blow-off) of the desert surface, thermal layer growth, blast modification and the prediction of precursor effects on the waveforms of overpressure and dynamic pressure. This model was produced in secret for section 2 of Chapter 2 in Capabilities of Nuclear Weapons, EM-1: ‘Air Blast over Real (non-ideal) Surfaces’.

    When the blast travels through this layer it billows upwards to 30 m in height, and the overpressure is actually reduced to 67% of normal because the mass of dust loading increases the air's inertia. But the dynamic/drag pressure is increased several times, because it is proportional to the new, higher air density (including dust), and this dramatically increases the ranges of destruction to wind-drag sensitive targets! This occurs in surface bursts of over 30 kt yield and in air bursts within 240W^{1/3} m of silicate or coral sand, where W is yield in kt; precursors occurred over coral islands in the 14.8 Mt Bravo test of 1954. The maximum ground range to which precursors are observed in bursts over sandy ground is 350W^{1/3} m. No precursor has been observed over water or over ground covered in white smoke. Concrete, ice, snow, wet ground, and cities would generally reflect the thermal flash and not produce a thermal precursor. The precursor is most important at high overpressures, where the thermal heating effect is greatest: no precursor or blast pressure change occurs below 40 kPa peak overpressure. A precursor will reduce a predicted 70 kPa peak overpressure to 84% of that value, a predicted 85 kPa to 80%, 140 kPa to 75%, and predicted 210-3,500 kPa to 67% (Philip J. Dolan, 'Capabilities of Nuclear Weapons', Pentagon, DNA-EM-1, Fig. 2-21, 1981).

    In 1953, interest focussed on the increased drag damage to vehicles and wind-sensitive targets exposed to the precursor from the Grable test. In 1955 it was discovered at the Teapot Nevada tests that the temperature of the precursor dust cloud reached 250 C at 40 milliseconds after the arrival of the blast wave (U.S. weapon test report WT-1218). The hot precursor dust burned the skin of animals in an open shelter (which protected against thermal radiation) at 320 m from a 30-kt tower burst (report WT-1179). Japanese working in open tunnel shelters 90 m from ground zero at Nagasaki reported skin burns from the blast wind, although their overhead earth cover shielded out the radiation. At 250 C, skin needs exposure for 0.75 second to produce reddening, 1.5 seconds to produce blistering, and 2.3 seconds to cause charring (at 480 C, these exposure times are reduced by a factor of 10).

    Small rain or mist droplets (0.25 cm/hour rainfall rate) and fog droplets are evaporated by the warm blast wave, reducing the peak overpressure and overpressure duration each by about 5%. This was observed in TNT bomb tests in 1944 (Los Alamos report LA-217). Large droplets in heavy rainfall (at 1.3 cm/hour) are broken up by the blast before evaporating, which causes a 20% reduction in peak overpressure. This was observed when heavy rainfall occurred over part of Bikini Atoll during the 110 kt Koon nuclear test in 1954; comparison of peak overpressures on each side of ground zero indicated a 20% reduction due to localised heavy rain (report WT-905).

    Dr William Penney, who measured blasts from the early American nuclear tests and was test director during the many Australian-British tests at Monte Bello, Emu Field and Maralinga, published the results in 1970 (Phil. Trans. Roy. Soc. London, v. 266A, pp. 358-424): 'nuclear explosives cause the air near the ground to be warmed by heating through the heat flash.' This has two important implications that are ignored by the American publications on blast. First, since the heat flash scales more rapidly than the cube root of yield (which is used for blast), the thermal enhancement increases out of step (so test data from 30-kt bursts show more thermal enhancement than 1-kt tests). Second, Penney had blast gauges both at ground level and on poles 3 m above the ground at Maralinga, where the red desert soil readily absorbed the heat flash. The peak overpressures at ground level were significantly higher than at 3-m height. The average pressure causing the force loading and damage to a 10-m high building is therefore less than that measured at ground level.

    At 408 m from a 1-kt burst at 250-m altitude, Penney points out that his scaled data for a marked thermal layer effect (red desert soil) give 58 kPa, whereas the American government manual gave 77 kPa for 'nearly ideal' conditions, over 30% higher. Penney's data for no thermal effect gave 71 kPa, indicating that the American test data had been scaled down from a higher yield than the British test, where thermal heating was greater. Ignoring thermal flash absorption at the short ranges of interest, the thermal energy received scales in proportion to W/r^2, where W is yield and r is distance, while blast ranges scale as W^{1/3}, so the thermal energy received at any given scaled blast range varies as W/(W^{1/3})^2 = W^{1/3}. Therefore, when serious thermal heating occurs, the peak overpressures scale up with yield in addition to distances. There is little effect in a surface burst (unless the fireball is very large), because the thermal radiation is then emitted parallel to the ground and is not absorbed by it, and the American high yield tests occurred over transparent water, which did not heat up at the surface. A 10-Mt air burst over dark coloured ground would deposit 10 times as much thermal energy on the ground at the scaled blast ranges measured in 10-kt tests in America and Australia, so there would be much greater thermal enhancement of the blast ranges.

    In addition to this fact about blast data analysis from nuclear tests, there is another point made by Penney. The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage to large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to the damage caused, as compared with desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: 'clear evidence that the blast was less than it would have been from an explosion over an open site.'

    A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m, as shown by empty petrol can damage: 'clear evidence of reduction of blast by the damage caused…' If each house destroyed in a radial line uses 1% of the blast energy, then after 200 houses are destroyed, the blast will be down to just 0.99^200 = 0.13 of what it was before, so 87% of the blast energy will have been lost, in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can't 'have your cake and eat it': either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air burst were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to the charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8.01 am. The heat flash can't set wood alight directly, as proved in Nevada tests: it just scorches wood unless it is painted white. You need intermediaries like paper litter and trash in a line-of-sight from the fireball to get direct ignition, as proved by the clarity of the 'shadowing' remaining afterwards (such as the scorch protection of tarmac and dark paint by people who were flash burned). In general, each building will absorb a roughly constant fraction of the blast energy incident upon it (ranging from about 1% for wood frame houses to about 5% for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures. At low pressures, the building just vibrates slightly. So the percentage of the blast energy incident on the building which is absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure. 
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
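The 0.99^200 arithmetic above generalises to a one-line attenuation model (a sketch; the function name is mine, and the 1% and 5% per-building figures are the in-text estimates for wood frame and masonry respectively):

```python
def blast_energy_remaining(n_buildings, loss_fraction=0.01):
    """Fraction of blast energy surviving after destroying n buildings in a
    radial line, each absorbing loss_fraction of the incident energy.
    This exponential attenuation is in addition to normal divergence."""
    return (1.0 - loss_fraction) ** n_buildings
```

For 200 wood frame houses this gives about 0.13, as in the text; at 5% per masonry building the attenuation over the same distance is orders of magnitude stronger.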

    Mathematical representation of ideal pressure-time curves

    In general, Dr Brode's empirical and semi-empirical formulae are extremely useful, but there are problems when it comes to the pressure-time form factors. Brode uses the sum of three exponential terms to represent the family of pressure-time curves in the positive (compression) phase at a location receiving any particular peak overpressure. The issue we have with Brode is that the analytically correct physical theory gives a much simpler formula, which illustrates the difference between reliance on computers and reliance on physical understanding. The time-form graphs given by Brode in his 1968 article do not agree with the formulae he provides or with Glasstone and Dolan 1977, although they do agree with Glasstone 1962/4.

    The general form of Brode's formula is Pt = Pmax (1 − t/Dp+)(x e^{−at} + y e^{−bt} + z e^{−ct}). The decay constants a, b and c are themselves functions of the peak overpressure, so it is very complex. Pt is the time-varying overpressure, Pmax is the peak overpressure, t is the time measured from blast arrival (not from detonation time!), and Dp+ is the positive phase overpressure duration.

    Now consider the actual physics. The time decay of overpressure at a fixed location as the blast wave passes in a shock-tube (a long, uniform, air-filled cylinder), where the blast is unable to diverge sideways as it propagates, is Pt = Pmax (1 − t/Dp+)e^{−at}. In a real air burst, however, the pressure additionally decays by divergence with time, since the air has another dimension in which to fall off (sideways). This transverse dimension is the circumference C, which is proportional to the radius r of the blast by the simple formula C = 2πr. In other words, as the blast sphere gets bigger, the pressure falls everywhere because there is a greater volume for the air to fill. We are interested in times, not radii or circumferences, but the blast radius is approximately proportional to the time after detonation. Hence, we can adapt the shock-tube blast decay formula for the additional fall caused by sideways divergence of the expanding blast by dividing it by a normalised function of time and pressure (unity is added in the denominator because t is time after blast arrival, not time after explosion):

    Pt = Pmax [(1 − t/Dp+)e^{−at}] / [1 + 1.6(Pmax/Po)(t/Dp+)]

    This formula appears to model the pressure-time curves accurately for all peak overpressures (a ~ 0 if just considering the positive or compression phase; Po is the ambient pressure). The fall of the wind (dynamic) pressure, q, is related to this decay rate of overpressure by standard relationships discussed by Glasstone and Dolan for the case γ = 1.4: qt = q (Pt/Pmax)^2 [(Pmax + 7Po)/(Pt + 7Po)].
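    The divergence-corrected waveform and the Glasstone-Dolan dynamic pressure relation can be combined into a small sketch (function names are mine; pressures are in kPa, times in seconds):

```python
import math

def overpressure_kpa(t_s, p_max_kpa, d_pos_s, p0_kpa=101.0, a=0.0):
    """Overpressure at time t after blast arrival (0 <= t <= D+):
    Pt = Pmax*(1 - t/D)*exp(-a*t) / [1 + 1.6*(Pmax/P0)*(t/D)]."""
    f = t_s / d_pos_s
    return (p_max_kpa * (1.0 - f) * math.exp(-a * t_s)
            / (1.0 + 1.6 * (p_max_kpa / p0_kpa) * f))

def dynamic_pressure_kpa(t_s, p_max_kpa, q_peak_kpa, d_pos_s, p0_kpa=101.0):
    """Glasstone & Dolan relation for gamma = 1.4:
    qt = q*(Pt/Pmax)^2 * (Pmax + 7*P0)/(Pt + 7*P0)."""
    pt = overpressure_kpa(t_s, p_max_kpa, d_pos_s, p0_kpa)
    return (q_peak_kpa * (pt / p_max_kpa) ** 2
            * (p_max_kpa + 7.0 * p0_kpa) / (pt + 7.0 * p0_kpa))
```

    The (Pmax/Po) factor in the denominator makes the decay faster than linear at high peak overpressures, reproducing the concave shape of the measured curves.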

    Cratering problems

    From an earlier post:

    ‘Data on the coral craters are incorporated into empirical formulas used to predict the size and shape of nuclear craters. These formulas, we now believe, greatly overestimate surface burst effectiveness in typical continental geologies ... coral is saturated, highly porous, and permeable ... When the coral is dry, it transmits shocks poorly. The crushing and collapse of its pores attenuate the shock rapidly with distance ... Pores filled with water transmit the shock better than air-filled pores, so the shock travels with less attenuation and can damage large volumes of coral far from the source.’ – L.G. Margolin, et al., Computer Simulation of Nuclear Weapons Effects, Lawrence Livermore National Laboratory, UCRL-98438 Preprint, 25 March 1988, p. 5.

    The latest crater scaling laws are described in the report:
    R. M. Schmidt, K. R. Housen and K.A. Holsapple, Gravity Effects in Cratering, DNA-TR-86-182, Defense Nuclear Agency, Washington D.C., 1988.

    In the range of 1 kt – 10 Mt there is a transition from cube-root to fourth-root scaling. The average scaling law W^{0.3}, suggested by Nevada soil and Pacific coral atoll data and used by Glasstone and Dolan, was shown to be wrong in 1987, because the empirical data were too limited (the biggest Nevada cratering test was Sedan, 104 kt) and the empirical W^{0.3} law ignored energy conservation at high yields, where gravity effects kick in and curtail the sizes predicted by hydrodynamic cratering physics.

    The W^{0.3} scaling law used in Glasstone and Dolan 1977 is false because it violates the conservation of energy used by the explosion in ejecting massive amounts of debris from the crater against gravity. The yield-dependent scaling for crater dimensions (radius and depth) transitions from cube-root of yield scaling at low yields (below 1 kt) to fourth-root at high yields, because of gravity. At low yields, the fraction of the bomb energy used to physically dump ejecta out of the crater against gravity (producing the surrounding lip and debris) is trivial compared to the hydrodynamic energy used to physically break up the soil. But at higher yields, the crater is deep, so a significant amount of bomb energy must be employed to do work excavating earth against gravity.

    Consider the energy utilisation in cratering. The total energy expended in cratering is the sum of the hydrodynamic energy and the gravitational work energy. The hydrodynamic term is proportional to the cube of the crater radius or depth, as shown by the reliability of cube-root scaling at subkiloton yields: the energy needed to hydrodynamically excavate a unit volume of soil is a constant, so the energy required for hydrodynamic pulverization of crater mass m is E = mX, where X is the number of Joules needed for the hydrodynamic excavation of 1 kg of soil.

    But where the crater is deep, in bigger explosions, the gravitational work energy E = mgh needed to eject crater mass m the vertical distance h upwards out of the hole to the lip, against gravitational acceleration g (9.8 m/s^2), becomes larger than the hydrodynamic energy needed to merely break up the matter, so the gravity work effect then governs the crater scaling law. The total energy used in crater formation is the sum of two terms, hydrodynamic and gravitational: E = (mX) + (mgh).

    The (mX)-term is proportional to the cube of the crater depth (because m is the product of volume and density, and volume is proportional to depth-cubed if the crater radius/depth ratio is constant), while the (mgh)-term is proportional to the fourth power of the crater depth, because m is proportional to the density times the depth cubed (if the depth/radius ratio is constant) and h is always directly proportional to the crater depth (h is roughly half the crater depth), so the product mgh is proportional to the product of depth cubed and depth, i.e. to the fourth power of crater depth. So for bigger craters and bigger bomb yields, a larger fraction of the total cratering energy gets used to overcome gravity, causing the gravity term to predominate and the crater size to scale at most as W^{1/4} at high yields. This makes the crater size scaling law transition from cube-root (W^{1/3}) at low yields to fourth-root (W^{1/4}) at higher yields!
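    The cube-root to fourth-root transition can be checked numerically. Below is a minimal Python sketch (the coefficients A and B are arbitrary illustrative values, not fitted to any real soil) which solves a toy energy balance E = A*D^3 + B*D^4 for the crater depth D and prints the local scaling exponent n in D ~ E^n:

```python
import math

# Toy energy balance from the discussion above: E = A*D**3 + B*D**4,
# where the A*D**3 term stands for the hydrodynamic (mX) energy and
# the B*D**4 term for the gravitational (mgh) work.  A and B are
# arbitrary illustrative coefficients, not fitted to any real soil.
A, B = 1.0, 0.01

def depth_for_energy(e):
    """Solve e = A*D^3 + B*D^4 for D by bisection (monotonic in D)."""
    lo, hi = 0.0, 1.0e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if A * mid ** 3 + B * mid ** 4 < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def local_exponent(e):
    """Local n in D ~ E^n, found from a small logarithmic step in energy."""
    d1, d2 = depth_for_energy(e), depth_for_energy(e * 1.01)
    return math.log(d2 / d1) / math.log(1.01)

print(local_exponent(1e-3))   # ~0.333: hydrodynamic term dominates (cube-root)
print(local_exponent(1e12))   # ~0.25: gravity term dominates (fourth-root)
```

    The exponent drifts smoothly from 1/3 down towards 1/4 as the gravity term takes over, which is exactly the transition described above.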

    It’s fascinating that, despite the best scientific brains working on nuclear weapons effects for many decades - the Manhattan Project focussed a large amount of effort on the problem, and utilised the top physicists who had developed quantum mechanics and nuclear physics, and people like Bethe were still writing secret papers on fireball effects into the 1960s - such fundamental physical effects were simply ignored for decades. This was due to the restricted number of people working on the problem due to secrecy, and maybe some kind of ‘groupthink’ (psychological peer-pressure): not to upset colleagues by ‘rocking the boat’ with too much freethinking, radical questions, innovative ideas.

    The equation E = mgh isn't a speculative theory requiring nuclear tests to confirm it; it's a basic physical fact that can be experimentally proved in any physics laboratory: you can easily measure the energy needed to raise a mass (the amount of electric energy supplied to an electric motor while it winches up a standard 1 kg mass is a simple example of the kind of physical fact involved). In trying to analyse the effects of nuclear weapons, false approximations were sometimes used, which then became embedded as a doctrine or faith about the ‘correct’ way to approach or analyze a particular problem. People, when questioned about a fundamental belief in such analysis, are then tempted to respond dogmatically by simply referring to what the ‘consensus’ is, as if accepted dogmatic religious-style authority is somehow a substitute for science, which is of course the unceasing need to keep asking probing questions, checking factual details for errors, omissions and misunderstandings, and forever searching for a deeper understanding of nature.

    For example, in the case of a 10 Mt surface burst on dry soil, the 1957, 1962, and 1964 editions of Glasstone's Effects of Nuclear Weapons predicted a crater radius of 414 metres (the 10 Mt Mike test in 1952 had a radius of over twice that size, but that was due to the water-saturated porous coral of the island and surrounding reef, which is crushed very easily by the shock wave at high overpressures). This was reduced to 295 metres in Glasstone and Dolan, 1977, when the scaling law was changed from the cube-root to the 0.3 power of yield. The 1981 revision of Dolan's DNA-EM-1 brings it down to 145 metres, because of the tiny amount of energy which goes into the bomb case shock for a modern, efficient 10 Mt class thermonuclear warhead (Brode and Bjork discovered this bomb design effect on cratering in 1960; high-yield efficient weapons release over 80% of their yield as X-rays which are inefficient at cratering because they just cause ablation of the soil below the bomb, creating a shock wave and some compression, but far less cratering action than the dense bomb case shock wave produces in soil). Then in 1987, the introduction of gravity effects reduced the crater radius for a 10 Mt surface burst on dry soil to just 92 metres, only 22% of the figure believed up to 1964!

    ‘It is shown that the primary cause of cratering for such an explosion is not “airslap,” as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. [Hence the immense crater size from the 10 Mt liquid-deuterium Mike test in 1952 with its massive 82 ton steel casing shock is irrelevant to compact modern warheads which have lighter casings and are more efficient and produce smaller case shocks and thus smaller craters.]’ - H. L. Brode and R. L. Bjork, Cratering from a Megaton Surface Burst, RAND Corp., RM-2600, 1960.


    ‘Data on the coral craters are incorporated into empirical formulas used to predict the size and shape of nuclear craters. These formulas, we now believe, greatly overestimate surface burst effectiveness in typical continental geologies… coral is saturated, highly porous, and permeable ... When the coral is dry, it transmits shocks poorly. The crushing and collapse of its pores attenuate the shock rapidly with distance… Pores filled with water transmit the shock better than air-filled pores, so the shock travels with less attenuation and can damage large volumes of coral far from the source.’ – L.G. Margolin, et al., Computer Simulation of Nuclear Weapons Effects, Lawrence Livermore National Laboratory, UCRL-98438 Preprint, 25 March 1988, p. 5.

    As L.G. Margolin states (above), improved understanding of crater data from the 1952-8 nuclear tests at Bikini and Eniwetok Atolls led to a reduction of predicted crater sizes from land bursts. The massive crater, 950 m in radius and 50 m under water (53 m deep as measured from the original bomb position), created by the 10.4 Mt Mike shot at Eniwetok in 1952, occurred in the wet coral reef surrounding an island because fragile water-saturated coral is pulverised to sand by shock wave pressure. Revised editions of the U.S. Department of Defence books The Effects of Nuclear Weapons and the classified manual Capabilities of Nuclear Weapons (secret) diminished the crater radius for a surface burst on dry soil:


    In the 1957-64 editions, the crater radius was scaled by the well-proved TNT cratering ‘cube-root law’, W^{1/3} (which is now known to be valid where the work done by excavating against gravity is trivial in comparison to the work done in breaking up material). In the 1977 edition, the crater radius was scaled by less than the cube-root law, in fact the 0.3 power of yield, W^{0.3}, in an effort to fit the American nuclear test data. Unfortunately, as shown in the following table, the American nuclear test data are too patchy for proper extrapolation to be made for dry soil surface bursts, because the one high yield (104-kt Sedan) Nevada explosive-type crater burst was buried at a depth of 194 m. This changes two sensitive variables at the same time, preventing reliable extrapolation.

    *These bombs were at the bottom of the water tank, with 3 m of water above and around to increase the case-shock effect by X-ray absorption in water.

    **650 kg device mass. The Cactus crater was in 1979 used to inter (under a concrete dome) some 84,100 m3 of contaminated topsoil and World War II munitions debris on Eniwetok Atoll in the American clean-up and decontamination work. The initial average height of the lip of this crater was 3.35 m.


    During World War II, experiments showed that W kt of TNT detonated on dry soil produces a crater with a radius of 30W^{1/3} m. The radius of a spherical charge of W kt of TNT is 5.4W^{1/3} m, or 18% of the dry soil crater radius. The crater from a nuclear weapon is almost entirely due to the ‘case shock’, not the X-ray emission. This was discovered in the experiments with Koa and Seminole in water tanks to increase X-ray coupling to the ground (see table above). Nuclear weapons with yields below 2-kt (high mass to yield ratio, and low X-ray energy emission) which are surface burst produce craters similar to those from 23% of the TNT equivalent, while high-yield nuclear weapons (low mass to yield ratio, and high X-ray energy emission) which are surface burst produce craters similar to those from 2.9% of the TNT equivalent.
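    As a quick numerical check of the rule just stated, this Python sketch (function name mine) applies the WWII crater formula R = 30W^{1/3} m to the TNT-equivalent fractions quoted above, neglecting the gravity correction (which is legitimate at low yields):

```python
def crater_radius_m(w_kt, tnt_fraction):
    """Dry-soil crater radius from the WWII TNT rule R = 30*W^(1/3) metres,
    with the yield multiplied by the effective TNT-equivalent fraction
    (0.23 for low-yield/low X-ray weapons, 0.029 for high-yield weapons).
    Gravity correction neglected, so valid only at low yields."""
    return 30.0 * (tnt_fraction * w_kt) ** (1.0 / 3.0)

print(crater_radius_m(1.0, 0.23))    # ~18.4 m for a low X-ray 1 kt surface burst
print(crater_radius_m(1.0, 0.029))   # ~9.2 m with the usual high X-ray emission
```

    The 18.4 m result agrees with the R = 18.37 m deduced from the low X-ray test data later in this post.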

    *These sizes apply to low yield-to-mass ratio nuclear warheads that incur a low X-ray energy emission. These produce the greatest craters, because most of the energy is initially in the case-shock of the bomb, rather than in X-rays (see below). These radii should be corrected for X-ray emission and total yield by a multiplying factor which can reasonably be taken to be 1.41(fW)^{1/3}(1 + 1.82W^{1/4})^{-1/3}; see below for the derivation, including the gravitational effect at high yields. This factor is 1 for case-shock energy fraction f and total yield W kilotons both equal to 1. For pure fission warheads, f = 1. For a 1-megaton modern thermonuclear warhead, f = 1/8 because of the lower case-shock energy and higher proportion of energy in X-rays.

    **These sizes apply to a different mechanism of cratering; namely the crushing of porous coral by the shock wave, so simple ‘cube-root’ scaling applies here.

    About 72% of the energy entering the ground from a TNT explosion is used in cratering, while 28% is used in producing ground shock. The main ground shock from a surface burst nuclear explosion is derived from 7.5% of the total X-ray emission, which is absorbed by the ground within a radius of 3W^{1/3} m. The downward recoil of the ground in response to the explosive ablation of surface sand initiates a ground shock wave within a microsecond. The case shock of a nuclear weapon delivers 50% of its energy downward, which is all absorbed by the ground on account of its high density, and this is the principal crater mechanism. As debris is ejected from the crater in a cone shape, it absorbs some of the thermal radiation from the fireball within, and is melted, later becoming contaminated and being deposited as fallout. When nuclear weapons are detonated underground, the true TNT equivalent for a similar crater is 30% of the nuclear yield, because the X-rays cannot escape into the air, although a lot of energy is then wasted in melting and heating soil underground.

    The long delay in nuclear effects people understanding crater scaling laws properly has an interesting history. Although Galileo identified craters on the moon using his telescope in 1609, it was only when a couple of astronauts from Apollo 14 visited an allegedly ‘volcanic lava crater’ (crater Fra Mauro) on the moon that they discovered the ejecta from a shallow explosion crater, without any volcanic lava. The idea of explosive cratering had been falsely discounted because physicists had observed very few craters on the earth and many on the moon. They had falsely assumed that the reason for this was strong volcanism on the moon, when it is really due to impact craters having been mostly eroded by geological processes on earth, and mostly preserved on the moon!

    Early theoretical studies of crater formation, even using powerful computer simulations, employed explosion dynamics that ignored gravitation. Almost all of the books on the ‘effects of nuclear weapons’ in the public domain give nonsense for megaton surface bursts. It was only in 1986 that a full study of the effects of gravity in reducing crater sizes in the megaton range was performed: R. M. Schmidt, K. A. Holsapple, and K. R. Housen, ‘Gravity effects in cratering’, U.S. Department of Defense, Defense Nuclear Agency, report DNA-TR-86-182. In addition to secrecy issues on the details, the complexity of the unclassified portions of the new scaling procedures in this official treatment covers up the mechanisms, so here is a simple analytical explanation which is clearer:

    If the energy used in cratering is E, the cratered mass M, and the explosive energy needed to physically break up a unit mass of the soil under consideration is X, then the old equation E = MX (which implies that crater volume is directly proportional to bomb yield and hence crater depth and diameter scale as the cube-root of yield) is completely false, as it omits gravitational work energy needed to shift soil from the crater to the surrounding ground.

    This gravitational work energy is easy to estimate as ½MgD, where M is the mass excavated, g is gravitational acceleration (9.8 m/s^2), D is crater depth, and ½ is a rough approximation of the average fraction of the crater depth through which displaced soil is moved vertically against gravity in forming the crater.

    Hence the correct cratering energy is not E = MX but rather E = MX + ½MgD. For yields well below 1-kt, the second term (on the right hand side) of this expression, ½MgD, is insignificant compared to MX, so the volume excavated scales directly with yield; and since the volume is proportional to the cube of the average linear dimension, this means that the radius and depth both scale with the cube-root of yield for low yields.

    But for very large yields, the second term, ½MgD, becomes more important, and this use of energy to overcome gravity in excavation limits the energy available for explosive digging, so the linear dimensions then scale as only the fourth-root (or quarter-power) of yield. Surface burst craters are paraboloid in shape, so they have a volume of πR^2 D/2 = (π/2)(R/D)^2 D^3, where the ratio R/D is about 1.88 for a surface burst on dry soil. The mass of crater material is this volume multiplied by the density, ρ, of the soil material: M = ρπ(R/D)^2 D^3 /2.

    Hence, the total cratering energy is: E = MX + ½MgD = ρ(π/2)R^2 D(X + ½gD).
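    Plugging illustrative numbers into this energy balance shows why the gravity term is negligible at 1 kt. A Python sketch, assuming the dry-soil density of 1.70 kg/litre from the density list in this post, the R/D ratio of 1.88 quoted above, and the roughly 9.78 m deep 1-kt crater deduced later in this post:

```python
import math

RHO = 1700.0        # kg/m^3, typical dry soil (from the density list in this post)
G = 9.8             # m/s^2
R_OVER_D = 1.88     # crater radius/depth ratio for a dry-soil surface burst
KT_JOULES = 4.18e12 # energy released by 1 kt of TNT

def crater_mass(depth_m):
    """Paraboloid crater mass M = rho * pi * (R/D)^2 * D^3 / 2."""
    return RHO * math.pi * R_OVER_D ** 2 * depth_m ** 3 / 2.0

def gravity_work(depth_m):
    """Gravitational term of the energy balance, (1/2) * M * g * D."""
    return 0.5 * crater_mass(depth_m) * G * depth_m

# For the ~9.78 m deep crater of a 1 kt dry-soil surface burst quoted
# later in this post, gravitational work is a tiny fraction of the yield:
print(gravity_work(9.78) / KT_JOULES)   # ~1e-4, i.e. about 0.01% of 1 kt
```

    At 1 kt the ½MgD term is only about a hundredth of one percent of the yield, confirming that cube-root scaling holds at low yields.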

    The density of hard rock, soft rock and hard soil (for example granite, sandstone or basalt) is typically 2.65 kg/litre (2,650 kg per cubic metre), wet soil is around 2.10 kg/litre, water saturated coral reef is 2.02 kg/litre, typical dry soil is 1.70 kg/litre, Nevada desert is 1.60 kg/litre, lunar soil is 1.50 kg/litre (for analysis of the craters on the moon, where gravity is 6 times smaller than at the earth’s surface), and ice is 0.93 kg/litre.

    The change over from cube-root to quarter-root scaling with increasing yield means that old crater size estimates (for example, those in the well-known 1977 book by Glasstone and Dolan, U.S. Department of Defence, 1977, The Effects of Nuclear Weapons) are far too big in the megaton range, and need to be multiplied by a correction factor.

    The correction factor is easy to find. The purely explosive cratering energy efficiency, f, falls as gravity takes more energy, and is simply f = MX/(MX + ½MgD) = (1 + ½gD/X)^{-1}.

    Because gravity effects are small in the low and sub-kiloton range, the correct crater radius for small explosions indeed scales hydrodynamically, as R ~ E^{1/3}, so the 1-kt crater sizes in Glasstone and Dolan should be scaled by the correct factor R ~ W^{1/3}(1 + ½gD/X)^{-1/3} instead of by the empirical factor of R ~ W^{0.3} given by Glasstone and Dolan for Nevada explosion data of 1-100 kt. Glasstone and Dolan overestimates crater sizes by a large factor for megaton yield bursts. (The Americans had been misled by data from coral craters, since coral is porous and is simply crushed to sand by the shock wave, instead of being excavated explosively like other media.)

    In megaton surface bursts on wet soft rock, the depth D increases only as W^{1/4}, the ‘fourth root’ or ‘one-quarter power’ of yield scaling. Obviously for small craters, D scales as the cube-root of yield, but the correction factor (1 + ½gD/X)^{-1/3} is only significant for the megaton range anyway, so a good approximation is to put D proportional to the fourth-root of yield in this correction factor formula. The value of X for any soil material is a constant which may be easily calculated from the published crater sizes for a 1 kt surface burst, where gravity is not of importance (X is the energy used in cratering divided by the cratered mass, the former being determined by an energy balance for the explosion effects).
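    The calibration procedure just described can be sketched as follows. The 1-kt depth (9.78 m, from the test data later in this post) anchors the cube-root law, and the correction factor (1 + ½gD/X)^{-1/3} is applied self-consistently by fixed-point iteration. The value X = 2,000 J/kg is purely an illustrative assumption, not a sourced soil constant, so the outputs only illustrate the shape of the transition:

```python
G = 9.8          # m/s^2
D_1KT = 9.78     # m, 1 kt dry-soil surface burst crater depth (from this post)
X = 2000.0       # J/kg -- ASSUMED illustrative value, not a sourced constant

def crater_depth(w_kt):
    """Depth from D = D_1KT * W^(1/3) * (1 + g*D/(2*X))^(-1/3),
    solved self-consistently by fixed-point iteration."""
    d = D_1KT * w_kt ** (1.0 / 3.0)   # start from pure cube-root scaling
    for _ in range(100):              # the iteration converges quickly
        d = D_1KT * w_kt ** (1.0 / 3.0) * (1.0 + 0.5 * G * d / X) ** (-1.0 / 3.0)
    return d

print(crater_depth(1.0))       # ~9.7 m: gravity correction negligible at 1 kt
print(crater_depth(10000.0))   # 10 Mt: well below the cube-root value of ~211 m
```

    With a smaller assumed X the high-yield suppression becomes far more severe, which is the direction of the official 1987 revision.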

    The crater is made by two processes: the shock wave pulverisation of the soil (the energy required to do this is approximately proportional to the mass of soil pulverised) and the upward recoil of pulverised soil in reaction (by Newton’s 3rd law) to the downward push of the explosion (the energy required to do this excavation depends on gravitation, since it takes energy MgD to raise mass M a distance D upward against gravity acceleration g).

    Russian near surface burst nuclear test cratering data (update of 13 May 2007):

    The crater depth is defined as the final pit depth measured not from the top of the crater lip, but from the undisturbed surrounding ground. Likewise the crater radius is defined not as the radius to the top of the lip, but merely as the radius measured in the undisturbed ground. For the Australian-British 1.5-kt Buffalo-2 nuclear surface burst in dry soil at Maralinga in 1956, the crater lip height was 0.2D where D was the crater depth, the radius of the crater lip crest was 1.25R where R was the crater radius, and the radius of the ground rupture zone was 1.4R (these data are taken from U.K. test report AWRE-T37/57, 1957).

    The following table contains crater data for three near surface bursts of low yield. These fission weapons, with yields of 0.5-1.5 kilotons, were all of low X-ray emission, which means they produced twice the crater radius and depth that would occur if they had the usual X-ray emission of large warheads (which is about 80% of the yield). The lower the X-ray emission, the greater is the energy retained by the bomb casing. The case shock has high density, so it ploughs itself deeply into the ground and efficiently delivers kinetic energy for crater formation (X-rays merely heat up the surface, and any physical push is created by the recoil from surface ablation, which is feeble for crater production, as is the recoil due to the reflection of the air blast wave).


    These data when corrected for burst height to a true surface burst and corrected by the cube-root law to 1-kiloton yield (cube-root scaling is valid below 2-kt), suggest that for such low X-ray weapons, the crater size for a 1-kt surface burst on dry soil is R = 18.37 m, D = 9.784 m.

    (Note added 13 May 2007: attention should be given to including Russian nuclear test data for surface bursts - see table already given earlier in this post - in this analysis, to increase accuracy.)

    It would be useful to have some exact figure showing how much energy is used to produce the crater in these tests. Careful measurements were made of blast and thermal radiation at surface bursts, and these give approximate figures. The blast wave and thermal radiation energy is reduced significantly in low-yield surface bursts. In the Australian-British nuclear tests at Maralinga in 1956 (Operation Buffalo), the first shot (a 15-kt tower burst which produced an insignificant crater effect) had a measured blast yield of 7.7-kt of TNT equivalent, or 51% of the total yield, but the second shot (a 1.5-kt surface burst which produced a deep crater) had a measured blast yield of 0.46-kt of TNT equivalent, or 31% of the total yield. The difference is smaller for higher yield detonations. Computer simulations of crater formation indicated that in the 0.50-kt 1962 Nevada surface burst, Johnnie Boy, some 30% of the total kinetic energy of the explosion must have been used in crater formation and ground shock, as compared to only 3.75% in megaton surface bursts. For comparison, 67% of the energy of an iron meteor, striking dry soil at 20 km/s and normal incidence (90 degrees), becomes ground shock and crater formation.


    In the case of the 9 Mt missile warheads stockpiled in America to destroy Moscow’s bunkers in a nuclear war, in the mid 1980s it was suddenly realised that their cratering radius was only a small fraction of what had previously been believed. President Reagan’s official political response was to cover this up, keeping news of it from leaking to Moscow, and to press on with arms reduction talks. The Soviet Union collapsed before its leaders were aware of the impotence of American power for destroying the Soviet command centres in a nuclear war! (Soviet evaluation of nuclear test effects was even worse than American efforts! The Soviets could not even work out how to make a camera photograph the EMP on an oscilloscope without the dot saturating the film, which the Americans did with a circuit to keep the dot off-screen until just before detonation. Soviet 1962 ‘measurements’ of EMP thus relied on the distance sparks would jump, the rating of the fuses blown by current surges, and electric fires in power stations! As far as cratering goes, all of the Russian surface bursts were of kiloton-range yield, and not a single one had a megaton yield. At least America had some data for megaton shots on coral. The big Russian tests, up to 50 megatons, were air burst and produced no crater.)

    Oleg Penkovskiy, the famed spy, in 1965 betrayed the Russian secret underground command centre in the Ural Mountain range to America, but that is built under tundra. With missile delivery times falling and the chance of a sudden war increasing, the Russians also had a World War II shelter under a location near Kuybyshev, and there is a later one at Ramenki, but the leaders would not have time to reach such shelters from Moscow. So they then dug a very deep shelter with tunnels linked under the Kremlin in Moscow. When it was completed in 1982, the project manager (Chernenko, later general secretary) was awarded the Lenin Prize! The shelter is 200-300 metres underground with the well protected floors at the lowest levels and accommodates up to 10,000 key personnel. A 9-megaton surface burst causes severe underground destruction at 1.5 crater radii; for the ‘wet soft rock’ geological environment of the Moscow basin, this is 1.5 x 120 = 180 metres. You can see the problem! Even the biggest American warheads, 9-megatons, carried by the tremendous Titan missiles, could not seriously threaten Russian leadership in a war, because the Russian shelters were then simply too deep. Nuclear horror tales are just bunk. The duration and penetrating power of the heat flash and fallout radiation are also media-exaggerated.

    Severe damage to missile silos occurs at 1.25 crater radii (rupture); severe damage to underground shelters occurs at 1.5 crater radii (collapse).


    The effects from nuclear weapons that are ‘scary’ – in that they cover the widest areas – are all easily mitigated effects, like flying glass (don’t watch the fireball from behind a window), heat flash (again, look away, or better, ‘duck and cover’ under a table or just lie face down facing away to avert burns to exposed face and hands as well as glass fragments; dark clothes take time to ignite and someone lying down can put out any ignition after the flash simply by rolling over), and fallout (intense fission product radiation is due to fast decay, so it doesn’t last long: the mixture decays faster than 1/time, and at 2 days it is on average just 1% of the level at 1 hour; most of it is stopped by brick buildings).

    As the secret photos of fallout-covered trays from the 3.53 megaton 1956 Zuni test at Bikini Atoll show (see Dr Terry Triffet and Philip D. LaRiviere, Characterisation of Fallout, WT-1317, 1961, long classified ‘Secret – Restricted Data’, but now available), the fallout in significant danger areas is a clearly visible deposit of fused sand, not a mysterious death ray gas: you get hundreds of sand-like grains per square centimetre in lethal fallout areas where cover is necessary, but it is not so heavy that you’ll see the Statue of Liberty half covered by fallout, as in ‘Planet of the Apes’. It is true that a thunderstorm after an air burst can produce rainout, but that just goes down the drain, carrying the tiny air burst particles with it, and drains are deep enough to shield the gamma radiation! Triffet and LaRiviere also point out that a dirty bomb with U-238 in its casing produces a lot of Np-239 and related neutron capture products, which predominate over most fission products for a week or two, but emit very easily shielded, low-energy gamma rays. Therefore you don’t need sophisticated shelters to screen most of the radiation. The sand-like fallout doesn’t diffuse like a gas, either. G. G. Stokes found that for a spherical particle of radius r moving at speed v through air of viscosity μ, the drag force is F = 6πμrv, which allows the fallout times to be calculated.
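    Stokes' drag law can be turned into rough fallout descent times. The sketch below balances the drag F = 6πμrv against the particle's weight to get a terminal settling speed; the grain density and air viscosity are assumed illustrative values, and Stokes' law strictly holds only for small, slowly falling particles, so larger-grain results are rough:

```python
MU = 1.8e-5     # Pa*s, dynamic viscosity of air near sea level (approx.)
RHO = 2500.0    # kg/m^3, assumed density of a fused-sand fallout grain
G = 9.8         # m/s^2

def settling_speed(radius_m):
    """Terminal speed where Stokes drag 6*pi*mu*r*v equals the weight
    (4/3)*pi*r**3*rho*g, i.e. v = 2*rho*g*r**2/(9*mu).  Air buoyancy
    neglected; valid only at low Reynolds number."""
    return 2.0 * RHO * G * radius_m ** 2 / (9.0 * MU)

def fall_time_hours(radius_m, altitude_m):
    """Hours for a grain to settle from a given altitude at terminal speed."""
    return altitude_m / settling_speed(radius_m) / 3600.0

print(fall_time_hours(100e-6, 10e3))   # 100-micron-radius grain from 10 km: ~0.9 h
```

    The strong r^2 dependence is why visible sand-like grains arrive within hours while truly fine particles stay aloft far longer.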

    The ‘Force of sound’

    The sound wave is longitudinal and has pressure variations. Half a cycle is compression (overpressure) and the other half cycle of a sound wave is underpressure (below ambient pressure). When a spherical sound wave goes outward, it exerts outward pressure which pushes on your eardrum to make the noises you hear. Therefore the sound wave has outward force F = PA, where P is the sound wave pressure and A is the area it acts on. When you read Rayleigh’s textbook on ‘sound physics’ (or whatever dubious title it has), you see the fool fits a wave equation from transverse water waves to longitudinal waves, without noting that he is creating particle-wave duality by using a wave equation to describe the gross behaviour of air molecules (particles). Classical physics thus has even more wrong with it because of mathematical fudges than modern physics, but the point I’m making here is that sound has an outward force and an equal and opposite inward force following this. It is this oscillation which allows the sound wave to propagate instead of just dispersing like air blown out of your mouth.

    Note the outward force and equal and opposite inward force. This is Newton’s 3rd law. The same happens in explosions, except the outward force is then a short tall spike (due to air piling up against the discontinuity and going supersonic), while the inward force is a longer but lower pressure. A nuclear implosion bomb relies upon Newton’s 3rd law for TNT surrounding a plutonium core to compress the plutonium. The same effect in the Higgs field surrounding outward going quarks produces an inward force which gives gravity, including the compression of the earth's radius (1/3)MG/c^2 = 1.5 mm (the contraction term effect in general relativity).

    Why not fit a wave equation to the group behaviour of particles (molecules in air) and talk of sound waves? Far easier than dealing with the fact that the sound wave has an outward pressure phase followed by an equal under-pressure phase, giving an outward force and an equal-and-opposite inward reaction which allows music to propagate. Nobody hears any music, so why should they worry about the physics? Certainly they can't hear any explosions where the outward force has an equal and opposite reaction, too, which in the case of the big bang gives us gravity.



    UPDATE: copy of a comment to

    http://backreaction.blogspot.com/2009/06/this-and-that.html

    Thanks for this post! It always amazes me to see how waves interact. You'd intuitively expect two waves colliding to destroy each other, but instead they add together briefly while they superimpose, then emerge from the interaction as if nothing has happened.

    Dr Dave S. Walton tried it with logic signals (TEM - transverse electromagnetic - waves) carried by a power transmission line like a piece of flex. Logic signals were sent in opposite directions through the same transmission line.

    They behaved just like water surface waves. What's interesting is that when they overlapped, there was no electric drift current because there was (during the overlap) no gradient of electric field to cause electrons to drift. As a result, the average resistance decreased! (Resistance only occurs when you are having to do work by accelerating electrons against resistance from collisions with atoms.)

    Another example is the reflection of a weak shock wave when it hits a surface. The reflected pressure is double the incident pressure, because the leading edge of the shock wave collides with itself at the instant it begins to reflect, doubling the pressure like the superposition of two similar waves travelling in opposite directions as they pass through one another. With strong shock waves, you get more than a doubling of pressure, because there is significant dynamic or wind pressure in strong shocks (q = ½ρu^2, where ρ is density and u is the particle velocity in the shock wave) and this gets stopped by a reflecting surface, the energy being converted into additional reflected overpressure.
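    The weak-shock doubling and the strong-shock enhancement can both be seen in the standard ideal-gas (gamma = 1.4) normal-reflection formula given in Glasstone and Dolan, delta_p_r = 2p(7P0 + 4p)/(7P0 + p); a minimal sketch:

```python
P0 = 101.325   # kPa, ambient sea-level atmospheric pressure

def reflected_overpressure(p_kpa):
    """Normally reflected blast overpressure for an ideal gas (gamma = 1.4):
    2*p*(7*P0 + 4*p)/(7*P0 + p).  Tends to 2*p for weak shocks (acoustic
    doubling) and to 8*p in the strong-shock limit."""
    return 2.0 * p_kpa * (7.0 * P0 + 4.0 * p_kpa) / (7.0 * P0 + p_kpa)

print(reflected_overpressure(1.0))          # ~2.0 kPa: weak-shock doubling
print(reflected_overpressure(1e5) / 1e5)    # ~7.96: approaching the 8x limit
```

    The extra factor beyond 2 is exactly the stopped dynamic pressure q being converted into reflected overpressure, as described above.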

    33 Comments:

    At 5:06 pm, Anonymous Anonymous said...

    http://en.wikipedia.org/wiki/Talk:Effects_of_nuclear_explosions

    Dr William G. Penney used "kT" in his article on the nuclear explosive yields at Hiroshima and Nagasaki, Proc. Roy. Soc. London, 1970. Penney's paper is cited in Glasstone & Dolan (ENW 1977), although they only use it for the source of the yields of Hiroshima and Nagasaki. Penney had issues with the 1962/4 edition of Glasstone, and these are ignored. The British manual "Nuclear Weapons" (H.M. Stationery Office, 1974) uses "KT", but most sources use "kt".
    Incidentally, Penney reproduces British nuclear test data and disputes the blast wave height-of-burst curves. Penney found that the 'peaking' effect in the Mach region for air bursts is due to the heating of the air just above the ground by the heat flash, and almost disappears if you measure the blast with sensors on poles 3 m high. Penney also discredits Glasstone's dismissal of the role of blast damage in reducing the blast pressure. Accurate data on the crushing of empty petrol cans at Hiroshima by the blast showed that the overpressure decreased due to damage done to wooden houses. (You can't cause mass destruction without using up a lot of energy, which causes an irreversible loss of blast pressure with distance.) In a megaton detonation over a brick- or concrete-built city the loss of energy would reduce pressure ranges dramatically as the blast diverges outwards. All the American data come from tests in unobstructed deserts or Pacific atolls.
    I discussed this by email with Dr Hal Brode, who did the original RAND Corp computer calculations of blast waves. His first response was the standard idea that the blast doesn't necessarily lose energy by doing work (causing destruction), since the debris will pick up some of the energy and carry it outward as flying bricks, panels and glass. However it is clear that the blast loses energy by the work done in breaking walls, which is irreversibly lost in warming up the rubble. If each house destroyed takes 1 % of the blast energy, then the energy after destroying 200 houses on a radial line outward from the explosion is down to just 100(0.99^200) = 13 % of what it would be over desert. This is valid for wood-frame houses. Brick and concrete buildings absorb far more energy per building destroyed, so in a modern city the blast pressure would fall very rapidly indeed. This is non-scalable, so it is most pronounced at high yields with large destruction radii computed for open terrain. Brode did concede, when presented with Penney's data, that this effect is not taken into account in American blast calculations at present. See http://glasstone.blogspot.com for further data. - Nigel Cook (edit by User:217.137.87.10)


    The blast energy which diffracts back in is the incident blast energy minus the energy lost in causing destruction. The blast wave is always diverging, which is one of the reasons for the fall in overpressure with distance. Any sideways (non-radial) flow of energy to fill in areas where houses have been destroyed reduces the energy somewhere else. You can't get something for nothing. If you have read the 1,317-page declassified book "Capabilities of Nuclear Weapons" by Philip J. Dolan of SRI, report DNA-EM-1 (the Defense Nuclear Agency's Effects Manual Number 1), you will see that this applies to forests. The blast diffracts around the tree trunks and fills in again afterwards. This was observed in forest stands at various tests, where the blast overpressure was measured on each side and found to be similar.

    The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less than it would have been from an explosion over an open site.’

    A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1 % of the blast energy, then after an average of 200 houses in any radial line from ground zero outwards are destroyed, 87 % of the blast energy will have been lost, in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area.

    The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air burst were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8:15 am. The heat flash can’t set wood alight directly, as proved in Nevada tests: it just scorches wood unless it is painted white. You need intermediaries like paper litter and trash in a line-of-sight from the fireball before you can get direct ignition, as proved by the clarity of ‘shadowing’ remaining afterwards (such as scorch protection of tarmac and dark paint by people who were flash burned).

    In general, each building will absorb a constant amount of energy from the blast wave (ranging from about 1 % for wood frame houses to about 5 % for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures. At low pressures, the building just vibrates slightly. So the percentage of the blast energy incident on the building which is absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure.
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
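The exponential depletion law described above can be sketched numerically. This is a minimal illustration (the function name is mine) using the roughly 1% wood-frame and 5% brick/masonry absorption fractions quoted in the text:

```python
# Cumulative blast energy depletion along a radial line of buildings.
# Assumes, as stated above, that each building absorbs a roughly fixed
# fraction of the incident blast energy: ~1% wood-frame, ~5% brick/masonry.

def energy_fraction_remaining(n_buildings, loss_per_building):
    """Blast energy left after n buildings, relative to open desert."""
    return (1.0 - loss_per_building) ** n_buildings

wood = energy_fraction_remaining(200, 0.01)   # ~0.13, i.e. 13% remains
brick = energy_fraction_remaining(200, 0.05)  # effectively zero

print(f"wood-frame: {wood:.3f}, brick: {brick:.2e}")
```

The brick case illustrates the point about modern cities: at 5% per building, the blast energy is essentially exhausted well before 200 buildings have been passed.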

    -Nigel Cook 26 Dec 05


    The easiest way to deal with it is through the energy used up by the blast: the work energy used in pushing a wall a distance x with force F is E = xF. Blast waves do diffract, but this doesn't violate conservation of energy. The problem with Glasstone and Dolan 1957-77 is that the book tries to dismiss the differences between a concrete city and a desert, without evidence. It is a cut-down version of DNA-EM-1, which does contain sources. When you recognise that it was only in 1986 that they realised that gravity limits crater sizes [1] in the megaton range to 1/4 power scaling (instead of 0.3 power scaling), you get an idea of the bureaucracy of the U.S. Government nuclear effects calculation business. The secrecy prevents a wide range of critical assessment, so fundamental new ideas are ignored, and errors can persist for decades. 172.189.174.108 14:49, 13 January 2006 (UTC)

    "The energy loss per square metre of diverging blast front is small for each building: 1% loss for destroying a wood frame house. So the blast reduction is only important for cities, not for isolated buildings on a desert. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less than it would have been from an explosion over an open site.’

    "A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1 % of the blast energy, then after 200 houses are destroyed, the blast will be down to just 0.99^200 = 0.13 of what it was before, so 87 % of the blast energy will have been lost in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air bursts were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8.01 am." - http://glasstone.blogspot.com - 172.201.72.197 13:34, 30 January 2006 (UTC)

     
    At 12:49 pm, Blogger nige said...

    Simpler discussion of the theoretical basis for the E^0.25 scaling law for crater dimensions at large yields:

    The standard unclassified work on the effects of nuclear weapons is Glasstone and Dolan, U.S. Dept. of Defense, 1977. That book states that crater radii for nuclear tests of bombs burst on ground level in the same type of soil, say Nevada sand, are proportional to E^0.3, where E is the energy release in the explosion.

    The 0.3 is an empirical factor, not based on theory. Unfortunately, it's wrong, as was discovered and published in a semi-secret paper in 1987 by the U.S. Department of Defense (the correction was never incorporated into the Glasstone and Dolan book, which was last published in 1977). It turns out that all the data used for the E^0.3 scaling law comes from Nevada tests of 1-100 kilotons, and has a fair amount of scatter.

    Physical theory shows that for big yields, enormous amounts of soil are lofted from inside the crater up to the rim and ejecta on the surrounding terrain, and the energy required to lift the stuff is E = mgh, where m is mass, g is the acceleration due to gravity and h is the average height the material is raised (about half the depth of the crater). The crater mass m equals the soil density times the crater volume, which is proportional to the cube of the crater radius in surface bursts. Since the depth-to-radius ratio is approximately constant, the lift height h is proportional to the radius, so the energy used in cratering is E = mgh = (aR^3)g(bR) = abgR^4, where a and b are constants. This tells you that for big craters (where work done against gravity is the dominant use of energy in cratering), E is proportional to R^4, so the crater radius R is proportional to E^(1/4) or E^0.25.

    So it turns out that theory shows that at large yields, crater sizes are proportional to E^0.25, not E^0.3.

    The theory is predictive, because if you know the fraction of bomb energy absorbed in the ground, you can predict the crater size accurately from the physical theory: you know how much energy is used to eject mass from the ground and that, together with the density of sand, the crater shape and the acceleration due to gravity, enables you to predict theoretically the crater size. (The fraction of energy used in cratering is deduced from the fact that in a surface burst the effective blast energy yield of the bomb is found to be 1.6 times that of a free air burst of the same total energy release, rather than twice that of a free air burst as you'd expect if the ground were a perfect reflector, with the pressures from the downward shock hemisphere being reflected up and merging to form a single powerful blast hemisphere in the air; the lost energy is that which digs the hole in the ground and causes ground shock. Ideally, you should also include an analysis of how much thermal energy is converted into cratering, by subtracting the thermal yield of a surface burst, typically 15-20%, from the thermal yield of a free air burst, 35-40%, and allowing for the proportion of the thermal energy used to melt soil into spherical fallout particles of fused silica or whatever. Dr Carl F. Miller calculated in his 1963 Stanford Research Institute report, “Fallout and Radiological Countermeasures” volume 1, that the portion of bomb energy used to fuse sand into glassy fallout spheres in a Nevada surface burst ranges from 7.5% for a 1 kt bomb to 9.2% for a 100 Mt bomb.) You can then check the theoretical predictions against the 1-100 kt Nevada craters from 1950s nuclear tests.
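For illustration, the two scaling laws can be compared numerically. This is a sketch only: the 30 m reference radius for a 1 kt surface burst is a hypothetical round number, not a measured value, so only the ratio between the two laws is meaningful here.

```python
# Compare the empirical E^0.3 crater-radius scaling with the
# gravity-limited E^0.25 scaling at high yield.  The 30 m reference
# radius for 1 kt is a hypothetical round number for illustration only;
# the ratio between the two laws is the meaningful output.

def crater_radius_m(yield_kt, exponent, r1_m=30.0):
    """Crater radius scaled from a 1 kt reference burst."""
    return r1_m * yield_kt ** exponent

yield_kt = 10_000.0  # 10 Mt expressed in kilotons
r_old = crater_radius_m(yield_kt, 0.30)  # pre-1987 empirical law
r_new = crater_radius_m(yield_kt, 0.25)  # gravity-limited law

# At 10 Mt the old law overestimates the radius by roughly 58%.
print(f"E^0.30: {r_old:.0f} m, E^0.25: {r_new:.0f} m, ratio {r_old / r_new:.2f}")
```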

     
    At 6:02 pm, Blogger nige said...

    Just a small warning: some of the material and formulae in this post may contain errors, since it was taken from a draft journal manuscript and I don't know whether the units were consistently converted from pressure in psi to kPa and from feet to metres, calories/kt to J/kt or whatever.

    Readers should check formulae for typing errors in any case, for instance by comparing to the blast pressure curves from nuclear weapon tests.

    I will produce a revised blog post, or possibly a page uploaded to the domain http://quantumfieldtheory.org, to quantitatively analyse all nuclear effects data. (This blogger system is terrible to use for equations since you need to type the mark-ups for superscript and Greek symbols manually using html.)

    In the meantime, two updates of vital historical importance:

    (1) Quotation from:

    Harold L. Brode and R. L. Bjork, "Cratering from a megaton surface burst", RAND Corporation, Santa Monica, California, report RM-2600, 1960:

    "Calculations on the cratering and ground motion in a rock medium due to a two-megaton surface burst. The theoretical approach assumes a two-dimensional hydrodynamic model, and it is used to determine the motions involved in the cratering from a large-yield surface burst. The technique is found to work well and to check with experimental observations. It is shown that the primary cause of cratering for such an explosion is not "airslap," as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. The cratering action and ground shock from large-yield explosions is of primary importance to problems of hardening military installations as well as to the peaceful use of nuclear explosions."

    (2) Harold L. Brode's excellent 53-page paper, "Fireball Phenomenology" (RAND Corporation, paper P-3026, 1964), is now available to download freely from RAND Corporation as a 1.2 MB PDF document:

    http://www.rand.org/pubs/papers/2006/P3026.pdf

    Some of the charts from this report were included in Dr Brode's article, "Review of Nuclear Weapons Effects", published in the 1968 Annual Review of Nuclear Science, volume 18, pages 153-202.

    However, this report includes more detail specifically on fireball scaling laws derived from detailed numerical simulations of fireballs at various altitudes and for yields of 1.7 kt to 4 Mt. It also provides extra charts and illustrations.

    More detailed data on blast wave pressure decay rates and related details for free air bursts are available in the report

    http://www.rand.org/pubs/research_memoranda/2005/RM1363.pdf

     
    At 7:25 pm, Anonymous Anonymous said...

    Hello Nige,

    How did you get the 1-5% energy absorption figure for each house being destroyed? Did you use data provided by Penney, or were the calculations made by you?

    Your blog is pure quality, thank you for creating it.

    Arvinder

     
    At 9:11 pm, Blogger nige said...

    Hello Arvinder,

    Thanks, however this blog has many limitations and has been put together too quickly in odd spare moments. I'm going to try to build something much better when time permits, systematically going through all the effects of nuclear weapons, reviewing the details and compiling the best information. I've got a large amount of information beyond what is on this blog (which is mainly concerned with the more "controversial" - actually factually-proved-but-politically-inexpedient - aspects of the many problems).

    The 1-5% figure is the range I computed from detailed analysis of the effects on houses, and it is substantiated by Penney's research.

    For typical Japanese wood-frame houses, which were the predominant building type in Hiroshima and Nagasaki prior to the nuclear attacks, the fall in overpressure is about 1% per house on a radial line. Since the distribution of the houses is known from aerial photographs taken by the 509th prior to the attacks, the data in Penney's report, which gives the accurately measured blast overpressure at various distances from the distortion of overpressure-sensitive targets like petrol cans, blueprint containers, etc., can be compared to the peak overpressure for ideal blast waves over unobstructed desert terrain, from nuclear tests.

    The percentage of the blast energy absorbed per house encountered on any radial line from the bomb is also computable using the structural displacement due to the blast wave. Glasstone and Dolan provide a simple way of analysing the net pressure acting on a building as the blast wave diffracts around it.

    Basically, the overpressure only produces a net force on the building as a whole during the time taken for the shock front to travel the length of the building. Since the shock front is moving at supersonic velocity, this "diffraction loading" force acts for typically 0.1 second for a building 75 feet long. After that time, the overpressure equalises on all sides, and the building is simply crushed rather than pushed over.
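As a rough numerical sketch of that diffraction-loading time, t = L/U: the shock front speed U here is taken from the standard ideal-air relation U = c0*sqrt(1 + 6P/(7P0)) given by Glasstone and Dolan, with round sea-level ambient values assumed (14.7 psi, 1116 ft/s).

```python
import math

# Rough sketch of the diffraction-loading duration t = L / U, where U is
# the shock front speed.  U uses the standard ideal-air relation
# U = c0 * sqrt(1 + 6P/(7*P0)) (Glasstone and Dolan); the ambient values
# (14.7 psi, 1116 ft/s) are round sea-level numbers, an assumption here.

def shock_speed_fps(overpressure_psi, ambient_psi=14.7, c0_fps=1116.0):
    """Shock front velocity in ft/s for a given peak overpressure."""
    return c0_fps * math.sqrt(1.0 + 6.0 * overpressure_psi / (7.0 * ambient_psi))

def diffraction_time_s(building_length_ft, overpressure_psi):
    """Time for the shock front to traverse the building length."""
    return building_length_ft / shock_speed_fps(overpressure_psi)

t = diffraction_time_s(75.0, 5.0)  # ~0.06 s for a 75 ft building at 5 psi
print(f"{t:.3f} s")
```

This gives about 0.06 s, the same order of magnitude as the "typically 0.1 second" figure quoted above.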

    Another effect is the wind drag loading, which continues for the entire duration of the positive phase of the dynamic pressure. This is of course very important for long-duration blast waves, or when the air is filled with hot dust (giving a sandstorm effect) as occurs if there is a precursored blast wave.

    By calculating the overall force loading and the response of a building to that loading, the energy absorbed by the building from the blast is easily computed.

    The basic law is that the work energy E done by a force F in causing a motion along distance X in the direction of the force (i.e. the radial direction) is:

    E = FX

    Dr Harold Brode (formerly of RAND Corp., R&D associates, etc.) made an argument to me by email that the energy which is absorbed from the blast wave in the act of causing damage is not really lost because it just gets converted into the directed kinetic energy of debris from the building, and the debris proceeds to move downrange.

    This argument of his is flawed in a major way, because the velocity of the debris is much less than that of the shock wave, and in any case the debris from a destroyed building gets decelerated as it bounces along the ground.

    In addition, buildings are going to be shaken and thus absorb energy from the shock front even at pressures far lower than those which will destroy a building.

    But one advantage of Dr Brode's comment is that you can look at it as a simple way to calculate the energy depletion: the kinetic energy which is gained by the debris of a house is the minimum amount of blast energy which is lost through the work done in destroying the house.

    Obviously, when a house gets destroyed not all the energy lost goes into the debris. A lot is used to do mechanical work in bending and snapping beams, joints, bricks, cement, etc., which ends up getting degraded into thermal energy without anything gaining a significant outward velocity. But there are quite a lot of studies of how fast debris moves on average for given pressures of blast wave.

    One very simple example is study of human dummies exposed to a blast wave. When the dummies are accelerated and thrown downrange by the blast wave, they deplete some energy from the blast wave, which is turned into the kinetic energy of the dummy:

    ‘We were fortunate enough at a 5 psi station in one of the 1957 shots in Nevada to photograph the time-displacement history of a 160-pound [standing] dummy, and we were able from analysis of the movies to determine the maximal velocity reached ... about 21 feet per second. This velocity developed in 0.5 second. The total displacement of the dummy was near 22 feet ... It was this piece of empirical information that helped greatly in getting an analytical “handle” on the “treatment” of man as missile.’

    – Dr Clayton S. White, who worked on nuclear weapon blast effects at Nevada test series’ Upshot-Knothole (1953), Teapot (1955) and Plumbbob (1957), Testimony to the U.S. Congressional Hearings, 22-26 June 1959, Biological and Environmental Effects of Nuclear War, U.S. Government Printing Office, 1959, pp. 364-5.

    In this example, a 72.5 kg dummy exposed to a blast wave with a peak overpressure of 5 psi was accelerated to a peak velocity of 6.4 m/s. The energy lost from the blast wave by this one human being was:

    E = (1/2)mv^2 = 1500 Joules

    lost from the blast wave.

    Notice that the person (representative of a large missile) doesn't fly downwind at supersonic velocity, but thuds to the ground after a displacement of 6.7 metres. The kinetic energy then gets converted into mechanical energy in damaging the dummy, instead of getting converted back into blast energy. Similarly when the roof or wall of a building gets blasted off, it thuds to the ground some distance downrange, and the impact causes it to break up. The energy isn't magically returned to the blast wave.
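The dummy example above can be checked directly; this minimal sketch just applies E = (1/2)mv^2 to the quoted figures (72.5 kg, 6.4 m/s):

```python
# Check of the energy taken from the blast wave by the 160 lb (72.5 kg)
# dummy accelerated to 21 ft/s (6.4 m/s) in the 5 psi test quoted above.

def kinetic_energy_j(mass_kg, velocity_ms):
    """Kinetic energy E = (1/2) m v^2 in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

e = kinetic_energy_j(72.5, 6.4)
print(f"{e:.0f} J")  # 1485 J, i.e. the ~1500 J figure given in the text
```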

    When a lot of big buildings get smashed up by the blast, substantial amounts of energy are lost.

    The calculations I did gave a range of 1% loss per wood-frame house along a radial line from the bomb, to 5% loss per brick or masonry building. The 1% wood-frame building figure is empirically justified by the data from Hiroshima and Nagasaki.

    The result is that blast damage ranges in cities are far smaller than predicted from cube-root scaling based on unobstructed desert and ocean pressure data, particularly for higher yield weapons where predicted damage distances are great (covering large residential areas).

    I have a detailed study on this problem, with a breakdown of figures for different types of housing and also an analysis of how the energy loss varies as a function of incident overpressure (this varies for different types of buildings, but it's not a bad approximation to treat the percentage loss as a constant regardless of incident overpressure).

    The person at fault here is Samuel Glasstone himself, it seems. He edited out several vital bits of the September 1950 "Effects of Atomic Weapons" (of which he was executive editor, on an editorial board chaired by Joseph O. Hirschfelder and including David B. Parker, Arnold Kramish and Ralph Carlisle Smith) which stated on page 56 (in a section based on work done by John von Neumann and Frederick Reines of Los Alamos):

    [Paragraph 3.20] "... As to the detailed description of the target, not only are the structures of odd shape, but they have the additional complicating property of not being rigid. This means that they do not merely deflect the shock wave, but they also absorb energy from it at each reflection.

    [Paragraph 3.21] "The removal of energy from the blast in this manner decreases the shock pressure at any given distance from the point of detonation to a value somewhat below that which it would have in the absence of dissipative objects, such as buildings. The presence of such dissipation or diffraction makes it necessary to consider somewhat higher values of the pressure than would be required to produce a desired effect if there were only one structure set by itself on a rigid plane."


    Glasstone apparently edited out that section from further versions of the book (such as the 1957 renamed "Effects of Nuclear Weapons") because it contradicted the oversimplified statement on page 137 of the 1950 "Effects of Atomic Weapons", which vaguely claimed that:

    "The general experience in Japan provides support for the view ... that the effect of one building in shielding another from blast damage due to an atomic bomb would be small."

    Yes, it's about 1% for Japan, but that's missing the whole point!

    After the blast covers a radial line through 100 buildings, the cumulative 1% losses amount to a very big loss: (1 - 0.01)^100 = 0.366. Hence the peak overpressure is down by a factor of 2.7 after the blast wave has knocked down 100 wooden houses in a straight line.
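As a quick check of that arithmetic (treating the blast energy ratio as the overpressure reduction factor, which is the approximation used above):

```python
# Cumulative effect of many small absorptions: 1% of the blast energy
# lost per wooden house, compounded over 100 houses on a radial line.
# Treating the energy ratio as the overpressure reduction factor follows
# the approximation used in the text above.

energy_fraction = (1.0 - 0.01) ** 100     # ~0.366 of the energy remains
reduction_factor = 1.0 / energy_fraction  # down by a factor of ~2.7

print(f"remaining: {energy_fraction:.3f}, factor: {reduction_factor:.2f}")
```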

    By just comparing one house with its neighbour, of course you don't see any difference because the difference is only 1%.

    Glasstone probably oversimplified it in later editions because he simply didn't think it through and realise that the effect of summing a lot of small percentage energy absorptions is cumulative, adding up to a substantial reduction in overpressure at great distances in a built-up area.

    By focussing on the tiny difference between one building and the next, nothing was observed because the 1% depletion was statistically undetectable in the somewhat chaotic damage effects.

    It wasn't until Penney's analysis in 1970, two decades later, that evidence emerged that cumulative depletion of blast energy along a radial line from ground zero made substantial reductions in overpressure and damage at great distances, compared to those predicted from 1950s test data based on unobstructed terrain in deserts and over oceans.

    Kind regards,
    Nigel

     
    At 9:56 pm, Anonymous Anonymous said...

    Thank you for answering my question Nigel, much appreciated. Detailed enough for me. I look forward to more great analysis from you in the future.

    Here is a great blog with daily news on the exploding world economy, which might interest you

    http://theautomaticearth.blogspot.com/

    and another great blog on energy

    http://www.theoildrum.com/

    Wishing you the best,

    Arvinder

     
    At 10:25 pm, Blogger nige said...

    After re-reading this post on 31 May 2008, I want to emphasise that the net outward force effect from air blast is the DYNAMIC PRESSURE of the blast wave (which is a vector because it is directional - blowing radially with zero non-radial pressure) multiplied by the spherical surface area of the blast wave.

    The normal overpressure is better called the "non-directional overpressure" or non-dynamic pressure. It is a pressure which acts in all directions (basically like a change in air pressure).

    What we are concerned with when calculating the net outward force of a blast wave is the wind or dynamic pressure, which blows in the radial direction.

     
    At 5:55 pm, Anonymous Anonymous said...

    Consider two 35 Mt bursts (the planned warhead for Titan II) on Moscow: one air burst (to maximize the 15 psi overpressure area) and one ground burst. How much damage would there be from blast and fire?

     
    At 9:17 am, Blogger nige said...

    Surely the Titan II warhead was the roughly 9 Mt bomb tested as 8.9 Mt Hardtack-Oak in 1958?

    I don't see how you could have put an extremely heavy 35 Mt warhead on a Titan II missile without exceeding the payload. The missile would have had to be considerably larger to take a warhead with a mass of 20 tons or more, and it was already the size of a small space rocket!

    As for the effects of blast and heat: in the open, the 50% lethal range at Hiroshima was 1.3 miles, compared to 0.12 mile on the ground floor of modern concrete buildings.

    Scaling up this data to 35 megatons by the cube-root law (for diffraction damage and blast induced fires) gives a (35,000/15)^{1/3} = 13-fold increase, to 1.6 miles for 50% mortality in concrete buildings and 21 miles for people outdoors or in flimsy inflammable Hiroshima wooden houses full of bamboo furnishings, paper screens and easily-blast-overturned charcoal braziers which were cooking breakfast at 8:15am in Hiroshima.

    The 21 mile range would probably be reduced substantially by the cumulative energy loss of the blast in destroying successive wooden houses, but the 1.6 miles figure for people in concrete buildings is more relevant for a 35 Mt air burst over modern city buildings. The 50% lethal range for a ground surface burst would be less than 1.6 miles.
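The cube-root scaling used above can be sketched as follows (a minimal illustration; the 15 kt Hiroshima reference yield and the 0.12 mile concrete-building range are the figures from the comment above):

```python
# Cube-root scaling of blast damage ranges with yield, for ideal
# unobstructed terrain: distance scales as (Y2/Y1)**(1/3).  The 15 kt
# Hiroshima reference yield and 0.12 mile concrete-building 50% lethal
# range are the figures used in the comment above.

def scaled_range_miles(ref_range_miles, ref_yield_kt, new_yield_kt):
    """Damage range scaled by the cube-root law."""
    return ref_range_miles * (new_yield_kt / ref_yield_kt) ** (1.0 / 3.0)

factor = (35_000.0 / 15.0) ** (1.0 / 3.0)            # ~13-fold for 35 Mt
concrete = scaled_range_miles(0.12, 15.0, 35_000.0)  # ~1.6 miles

print(f"factor: {factor:.1f}, concrete-building range: {concrete:.1f} miles")
```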

     
    At 6:17 am, Anonymous Anonymous said...

    This system is terrible. My comment far exceeded the size limit, so I divided it into several parts.

    Part1.

    Thank you. But the 9-megaton yield for the Mk-53 (not B-53 or W-53) was based on Hansen's book; he assumed that the Oak device was tested at full yield, but I think it was actually tested at half yield. Space rockets were actually considered as ICBMs. Hansen gives false yields for the Mk-21: 4 Mt, but this warhead must have had 14-15 Mt, because the Mk-36 had 19 megatons. The Mk-21 was tested in a clean configuration at 4.5 Mt, but that was only 1/3 of full yield, and the clean version of the Mk-36 had a yield of 6 Mt (this version was actually built by conversion and stockpiled in very small numbers, but never deployed). Given that, (4.5/6) × 19 = 14.25 Mt. The Mk-36 was an improved version of the Mk-21, built to the military requirement of 20 Mt for cratering runways with a 50% probability of producing 50% damage.


    Sources for that data :

    Document 2: "Report of NSC Ad Hoc Working Group on the Technical Feasibility of a Cessation of Nuclear Testing," 27 March 1958.
    Hans Bethe, chairman. That is the second declassification. It can be found at the
    National Security Archive, George Washington University Library, in the Nuclear Vault section "The Making of the Limited Test Ban Treaty":
    http://www.gwu.edu/~nsarchive/NSAEBB/NSAEBB94/tb02.pdf
    and in the Letter from Captain John H. Morse, Special Assistant to the Chairman, Atomic Energy Commission, to Lewis Strauss, Chairman, Atomic Energy Commission, 14 February 1957, Secret. At the same archive, but in the section "It Is Certain There Will Be Many Firestorms":

    New Evidence on the Origins of Overkill

    National Security Archive Electronic Briefing Book No. 108.

     
    At 6:27 am, Anonymous Anonymous said...

    Part 2.
    Hansen also gives false data about the Mk-41. The Mk-41 was not related to the Poplar device; the Mk-41 was a weaponized version of Bassoon Prime, tested in Redwing Tewa (its potential yield was 25 megatons, 85% fission - UCRL-4725). It had not a simple tamper around the tertiary stage but a multi-layer tamper, to maximize neutron capture and the yield-to-weight ratio. The Mk-41 was only a Class B weapon. Given a weight of 10,500 lbs, the yield-to-weight ratio is 5.3 kt/kg. Some background info:
    "By early 1956 it was possible to fabricate TN weapons smaller than anything conceived two years earlier. AEC laboratories anticipated they could soon achieve a marked decrease in weight and marked increase in yield in four classes of TN weapons. For example, the AEC predicted that a new Class A weapon would be built that would weigh not 50,000 pounds, as had its predecessor, but 25,000 pounds, and its yield would be 60 Mt rather than the earlier 20 Mt.
    For those who had been startled by the destructive power of the 20 kt bombs in 1945 and 1946, it must have been horrifying even to contemplate the possibility of a 60 Mt weapon. Yet in early 1957 AEC laboratories indicated that such a bomb might be devised in the not distant future. And in March 1958 the USAF Chief of Staff asked for a study of the feasibility of employing weapons with yields of 100 to 1,000 Mt. The Air Staff concluded that it might be feasible but not desirable to use a 1,000-megaton weapon. Since lethal radioactivity might not be contained within the confines of an enemy state, and since it might be impractical even to test such a weapon, the Air Force Council decided in April 1959 to postpone establishing a position on the issue."
    Source: "The Air Force and Strategic Deterrence 1951-1960", USAF Historical Division Liaison Office, by George F. Lemmer, 1967. Formerly Restricted Data, declassified. Try finding it at http://alternatewars.com/WWIII/WWW3.htm.
    There are also some very nice documents there.
    The 60-megaton weapon was the highest-yield weapon that could be carried by aircraft; for example, B-70 stores included 1 Class A (25,000 pounds), 2 Class B (total 20,000 pounds), or 6-8 Class D.
    100-1,000 Mt weapons were considered as warheads for very large ICBMs. Initially the Titan 3 family was considered as ICBMs for 100 Mt warheads (for example the Titan 3M with gelled propellant). Very large boosters such as the Saturn V with storable fuel components (the USAF had plans for a solid Saturn V with the Aerojet AJ-260), Nova and SLS were considered as ICBMs.

     
    At 6:28 am, Anonymous Anonymous said...

    Part 2.
    Hansen also gives false data about the Mk-41. The Mk-41 was not related to the Poplar device; it was the weaponized version of the Bassoon Prime device tested in Redwing Tewa (potential yield 25 megatons, 85% fission; UCRL-4725). It did not have a simple tamper around the tertiary stage, but a multi-layer tamper, to maximize neutron capture and the yield-to-weight ratio. The Mk-41 was only a Class B weapon. Given its weight of 10,500 lbs, the yield-to-weight ratio is 5.3 kt/kg. Some background information:
    "By early 1956 it was possible to fabricate TN weapons smaller than anything conceived two years earlier. AEC laboratories anticipated they could soon achieve a marked decrease in weight and a marked increase in yield in four classes of TN weapons. For example, AEC predicted that a new Class A weapon would be built that would weigh not 50,000 pounds, as had its predecessor, but 25,000 pounds, and its yield would be 60 Mt rather than the earlier 20 Mt.
    For those who had been startled by the destructive power of the 20 kt bombs in 1945 and 1946, it must have been horrifying even to contemplate the possibility of a 60 Mt weapon. Yet in early 1957 AEC laboratories indicated that such a bomb might be devised in the not distant future. And in March 1958 the USAF Chief of Staff asked for a study of the feasibility of employing weapons with yields of 100 to 1,000 Mt. The Air Staff concluded that it might be feasible but not desirable to use a 1,000-megaton weapon. Since lethal radioactivity might not be contained within the confines of an enemy state, and since it might be impractical even to test such a weapon, the Air Force Council decided in April 1959 to postpone establishing a position on the issue."
    Source: "The Air Force and Strategic Deterrence 1951-1960", USAF Historical Division Liaison Office, by George F. Lemmer, 1967. Formerly Restricted Data; declassified. Try finding it at http://alternatewars.com/WWIII/WWW3.htm.
    There are also some other very nice documents there.
    The 60-megaton weapon was the highest yield weapon that could be carried by aircraft: for example, B-70 stores included 1 Class A (25,000 pounds), 2 Class B (total 20,000 pounds), or 6-8 Class D.

     
    At 8:05 am, Blogger nige said...

    Thanks!

    I bought and read Hansen's "U. S. Nuclear Weapons" (1988), and it is full of errors. He mixes up facts and make-believe.

    Some of the errors which annoyed me the most were in the data he gives from a preliminary document for the percentage of early fallout at the Redwing tests Zuni, Tewa, Flathead and Navajo (although he very usefully gave the correct percentage fission yields for those tests: 15, 87, 73 and 5% respectively). He states that the water surface bursts deposited about 30% of their activity in local fallout, while for the land surface bursts it was 48-50%. These percentages were debunked in the testimony by Dr Kellogg of the RAND Corp in the June 1957 congressional hearings "The Nature of Radioactive Fallout and Its Effects on Man": they were calculated using an incorrect conversion factor between deposited activity and dose rate. When corrected, the percentage in local fallout is much higher and more similar for both types of burst.

     
    At 8:11 am, Blogger nige said...

    Another error Hansen made concerns the Teller-Ulam mechanism: he assumes that X-rays heat up plastic foam filling the radiation channel, which then turns to plasma and compresses the fusion stage capsule.

    Actually, as Glasstone and Dolan's "Effects of Nuclear Weapons" has stated since the 1962 edition, the X-rays coming off the primary stage have a very short mean free path and will be blocked. Filling the duct between the outer casing and the fusion capsule with plastic foam would prevent the H-bomb from working: it would stop the X-rays and turn that energy into a fireball which would diverge outward instead of being focussed inward upon the fusion fuel capsule. This would fail to cause efficient compression, because it would turn the bomb into a "layer cake" that just pushes the fusion fuel away from the fission primary stage instead of efficiently compressing it. Instead of plastic foam filling the X-ray duct, there is empty space to allow the X-rays to be channelled effectively and to ablate the fusion capsule surface, so that by recoil it gets compressed.

     
    At 8:50 am, Blogger nige said...

    After interviews with the Ivy-Mike bomb designers, Richard Rhodes corrected the situation on page 486 of "Dark Sun" (Simon and Schuster, N. Y., 1996):

    "The flux of soft X-rays from the primary would flow down the inside walls of the casing several microseconds ahead of the material shock wave from the primary. ... the steel [OUTER] casing would need to be lined with some material that would absorb the [soft X-ray] radiation and ionize to a hot plasma which could radiate X-rays [in a different direction, like a mirror] to implode the secondary."

    So what the plastic foam does is act as a mirroring surface to reflect back X-rays going toward the outer casing, instead of losing that energy by having it ablate the outer casing. What you want to do is reflect those X-rays back on to the fusion fuel capsule in the middle of the radiation channel, so they ablate that, not the inside of the outer bomb casing! Rhodes on page 501 of "Dark Sun", quoting Mike designer Harold Agnew:

    "I remember seeing the guys hammer the big, thick polyethene plastic pieces inside the casing ... They hammered the plastic into the lead with copper nails."

    The plastic foam is just one inch thick and is purely a "radiation mirror" for the X-rays, reflecting as much X-ray energy back on to the fuel capsule as possible. The plastic foam doesn't fill the entire casing; it's just a relatively thin (1" thick) layer fixed to the inside of the outer case. Rhodes however was still confused and reverts to Hansen's error on page 492, where he says that the plastic foam "would expand rapidly and deliver the necessary shock [to the fusion fuel capsule]". This is untrue: the physical expansion of plastic foam and its "shocking up" into a shock wave takes far longer and exerts far less pressure than the delivery of X-ray energy.

    Plastic foam is vital to make the inside of the outer casing into a "radiation mirror" for X-rays. Instead of ablating a metal surface and wasting the energy by transforming it into mechanical kinetic energy of ablating metal vapor and recoil shock in the outer case, because of its low density (compared to a metal) the plastic foam simply heats up and re-radiates the energy it has absorbed as X-rays. This turns it into an excellent mirror for X-rays, since the incident X-ray energy is mostly re-radiated instead of being turned into mechanical shock wave.

    To understand this mechanism in slightly different context, see Glasstone and Dolan, "The Effects of Nuclear Weapons" 3rd ed., 1977:

    "Two factors affect the thermal energy radiated ... First ... a shock wave does not form so readily in the less dense air [or any less dense medium!]"

    Plastic foam is able to mirror X-rays because it is able to re-radiate X-ray energy efficiently: its low density slows down the rate of shock wave formation, eliminating that mechanism for energy loss, so the plastic foam merely heats up and re-radiates the energy as X-rays.

     
    At 8:54 am, Blogger nige said...

    (The plastic foam "mirroring" of X-ray radiation is vital to the Teller-Ulam design as evidenced by the declassified title of their 9 March 1951 joint Los Alamos LAMS-1225 paper: "On Heterocatalytic Detonations. I. Hydrodynamic Lenses and Radiation Mirrors".)

     
    At 9:00 am, Blogger nige said...

    (The "radiation mirrors" concept is the Teller contribution: this is the key to the whole breakthrough; Ulam's hydrodynamic lenses never worked for the shock wave from the fission primary which is too dense and slow to focus. It is absurd that the one key breakthrough, Teller's radiation mirroring, is completely misunderstood by Rhodes and others, because they don't understand that the difference in density between plastic foam and metal reduces shock wave formation and thus makes plastic into a relatively good radiation mirror.)

     
    At 9:02 am, Blogger nige said...

    Hansen also gives false descriptions of the Hiroshima and Nagasaki devices: the projectile in the gun-type Hiroshima device was a hollow cylinder of U235, not the other way around.

     
    At 9:35 am, Blogger nige said...

    Richard Rhodes also seems to be totally ignorant of nuclear weapons effects where he claims on page 509 that after the Mike shot: "Radioactive mud fell out, followed by heavy rain."

    This contradicts the thorough fallout collection data for Eniwetok lagoon in the weapon test report W. R. Heidt, Jr., E. A. Schuert, et al., WT-615, "Nature, Intensity and Distribution of Fallout from MIKE Shot", Project 5.4a, USNRDL, 1953. The fallout from Mike wasn't mud or heavy rain but fallout particles formed from coral grains.

    On the same page, Rhodes falsely claims that the entire crater volume of 80,000,000 tons became global fallout, when in fact only about 1% was fallout and the explosion didn't have enough energy to lift that mass: Dr Alvin C. Graves testified to the 1957 U.S. Congressional Hearings on "The Nature of Radioactive Fallout and Its Effects on Man" part 1, page 71, that approximately "a megaton of energy will lift up a tenth of a megaton of dirt." Hence 10.4 Mt Mike lifted up just ONE million tons of fallout, not 80.
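    The arithmetic from Graves' rule of thumb can be made explicit in a couple of lines (a minimal sketch; the 100,000 tons-of-dirt-per-megaton figure is his testimony quoted above):

```python
# Dr Graves' rule of thumb (1957 congressional testimony): each megaton of
# yield lifts about a tenth of a megaton (100,000 tons) of dirt.
TONS_OF_DIRT_PER_MT = 100_000

mike_yield_mt = 10.4  # Ivy-Mike total yield in megatons
lifted_tons = mike_yield_mt * TONS_OF_DIRT_PER_MT
print(f"{lifted_tons / 1e6:.2f} million tons")  # ~1 million tons, not 80 million
```

    So the 80,000,000-ton crater volume exceeds what the explosion could lift by nearly two orders of magnitude.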

    Rhodes on page 542 of "Dark Sun" reveals his complete ignorance of chemistry by claiming that the fallout was "calcium precipitated from vaporized coral". Duh! Did Rhodes ever go to elementary chemistry class and see what happens to a piece of calcium exposed to the air for a few seconds? It oxidizes into calcium oxide with the release of energy!

    Even if he didn't know that, Rhodes should have studied the facts on the fallout collected from Mike in weapon test report WT-615 or the congressional testimony by Triffet in 1959: Mike never reduced a million tons of coral to calcium metal in the first place. Long before the time the fireball had expanded enough to engulf that much coral, its temperature was just enough to reduce some of the coral to calcium oxide, CaO, which was then slaked by atmospheric moisture during the many minutes or hours of the long fallout to give calcium hydroxide (slaked lime). This is why the fallout was an irritant, and led to confusion in AEC Chairman Lewis Strauss's statement after the Marshallese and Japanese were contaminated by Bravo in March 1954.

    The AEC pointed out that skin irritation during fallout was a chemical effect of the lime in the fallout irritating skin and eyes, and stated on 11 March that the Marshallese had no beta radiation burns. The first beta burns appeared on 14 March, two weeks after exposure, as is usual for beta ray burns to skin. This made the 11 March statement look like a false statement or a cover-up.

     
    At 9:57 am, Anonymous Anonymous said...

    Thank you for this data.

    Part 3.
    Amazing, insane, but the 1,000-megaton warhead was not the largest ever considered. The excellent book "Project Orion: The True Story of the Atomic Spaceship" by George Dyson (Freeman Dyson's son) refers to two weapons. (I might be confused; they might be the same weapon.)

    1) Small: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent. Its yield must be approximately 9 gigatons.

    "A May 1959 Air Force briefing revealed some possible military uses of the Orion vehicle, including reconnaissance and early warning, electronic countermeasures, anti-ICBM, ICBM, orbital or deep space weapons. Finally, there was the Horrible Weapon: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent." These proposals were for a 4,000-ton vehicle.

    2) A 20,000-ton vehicle. See the same book and atomicrockets.com.
    One mission considered was the ability to deliver a warhead so large that it would devastate a country one-third the size of the United States.

    Given that the territory of the US is roughly 2,000 x 4,000 km, and that the maximum radius of devastation by thermal radiation from such a weapon is roughly 1,000 km, the yield must be 50-60 GT, using the formula R = 0.68Y^0.4 (R in km, Y in kt). So that weapon was more powerful than all nuclear weapons ever built. This bomb was named the DOOMSDAY BOMB, and the project DOOMSDAY ORION.
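    As a sanity check on the scaling argument above, here is a minimal sketch, assuming (as the figures imply) that the formula R = 0.68Y^0.4 takes Y in kilotons and returns R in kilometres:

```python
# Devastation-radius scaling quoted above: R = 0.68 * Y**0.4
# (assumed units: Y in kilotons, R in kilometres).
def radius_km(yield_kt: float) -> float:
    return 0.68 * yield_kt ** 0.4

def yield_gt(radius_km_target: float) -> float:
    # Invert the formula and convert kt -> GT (1 GT = 1e6 kt).
    return (radius_km_target / 0.68) ** 2.5 / 1e6

print(f"50 GT -> {radius_km(50e6):.0f} km")   # roughly 820 km
print(f"60 GT -> {radius_km(60e6):.0f} km")   # roughly 880 km
print(f"1000 km -> {yield_gt(1000):.0f} GT")  # exact inversion gives ~83 GT
```

    An exact inversion gives about 83 GT for a 1,000 km radius; the quoted 50-60 GT corresponds to roughly 820-880 km, consistent with the "roughly 1,000 km" estimate.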
    There was also the Project Orion battleship: "This one will take the form of a space battleship. In 1962 President Kennedy was shown a model of the spaceship as a last-ditch effort to keep the project alive. This abominable concept was for a ship capable of wiping out every Russian city with a population over 200,000 from orbit. Sadly the model has now been lost. Descriptions of it say it was equipped with 5 inch guns for defense, Casaba-Howitzer bombs (a directed-energy nuke), and 500 Minuteman-style 25 megaton bombs. Kennedy, like the scientists involved and any sane person, hated the idea. One year later the Limited Test Ban Treaty was signed and the project was canceled."
    Given that each 25-megaton Mk-41 weighed 10,500 lbs (4,750 kg), the missiles must have weighed at least 20 tonnes.
    I am sure that the DOOMSDAY BOMB and the continent-buster were different weapons: a 9 gigaton blast could not devastate a country one-third the size of the US; a blast of at least 50-60 GT would be needed.

    None of these projects materialised, due to political rather than technical factors (especially the disastrous actions of McNamara).
    Some further information:
    The Mk-41 was considered as a missile warhead at least three times:
    1) As an alternate warhead for NAVAHO.
    2) As a warhead aboard the Orion battleship.
    3) As the single warhead for the large Pluto (the USAF ramjet design; other proposed armaments were 2 x 32-inch warheads (10 Mt each), 5 x 21-inch warheads (5 Mt each), or 16 x 15-inch warheads (1.5 Mt each)). Source: Proceedings of the Nuclear Propulsion Conference, August 15-17, 1962, Naval Postgraduate School, Monterey, California; AEC Division of Technical Information. Try to find this report on the internet.
    PLUTO weighed only 45,000 lbs and was designed for global strike.
    Another report covers a small Pluto design: UCRL-ID-125506. Its total weapon load was 3,000-4,000 lbs, with optional configurations ranging from a single 32-inch diameter warhead, to a pair of 21-inch diameter warheads, to as many as six 15-inch diameter ejectable weapons. In all cases the total yield must be of the order of 10 megatons.

     
    At 10:04 am, Anonymous Anonymous said...

    Part 4.
    So I think the Mk-53's quoted yield is understated. There are two possibilities:
    1) The total yield was suppressed, but the fission fraction is the same as at 18 Mt (56%).
    2) The fission fraction was suppressed from 70-85% to 55%.
    The total yield must be in the 12-18 Mt range for two reasons:
    1) The Mk-53 was a Class B weapon and must have had the same yield-to-weight ratio as the Class A (60 megatons) and Class B (25 megatons) designs. The Mk-36 was of the previous generation, and its Y/W ratio cannot be applied to the Mk-53 on the basis of those documents.
    2) CIA estimates for the warheads on the R-36 (SS-9) were based on the Mk-53 and Mk-41: the light warhead was based on the Mk-53 (RV 8,000 lbs, warhead 6,500 lbs, yield 12-18 Mt); the heavy warhead was based on the Mk-41 (RV weight 13,000 lbs, yield up to 25 megatons).

    So I am sure that the Mk-53 had a yield of 12-18 megatons. Given that 6,400 lbs is about 2,900 kg, 2,900 kg x 5.3 kt/kg gives roughly 15.4 Mt. The Mk-53 was intended to have the yield of the Mk-21 but at a much smaller weight.
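    The weight-to-yield arithmetic above, made explicit (a sketch: the 5.3 kt/kg figure is the Mk-41 ratio derived earlier in this thread, applied here on the assumption that the Mk-53 achieved the same ratio):

```python
# Yield estimate from yield-to-weight ratio, as argued above.
LB_TO_KG = 0.4536
MK41_YW_KT_PER_KG = 25_000 / (10_500 * LB_TO_KG)  # ~5.25 kt/kg, rounded to 5.3 above

mk53_weight_kg = 6_400 * LB_TO_KG                 # ~2,900 kg
mk53_yield_mt = mk53_weight_kg * MK41_YW_KT_PER_KG / 1_000
print(f"{mk53_yield_mt:.1f} Mt")                  # ~15 Mt, inside the 12-18 Mt range
```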


    For the 35 Mt warhead:
    This data is on the DOE site, Office of Declassification: "Drawing Back the Curtain of Secrecy: Restricted Data Declassification Policy, 1946 to the Present", RDD-1, June 1, 1994, U.S. Department of Energy, Office of Declassification.

    Section D, Thermonuclear Weapons:

    9. "The fact that the yield-to-weight ratios of the new class of weapons would be more than twice that which can now be achieved in the design of very high yield weapons using previously developed concepts." (63-1)
    10. "The United States, without further testing, can develop a warhead of 50-60 Mt for B-52 delivery." (63-3)
    11. "... some improvement in high yield weapons design could be achieved and that new warheads -- for example a 35 Mt warhead for our Titan II -- based on these improvements could be stockpiled with confidence." (63-3)

    Another source for the 35 Mt warhead is McNamara himself (Time, "Atomic arsenal", 23 August 1963):

    "McNamara, while admitting that the treaty, by barring atmospheric testing, would prevent the U.S. from developing a 100-megaton bomb, told the Senators that without any testing the U.S. 'can develop a warhead with a yield of 50 to 60 megatons for a B-52 delivery,' and with underground tests could develop 'a 35-megaton warhead for Titan II.'"

    I think that you, Nigel, know about that weapon.
    The initial plan was for deploying 275 Titan IIs, 2,915 Minutemen (including an advanced model with a 5 Mt warhead) and 1,319 Skybolts.
    So I think that the 35 Mt warhead was the successor to the Mk-53, and the 55 Mt warhead the successor to the Mk-41. These weapons would have a Y/W ratio of around 11 kt/kg. There was also consideration of an Advanced Titan II with payload increased to 6 tons to carry the 55 Mt warhead. See Desmond Ball, Politics and Force Levels (1980), and references therein: on the Advanced Titan, the Hickey Study cited in that book; on the 275 Titan IIs and 2,915 Minutemen, "Package Plans for Strategic Retaliatory Forces", 3 July 1961.
    Imagine if McNamara had never been Secretary of Defense and those plans had been executed.
    The USAF also studied a thrust-increased Titan II capable of carrying a 100 Mt (40,000 lb) warhead.

    I would be very happy to have your comments on the DOOMSDAY BOMB, because my calculation is based on Glasstone's book. I think the 35 Mt warhead used U-235 around the fusion fuel.

     
    At 12:21 pm, Blogger nige said...

    Insane is a good word to describe the 1650-ton, 9 gigaton doomsday machine put in orbit over an enemy using Project Orion's nuclear explosion powered spaceship.

    Once you get to gigaton yields, the fire hazard can become serious. Wood won't ignite directly for yields below 100 Mt because the flash is so brief that it just ablates the outer 0.1 mm of surface into a shielding cloud of smoke, which prevents fire, as seen in nuclear tests. So you need litter like dry leaves or newspaper to ignite as tinder; the tinder then has to ignite some kind of kindling like cardboard or twigs; and the kindling, if below wood, will start a fire. Normally this chain is broken and you don't get fires, except as at Hiroshima from blast-overturned charcoal cooking braziers amid paper screens and bamboo furnishings in wooden houses, or with WWII air raid "blackout curtains" (dark coloured curtains which absorb a lot more thermal energy than modern light coloured curtains).

    But for gigaton bombs (thousands of megatons), the rate of thermal energy release is too low over large areas to ablate wood; instead the wood is slowly heated and may reach ignition temperature without the need for tinder and kindling in convenient proximity.

    In this case, you do get widespread fire hazards, which is what probably caused a climate catastrophe which killed the large cold-blooded dinosaurs (but not smaller cold blooded relatives like tortoises, etc.).

    I think that really is an insane kind of weapon, which is why Herman Kahn used such devices as examples of "doomsday" devices in On Thermonuclear War, where the theory of deterrence is applied too far (if something then goes wrong you are then in a real pickle, and make no mistake).

    In my latest and possibly last blog post, I've quoted Dyson's book Disturbing the Universe where he was designing extra-clean (low fission yield percentage) bombs for Project Orion and ended up helping Samuel Cohen's neutron bomb project. Dyson is extremely simplistic in everything, although he gets a mature viewpoint to some extent by applying his simplistic analysis from more than one direction. He applies simplistic reasoning to both opposite viewpoints, and by combining the results manages often to get a reasoned evaluation of the clearest arguments on each side of an argument.

     
    At 12:31 pm, Blogger nige said...

    The best example is the contrast between Dyson's account of quantum mechanics in his Scientific American article of 1958, "Innovation in Physics", with the account of his arguments with Richard P. Feynman over Feynman's "path integrals" approach in his 1979 book "Disturbing the Universe". Dyson in the 1958 article says quantum mechanics is purely mathematical with nothing pictorial to understand; Dyson in the 1979 book says that in 1948 after Pocono he and Feynman argued about this with Feynman hitting back and saying Einstein's grand unified theory failed because it was just equations with no mechanistic pictorial physics to it (Feynman's famous "diagrams" of quantum field interactions between fundamental particles).

    Project Orion was of course cancelled by President Kennedy after Dyson submitted crazy blueprints to Kennedy for a "Star Wars" battle cruiser spaceship. Kennedy was appalled, cancelled funding, and made sure Project Orion was buried by signing the Atmospheric Nuclear Test Ban Treaty to prevent it ever being tested with nuclear explosives. So NASA went the other way with the plaque on the Moon reading: "We came in Peace."

    Maybe if Dyson and his comrades had had a bit more insight into marketing ideas, they wouldn't have tried to sell Orion to Kennedy as a space-based warship, but just as a very cheap way to get to Mars, burning up some of the nuclear weapon stockpiles on the way.

     
    At 1:01 pm, Blogger nige said...

    Gigaton weapons effects (1000 megatons or more) differ from nuclear weapons below 100 Mt mainly in the thermal and fireball phenomena. At the upper limit of gigaton "doomsday" weapons yields, you get into the kind of global "nuclear winter" phenomena from the K-T impact 65 million years ago which ended the reign of large cold-blooded dinosaurs and gave warm-blooded mammals a chance.

    The thermal radiation emission occurs so slowly from yields above a gigaton that it doesn't ablate the surface of wood into a fire-preventing "smoke screen" over large areas like a brief thermal pulse from below 100 Mt. Instead, above a gigaton, the thermal pulse is like a long pulse of extra-intense sunlight which can gradually warm up wood to depth (not just surface heating which causes ablation), and cause wood to ignite directly.

    The fireball is also bigger than the 7 km scale height of the atmosphere so that you get massive differences in air density between the top and bottom, causing rapid "ballistic" fireball rise rather than the normal buoyant rise that you get from nuclear tests below 100 Mt at low altitudes.

     
    At 1:08 pm, Blogger nige said...

    The many gigatons of the K-T event did cause climatic effects that killed the large cold-blooded dinosaurs and many ocean species which were temperature-sensitive, but it didn't kill cold blooded smaller reptiles or mammals or many species which survived and evolved happily afterwards...

    So I think even K-T impact events are exaggerated in their effects. There is evidence that all the large mammals today have evolved from smaller ones left after the K-T impact event. E.g., there were no large mammals 65 million years ago; all the surviving mammals were very small and have since evolved into larger sized mammals. However, the simple, very low technology techniques even mouse sized mammals used to survive the K-T impact could be employed by intelligent people to survive a gigaton explosion.

    Like a half filled glass of water, the doom-mongers would view the K-T impact event as an example of the threat of extinction, as if the extinction of crazy big dinosaurs was a bad thing that extrapolates to human extinction threats. Others would see it differently and consider the survival of mammals under such circumstances as evidence of the difficulty in exterminating life and thus the survival possibilities for humans even in the worst events that have ever occurred in the history of the planet.

     
    At 1:21 pm, Blogger nige said...

    Because the earth rotates, any global smoke cloud that blocks sunlight and causes "nuclear winter" will be unevenly heated from this factor alone: the sunset and sunrise effects will cause expansion of air and thus winds to unevenly disperse smoke, allowing natural convection to occur, so that rain can be generated.

    The burning of vegetation is accompanied by the emission not just of soot and CO_2 but also of water, since the mass of most vegetation contains a lot of water. So you will get self-induced rainout when the soot and water vapour rise to high altitudes: the soot absorbs vapour and forms large "black rain" droplets which settle out under gravity, like the black rain at Hiroshima (this is something the doom-mongers are quiet about in the climatic effects context, although happy to hype in the false radioactive hazard context, ignoring the low specific radioactivity of the rain, since the radioactive mushroom cloud had been blown miles away from the target area half an hour or more before the firestorm even started).

    Wind and rainfall will thus disperse and precipitate most of the soot within a week or two. That's a long enough "winter" spell to kill large cold-blooded dinosaurs, which couldn't take shelter because of their size and couldn't metabolize food at low temperatures, so they would just come to a halt and die; but it's not long enough to kill the many species which can respond better to low temperatures.

     
    At 4:26 pm, Anonymous Anonymous said...

    Thank you very much.

    But what yield, approximately, is needed to devastate (ignite) one-third of the territory of the US? I think that 9 gigatons is not enough. The Doomsday bomb and the continent-buster must be different weapons. There were various Orions; the largest was 8,000,000 tons in weight, with both military and civilian applications.

    Apart from Orion, McNamara killed various defensive and offensive systems: Pluto, Dyna-Soar, B-70, WS-125A, Skybolt, AICBM, Advanced Minuteman, restricted deployment of strategic forces, F-108, F-12, Mk-16 (MIRV for Titan II), Sentinel, BAMBI, etc., etc.

     
    At 11:49 pm, Blogger nige said...

    "But what a yield aprox. needed to devastate (ignite) 1/3 territory of US?"

    Whether the bomb is a space burst or a low altitude burst, at that yield the fireball exceeds the scale height of the atmosphere (7 km) by a large factor so the fireball undergoes ballistic rise as described by Dolan in ENW and CNW. It goes up very quickly, and it radiates for a long time, so it basically radiates from extremely high altitude. It could certainly expose very large areas, although if there were heavy cloud cover between the fireball and the ground, that would mitigate the thermal effects. Air blast would have a very long duration at such yields, but even so it wouldn't be that impressive for a high altitude or space burst owing to the low density of the air: thermal radiation would be the primary effect carrying most of the energy.

    If you are asking for the yield needed to "devastate" such an area, you need to define the type of burst (e.g. altitude of burst) and what you mean by "devastate", e.g. what the target is (wood frame Japanese houses with blackout curtains in the windows etc., the flammable 1953 "Encore" nuclear test house full of newspapers with a big window facing ground zero with an unobstructed line-of-sight view, or modern steel and concrete city buildings?).

    Many media people and politicians would say that a 1 kt nuclear explosion anywhere in America would "devastate" just about the whole country financially and by fallout contamination, citing the number of hospital beds in the USA compared to the maximum possible numbers of burns casualties, the expense of 100% effective decontamination of large fallout areas, and so on.

     
    At 12:03 am, Blogger nige said...

    Very long thermal pulses result from gigaton yields at high altitude. This means

    (1) It becomes possible for the heat pulse to actually cause solid wood to heat up into its depth so it can ignite eventually where the yield is high enough (instead of having merely the outer tenth of a millimetre "blown off" as smoke, without fire).

    (2) The long duration of thermal energy delivery (minutes) gives people more time to take cover; failing to "duck and cover" becomes inexcusable. Everyone has time to evade a large fraction of the thermal pulse if they have some non-flammable shelter available. Over the widest area (out to the horizon as seen from the edge of the high altitude X-ray "pancake" fireball), the thermal pulse is just like the sun but more intense. So people will be able to avoid injury by taking the protective measures they would take against sunburn, such as going indoors or getting behind anything that gives some shade.

     
    At 5:29 pm, Blogger nige said...

    The biggest nuclear weapon yield I have seen thermal ignition predictions published for is 1,000 Mt (1 gigaton), in volume 1 of Robert U. Ayres's Hudson Institute report HI-519-RR, Environmental Effects of Nuclear Weapons, Fig. 2.1, page 2-3. (The reason why Ayres considered yields up to 1,000 Mt in this report was probably that the director of HI was Herman Kahn, who was interested in "doomsday devices".)

    Ayres finds that 1,000 Mt detonated at 36.6 km altitude might produce 7 cal/cm^2 thermal flux at up to 265 km away on a clear day. However, on the previous page Ayres shows that the energy needed for ignition of newspaper increases with yield, from 7 cal/cm^2 at 1 Mt to 11 at 10 Mt and 25 at 100 Mt. Although ignition of wood is possible for gigaton yields due to the long duration of the heat pulse, you still need a lot of energy to achieve ignition temperatures, which limits the distance.
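    Ayres's three ignition thresholds quoted above imply a rough power-law dependence on yield, which can be sketched like this (the fit through the endpoints and the 1,000 Mt extrapolation are my assumptions, not Ayres's figures):

```python
import math

# Ayres's ignition thresholds for newspaper (cal/cm^2) versus yield (Mt):
thresholds = {1: 7, 10: 11, 100: 25}

# Fit E = E0 * Y**n through the 1 Mt and 100 Mt endpoints.
n = math.log(25 / 7) / math.log(100 / 1)
print(f"exponent n ~ {n:.2f}")  # ignition energy rises roughly as Y**0.28

# Extrapolating (beyond Ayres's data) to a 1 gigaton burst:
print(f"~{7 * 1000 ** n:.0f} cal/cm^2 at 1,000 Mt")
```

    The rising threshold reflects the lengthening thermal pulse: more of the absorbed energy is conducted away or re-radiated before ignition temperature is reached, so longer pulses need more total energy.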

     
    At 8:44 am, Anonymous Anonymous said...

    A document that is available states that at this time (1957) another Class C weapon was under study: 18 megatons at 7,000 pounds weight.
    Source: AFSWC, Technical Report on Nuclear Weapon Development, 1957.

     
    At 8:57 am, Anonymous Anonymous said...

    "A May 1959 Air Force briefing revealed some "possible military uses of
    the Orion Vehicle," including reconnaissance and early-warning,
    electronic countermeasures ("possibility to get a terrific number of
    jammers over a given area"), anti-ICBM ("possibility of putting many
    eary intercept missiles in orbit awaiting use"), and "ICBM, orbital,
    or deep space weapons -- orders of magnitude increase in warhead
    weights -- clustered warheads -- launch platforms, etc." Finally,
    tere was "the Horrible weapon -- 1,650 -ton continent buster hanging
    over the enemy's head as a deterrent.


    USAF Orion was a special model.

    4,000-short gross weight.

    250 feet in lengt and 85 in diametr.

    ORION ICBM mean a ICBM with 2,000-s.ton throw-weight ,there would be bunch of devices around 1000mt.

    Continent-buster=Doomsday bomb.

    Weight 1650 short tons.Yield would be >20 ,000 megatons.

    It would be exploded over USSr at 400 km altitutede literally turning USSR to Hiroshima.

     
    At 8:40 pm, Anonymous Anonymous said...

    Hello, Nige.
    Are you still visiting this blog, so I can ask about some little things that confuse me?

     
