There is now a relatively long introduction at the top of this blog, due to the present nuclear threat caused by disarmament and arms control propaganda, and the dire need to get the facts out past pro-Russian media influencers and a loony mass media which has never cared about the facts of nuclear and radiation effects; please scroll down to see the blog posts. The text below in blue is hyperlinked directly to reference source materials (rather than numbered and linked to references at the end of the page), so you can right-click on it and open it in a new tab to see the source. This page is not about opinions: it provides censored-out facts that debunk propaganda, but for those who require background "authority" nonsense on censored physics facts, see stuff here or here. Regarding calling the war-mongering, world-war-causing, terrorism-regime-supporting UK disarmers of the 20th century "thugs" instead of using "kind language": I was put through the Christianity grinder as a kid, so I will quote Jesus (whom I'm instructed to follow), Matthew 23:33: "Ye serpents, ye generation of vipers, how can ye escape the damnation of Hell?" The fake "pacifist" thugs will respond with some kindly suggestion that this is "paranoid" and that "Jesus was rightfully no-platformed for his inappropriate language"! Yeah, you guys would say that, wouldn't you. Genuine pacifism requires credible deterrence! Decent people seem to be very confused about the facts of this. Jesus did not say "disarm to invite your annihilation by terrorists". You can't "forgive and forget" while the enemy is still on the warpath: they have to be stopped by deterrence, force, defense, or a combination of all three.
https://hbr.org/1995/05/why-the-news-is-not-the-truth/ (Peter Vanderwicken in the Harvard Business Review Magazine, May-June 1995): "The news media and the government are entwined in a vicious circle of mutual manipulation, mythmaking, and self-interest. Journalists need crises to dramatize news, and government officials need to appear to be responding to crises. Too often, the crises are not really crises but joint fabrications. The two institutions have become so ensnared in a symbiotic web of lies that the news media are unable to tell the public what is true and the government is unable to govern effectively. That is the thesis advanced by Paul H. Weaver, a former political scientist (at Harvard University), journalist (at Fortune magazine), and corporate communications executive (at Ford Motor Company), in his provocative analysis entitled News and the Culture of Lying: How Journalism Really Works ... The news media and the government have created a charade that serves their own interests but misleads the public. Officials oblige the media’s need for drama by fabricating crises and stage-managing their responses, thereby enhancing their own prestige and power. Journalists dutifully report those fabrications. Both parties know the articles are self-aggrandizing manipulations and fail to inform the public about the more complex but boring issues of government policy and activity. What has emerged, Weaver argues, is a culture of lying. ... The architect of the transformation was not a political leader or a constitutional convention but Joseph Pulitzer, who in 1883 bought the sleepy New York World and in 20 years made it the country’s largest newspaper. Pulitzer accomplished that by bringing drama to news—by turning news articles into stories ... His journalism took events out of their dry, institutional contexts and made them emotional rather than rational, immediate rather than considered, and sensational rather than informative. 
The press became a stage on which the actions of government were a series of dramas. ... The press swarmed on the story, which had all the necessary dramatic elements: a foot-dragging bureaucracy, a study finding that the country’s favorite fruit was poisoning its children, and movie stars opposing the pesticide. Sales of apples collapsed. Within months, Alar’s manufacturer withdrew it from the market, although both the EPA and the Food and Drug Administration stated that they believed Alar levels on apples were safe. The outcry simply overwhelmed scientific evidence. That happens all too often, Cynthia Crossen argues in her book Tainted Truth: The Manipulation of Fact in America. ... Crossen writes, “more and more of the information we use to buy, elect, advise, acquit and heal has been created not to expand our knowledge but to sell a product or advance a cause.” “Most members of the media are ill-equipped to judge a technical study,” Crossen correctly points out. “Even if the science hasn’t been explained or published in a U.S. journal, the media may jump on a study if it promises entertainment for readers or viewers. And if the media jump, that is good enough for many Americans.” ... A press driven by drama and crises creates a government driven by response to crises. Such an “emergency government can’t govern,” Weaver concludes. “Not only does public support for emergency policies evaporate the minute they’re in place and the crisis passes, but officials acting in the emergency mode can’t make meaningful public policies. 
According to the classic textbook definition, government is the authoritative allocation of values, and emergency government doesn’t authoritatively allocate values.” (Note that Richard Rhodes' Pulitzer Prize-winning books such as The Making of the Atomic Bomb, which uncritically quote Hiroshima firestorm lies and survivors' nonsense about people running around without feet, play to exactly this kind of emotional fantasy mythology of nuclear deterrence obfuscation, so loved by Uncle Sam's folk.)
This means that Russia can invade territory with relative impunity, since the West won't deter such provocations by flexible response - the aim of Russia is to push the West into a policy of massive retaliation against direct attacks only, and then use smaller provocations instead - and Russia can then use its tactical nuclear weapons to "defend" its newly invaded territories by declaring them part of Mother Russia and under Moscow's nuclear umbrella. Russia has repeatedly made it clear - for decades - that it expects a direct war with NATO to rapidly escalate into nuclear WWIII, and it has prepared civil defense shelters and evacuation tactics accordingly. Herman Kahn's public warnings of this date back to his testimony at the June 1959 Congressional Hearings on the Biological and Environmental Effects of Nuclear War, but were for decades deliberately misrepresented by most media outlets. President Kennedy's book "Why England Slept" makes crystal clear how exactly the same "pacifist" propaganda tactics in the 1930s (then it was the "gas bomb knockout blow has no defense so disarm, disarm, disarm" lie) caused war, by using fear to slow credible rearmament in the face of state terrorism. By the time the democracies finally decided to issue an ultimatum, Hitler had been converted - by pacifist appeasement - from a cautious tester of Western indecision into an overconfident aggressor who simply ignored last-minute ultimatums.
Glasstone and Dolan's 1977 Effects of Nuclear Weapons (US Government) is written in a highly ambiguous fashion, negating nearly every definite statement with a deliberately obfuscating contrary statement to leave a smokescreen legacy of needless confusion and obscurity. It omits nearly all key nuclear test data, providing instead misleading generalizations of data from generally unspecified weapon designs tested over 60 years ago, which apply only to free-field measurements on unobstructed radial lines in deserts and oceans. It makes ZERO analysis of the overall shielding of radiation and blast by their energy attenuation in modern steel and concrete cities, and even falsely denies such factors in its discussion of blast in cities and in its naive chart for predicting the percentage of burn types as a function of free-field outdoor thermal radiation, totally ignoring skyline shielding geometry (similar shielding applies to free-field nuclear radiation exposure, despite vague attempts to dismiss this with non-quantitative talk about some scattered radiation arriving from all angles). It omits the huge variations in effects due to weapon design, e.g. cleaner warhead designs and the tactical neutron bomb. It omits quantitative data on EMP as a function of burst yield, height and weapon design.
It omits most of the detailed data collected from Hiroshima and Nagasaki on casualty rates as a function of type of building or shelter and blast pressure. It fails to analyse overall standardized casualty rates for different kinds of burst (e.g. shallow underground earth penetrators convert radiation and blast energy into ground shock and cratering against hard targets like silos or enemy bunkers). It omits a detailed analysis of blast precursor effects. It omits a detailed analysis of fallout beta and gamma spectra, fractionation, specific activity (which determines the visibility of the fallout as a function of radiation hazard, and the mass of material to be removed for effective decontamination), and the data which does exist on the effect of crater soil size distribution upon the fused fallout particle size distribution (e.g. the 1962 Small Boy test on the very fine soil of Frenchman Flat gave mean fallout particle sizes far bigger than the pre-shot soil, proving that - as for Trinitite - small melted soil particles fuse together in the fireball to produce larger fallout particles, so the pre-shot soil size distribution is irrelevant for fallout analysis).
By generally (with few exceptions) lumping "effects" of all types of bursts together into chapters dedicated to specific effects, it falsely gives the impression that all types of nuclear explosions produce similar effects with merely "quantitative differences". This is untrue: air bursts eliminate fallout casualties entirely, while slight burial (e.g. earth-penetrating warheads) eliminates thermal effects (including fires and the dust "climatic nuclear winter" BS), initial radiation and severe blast, while massively increasing ground shock; the same applies to shallow underwater bursts. So a more objective treatment, to credibly deter all aggression, MUST emphasise the totally different collateral damage effects, by dedicating chapters to different kinds of burst (high altitude/space bursts, free air bursts, surface bursts, underground bursts, underwater bursts), and would include the implications of bomb design for these effects in detail. A great deal of previously secret and limited-distribution nuclear effects data has been declassified since 1977, and new research has been done. Our objectives in this review are: (a) to ensure that an objective independent analysis of the relevant nuclear weapons effects facts is placed on the record in case the current, increasingly vicious Cold War 2.0 escalates into some kind of limited "nuclear demonstration" by aggressors to try to end a conventional war by using coercive threats, (b) to ensure the lessons of tactical nuclear weapon design for deterring large-scale provocations (like the invasions of Belgium in 1914 and Poland in 1939 which triggered world wars) are re-learned, in contrast to Dulles' "massive retaliation" (incredible deterrent) nonsense, and finally (c) to provide some push to Western governments to "get real" with our civil defense, to try to make our ageing "strategic nuclear deterrent" credible.
We have also provided a detailed analysis of recently declassified Russian nuclear warhead design data, shelter data, effects data, tactical nuclear weapons employment manuals, and some suggestions for improving Western thermonuclear warheads to improve deterrence.
‘The evidence from Hiroshima indicates that blast survivors, both injured and uninjured, in buildings later consumed by fire [caused by the blast overturning charcoal braziers used for breakfast in inflammable wooden houses filled with easily ignitable bamboo furnishings and paper screens] were generally able to move to safe areas following the explosion. Of 130 major buildings studied by the U.S. Strategic Bombing Survey ... 107 were ultimately burned out ... Of those suffering fire, about 20 percent were burning after the first half hour. The remainder were consumed by fire spread, some as late as 15 hours after the blast. This situation is not unlike the one our computer-based fire spread model described for Detroit.’
- Defense Civil Preparedness Agency, U.S. Department of Defense, DCPA Attack Environment Manual, Chapter 3: What the Planner Needs to Know About Fire Ignition and Spread, report CPG 2-1A3, June 1973, Panel 27.
The Effects of the Atomic Bomb on Hiroshima, Japan, US Strategic Bombing Survey, Pacific Theatre, report 92, volume 2 (May 1947, secret):
Volume one, page 14:
“... the city lacked buildings with fire-protective features such as automatic fire doors and automatic sprinkler systems”, and pages 26-28 state the heat flash in Hiroshima was only:
“... capable of starting primary fires in exposed, easily combustible materials such as dark cloth, thin paper, or dry rotted wood exposed to direct radiation at distances usually within 4,000 feet of the point of detonation (AZ).”
Volume two examines the firestorm and the ignition of clothing by the thermal radiation flash in Hiroshima:
Page 24:
“Scores of persons throughout all sections of the city were questioned concerning the ignition of clothing by the flash from the bomb. ... Ten school boys were located during the study who had been in school yards about 6,200 feet east and 7,000 feet west, respectively, from AZ [air zero]. These boys had flash burns on the portions of their faces which had been directly exposed to rays of the bomb. The boys’ stories were consistent to the effect that their clothing, apparently of cotton materials, ‘smoked,’ but did not burst into flame. ... a boy’s coat ... started to smoulder from heat rays at 3,800 feet from AZ.” [Contrast this to the obfuscation and vagueness in Glasstone, The Effects of Nuclear Weapons!]
Page 88:
“Ignition of the City. ... Only directly exposed surfaces were flash burned. Measured from GZ, flash burns on wood poles were observed at 13,000 feet, granite was roughened or spalled by heat at 1,300 feet, and vitreous tiles on roofs were blistered at 4,000 feet. ... six persons who had been in reinforced-concrete buildings within 3,200 feet of air zero stated that black cotton blackout curtains were ignited by radiant heat ... dark clothing was scorched and, in some cases, reported to have burst into flame from flash heat [although as the 1946 unclassified USSBS report admits, most immediately beat the flames out with their hands without sustaining injury, because the clothing was not drenched in gasoline, unlike peacetime gasoline tanker road accident victims]
“... but a large proportion of over 1,000 persons questioned was in agreement that a great majority of the original fires was started by debris falling on kitchen charcoal fires, by industrial process fires, or by electric short circuits. Hundreds of fires were reported to have started in the centre of the city within 10 minutes after the explosion. Of the total number of buildings investigated [135 buildings are listed] 107 caught fire, and in 69 instances, the probable cause of initial ignition of the buildings or their contents was as follows: (1) 8 by direct radiated heat from the bomb (primary fire), (2) 8 by secondary sources, and (3) 53 by fire spread from exposed [wooden] buildings.”
ABOVE: "missile gap" propaganda debunked by secret 1970s data; Kennedy relied on US nuclear superiority. Using a flawed analysis of nuclear weapons effects at Hiroshima - based on lying unclassified propaganda reports and ignorant dismissals of civil defense shelters in Russia (again based on Hiroshima propaganda by Groves in 1945) - America allowed Russian nuclear superiority in the 1970s. Increasingly, the nuclear deterrent was used by Russia to stop the West from "interfering" with its aggressive invasions and wars, precisely Hitler's 1930s strategy of gas-bombing knockout-blow threats used to engineer appeasement. BELOW: H-bomb effects and design secrecy led to tragic mass media delusions, such as the 18 February 1950 Picture Post claim that the H-bomb could devastate Australia (inspiring Nevil Shute's novel and the movie "On the Beach", and other radiation scams like "Dr Strangelove", used by Russia to stir up the anti-Western disarmament movement to help Russia win WWIII). Dad was a Civil Defense Corps Instructor in the UK when this was done (the civil defense effectiveness and weapon effects facts on shelters at UK and USA nuclear tests were kept secret, and were not used to debunk the lying political appeasement propaganda tricks in the mass media by sensationalist "journalists" and Russian "sputniks"):
Message to mass-media journalists: please don't indulge in lying "no defence" propaganda as was done by most of the media in previous pre-war crises!
The basic fact is that nuclear weapons can deter or stop invasions, unlike the conventional weapons that cause mass destruction. Nuclear collateral damage is easily eliminated by using nuclear weapons on military targets, since for high yields, at collateral damage distances, all the effects are sufficiently delayed in arrival to allow duck-and-cover to avoid radiation and blast wind/flying debris injuries (unlike the smaller areas affected by smaller-yield conventional weapons, where there is little time on seeing the flash to duck and cover to avoid injury). As the original 1951 SECRET American Government "Handbook on Capabilities of Atomic Weapons" (limited report AD511880L, forerunner to today's still-secret EM-1) stated in Section 10.32:
"PERHAPS THE MOST IMPORTANT ITEM TO BE REMEMBERED WHEN ESTIMATING EFFECTS ON PERSONNEL IS THE AMOUNT OF COVER ACTUALLY INVOLVED. ... IT IS OBVIOUS THAT ONLY A FEW SECONDS WARNING IS NECESSARY UNDER MOST CONDITIONS TO TAKE FAIRLY EFFECTIVE COVER. THE LARGE NUMBER OF CASUALTIES IN JAPAN RESULTED FOR THE MOST PART FROM THE LACK OF WARNING."
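The warning-time point above can be sketched numerically. The following is a rough illustration only, not taken from the 1951 handbook or any other cited report: it assumes the ~1 psi overpressure radius scales with the cube root of yield (taking roughly 0.8 km for a 1 kt burst as an assumed ballpark reference value), and that at such low overpressures the shock front travels at close to the speed of sound (~340 m/s), so the flash-to-blast delay is roughly the radius divided by the sound speed.

```python
# Rough sketch of flash-to-blast "duck and cover" warning time at the ~1 psi
# radius, using cube-root yield scaling. The 1 kt reference radius (0.8 km)
# and the near-acoustic shock speed (340 m/s) are assumed ballpark values,
# not figures from the reports quoted in this post.

def one_psi_radius_km(yield_kt, ref_radius_km=0.8):
    """Approximate radius of the ~1 psi overpressure circle; scales as W**(1/3)."""
    return ref_radius_km * yield_kt ** (1.0 / 3.0)

def warning_time_s(yield_kt, sound_speed_m_s=340.0):
    """Rough flash-to-blast delay: far out, the shock moves near sound speed."""
    return one_psi_radius_km(yield_kt) * 1000.0 / sound_speed_m_s

# 100 kg conventional bomb (1e-4 kt), Hiroshima-class 20 kt, and 1 Mt:
for w in (1e-4, 20.0, 1000.0):
    print(f"{w:8g} kt: ~1 psi at {one_psi_radius_km(w):5.2f} km, "
          f"arriving ~{warning_time_s(w):5.1f} s after the flash")
```

On these assumed figures, a 1 Mt burst gives over twenty seconds between the flash and the arrival of the ~1 psi blast wave, versus a fraction of a second for a 100 kg conventional bomb, which is the quantitative point behind the duck-and-cover argument.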
As for Hitler's stockpile of 12,000 tons of tabun nerve gas, whose strategic and tactical use was deterred by proper defences (gas masks for all civilians and soldiers, together with UK stockpiles of fully trial-tested, deliverable anthrax and mustard gas retaliation capacity), it is possible to deter strategic nuclear escalation to city bombing, even within a world war against a crazy terrorist, if all the people are protected by both defence and deterrence.
J. R. Oppenheimer (opposing Teller), February 1951: "It is clear that they can be used only as adjuncts in a military campaign which has some other components, and whose purpose is a military victory. They are not primarily weapons of totality or terror, but weapons used to give combat forces help they would otherwise lack. They are an integral part of military operations. Only when the atomic bomb is recognized as useful insofar as it is an integral part of military operations, will it really be of much help in the fighting of a war, rather than in warning all mankind to avert it." (Quotation: Samuel Cohen, Shame, 2nd ed., 2005, page 99.)
‘The Hungarian revolution of October and November 1956 demonstrated the difficulty faced even by a vastly superior army in attempting to dominate hostile territory. The [Soviet Union] Red Army finally had to concentrate twenty-two divisions in order to crush a practically unarmed population. ... With proper tactics, nuclear war need not be as destructive as it appears when we think of [World War II nuclear city bombing like Hiroshima]. The high casualty estimates for nuclear war are based on the assumption that the most suitable targets are those of conventional warfare: cities to interdict communications ... With cities no longer serving as key elements in the communications system of the military forces, the risks of initiating city bombing may outweigh the gains which can be achieved. ...
‘The elimination of area targets will place an upper limit on the size of weapons it will be profitable to use. Since fall-out becomes a serious problem [i.e. fallout contaminated areas which are so large that thousands of people would need to evacuate or shelter indoors for up to two weeks] only in the range of explosive power of 500 kilotons and above, it could be proposed that no weapon larger than 500 kilotons will be employed unless the enemy uses it first. Concurrently, the United States could take advantage of a new development which significantly reduces fall-out by eliminating the last stage of the fission-fusion-fission process.’
- Dr Henry Kissinger, Nuclear Weapons and Foreign Policy, Harper, New York, 1957, pp. 180-3, 228-9. (Note that the "nuclear taboo" issue is sometimes raised against this analysis by Kissinger: if anti-nuclear lying propaganda on weapons effects makes it apparently taboo in the Western pro-Russian disarmament lobbies to escalate from conventional to tactical nuclear weapons to end a war as on 6 and 9 August 1945, then this "nuclear taboo" can supposedly be relied upon to guarantee peace for our time. However, this was disproved not only by Hiroshima and Nagasaki, but by Russian reliance on tactical nuclear weapons today, by the Russian civil defense shelter system detailed on this blog (which shows they believed a nuclear war survivable, based on the results of their own nuclear tests), and by Russia's use of nuclear weapons years after Kissinger's analysis was published and criticised - for example its 50 megaton test in 1961, and its supply of IRBMs capable of reaching East Coast mainland USA targets to the fanatical Cuban dictatorship in 1962. So much for the "nuclear taboo" being any more reliable than Chamberlain's "peace for our time" document, co-signed by Hitler on 30 September 1938! We furthermore saw how Russia respected President Obama's "red line" for the "chemical weapons taboo": Russia didn't give a toss about Western disarmament thugs' prattle about what they think is a "taboo". Russia used chlorine and sarin in Syria to keep Assad the dictator in power, and used Novichok to attack and kill in the UK in 2018, with only diplomatic expulsions in response. "Taboos" are no more valid restraints on madmen than peace treaties, disarmament agreements, Western CND books attacking civil defense or claiming that nuclear war is the new 1930s gas-war bogeyman, or "secret" stamps on scientific facts. In a word, they're crazy superstitions.)
All of this data should have been published to inform public debate on the basis for credible nuclear deterrence of war and civil defense, PREVENTING MILLIONS OF DEATHS SINCE WWII. Instead, enemy anti-nuclear and anti-civil-defence lying propaganda from Russia-supporting evil fascists was DELIBERATELY allowed to fill the public data vacuum, killing millions by allowing civil defence and war deterrence to be dismissed by ignorant "politicians" in the West, so that wars triggered by invasions with mass civilian casualties continue today for no purpose other than to promote terrorist agendas of hate, evil arrogance and lying for war, falsely labelled "arms control and disarmament for peace":
"Controlling escalation is really an exercise in deterrence, which means providing effective disincentives to unwanted enemy actions. Contrary to widely endorsed opinion, the use or threat of nuclear weapons in tactical operations seems at least as likely to check [as Hiroshima and Nagasaki] as to promote the expansion of hostilities [providing we're not in a situation of Russian-biased arms control and disarmament whereby we have no tactical weapons while the enemy has over 2,000 neutron bombs thanks to "peace" propaganda from Russian thugs]." - Bernard Brodie, p. vi of Escalation and the Nuclear Option, RAND Corp memo RM-5444-PR, June 1965.
ABOVE: Example of a possible Russian 1985 1st Cold War SLBM first-strike plan. The initial use of Russian SLBM-launched nuclear missiles from off-coast against command and control centres (i.e. nuclear explosions to destroy warning-satellite communications centres by radiation on satellites, as well as EMP against ground targets, rather than missiles launched from Russia against cities, as assumed by 100% of the Cold War left-wing propaganda) is allegedly a Russian "fog of war" strategy. Such a "demonstration strike" is aimed essentially at causing confusion about what is going on and who is responsible: it is not quick or easy to fingerprint high-altitude bursts fired by SLBMs from submerged submarines to a particular country, because you get no fallout samples from which to identify the isotopic plutonium composition. Russia could immediately deny the attack (implying, probably to the applause of the left-wingers, that this was some kind of American training exercise or computer-based nuclear weapons "accident", similar to those depicted in numerous anti-nuclear Cold War propaganda films). Thinly-veiled ultimatums and blackmail follow. America would not lose its population or even key cities in such a first strike (contrary to left-wing propaganda fiction); as with Pearl Harbor in 1941, it would lose its complacency and its sense of security through isolationism, and would be forced either into a humiliating defeat or into a major war.
Before 1941, many warned of the risks but were dismissed on the basis that Japan was a smaller country with a smaller economy than the USA, so that war was absurd (similar to the way Churchill's warnings about European dictators were dismissed by "arms-race opposing pacifists" not only in the 1930s but even before WWI; for example, Professor Cyril Joad documents in the 1939 book "Why War?" his first-hand witnessing of Winston Churchill's pre-WWI warning and call for an arms race to deter that war, dismissed by the sneering Norman Angell, who claimed an arms race would cause a war rather than avert one by bankrupting the terrorist state). It is vital to note that there is immense pressure against warnings of Russian nuclear superiority even today, most of it contradictory. E.g. the left-wing and Russian-biased "experts" whose voices are the only ones reported in the Western media (traditionally led by "Scientific American" and the "Bulletin of the Atomic Scientists") simultaneously claim that Russia poses such a terrible SLBM and ICBM nuclear threat that we must desperately disarm now, while also claiming that Russian tactical nuclear weapons probably won't work and so aren't a threat that needs to be credibly deterred! This only makes sense as Russian-siding propaganda. In similar vein, Teller-critic Hans Bethe also used to falsely "dismiss" Russian nuclear superiority by claiming (with quotes from Brezhnev about the peaceful intentions of Russia) that Russian delivery systems were "less accurate" than Western missiles (as if accuracy has anything to do with high-altitude EMP strikes, where the effects cover huge areas, or with large city targets). Such claims would then be repeated endlessly in the Western media by Russian-biased "journalists" or agents of influence, and any attempt to point out the propaganda (i.e. the real-world asymmetry: Russia uses cheap countervalue targeting on folk who don't have civil defense, whereas we need costly, accurate counterforce targeting because Russia has civil defense shelters that we don't have) became a "Reds under beds" argument, implying that the truth is dangerous to "peaceful coexistence"!
“Free peoples ... will make war only when driven to it by tyrants. ... there have been no wars between well-established democracies. ... the probability ... that the absence of wars between well-established democracies is a mere accident [is] less than one chance in a thousand. ... there have been more than enough to provide robust statistics ... When toleration of dissent has persisted for three years, but not until then, we can call a new republic ‘well established.’ ... Time and again we observe authoritarian leaders ... using coercion rather than seeking mutual accommodation ... Republican behaviour ... in quite a few cases ... created an ‘appeasement trap.’ The republic tried to accommodate a tyrant as if he were a fellow republican; the tyrant concluded that he could safely make an aggressive response; eventually the republic replied furiously with war. The frequency of such errors on both sides is evidence that negotiating styles are not based strictly on sound reasoning.” - Spencer Weart, Never at War: Why Democracies Will Not Fight One Another (Yale University Press, 1998)
The Top Secret American intelligence report NIE 11-3/8-74, "Soviet Forces for Intercontinental Conflict", warned on page 6: "the USSR has largely eliminated previous US quantitative advantages in strategic offensive forces." Page 9 of the report estimated that Russia's ICBM and SLBM launchers exceeded the USA's 1,700 during 1970, while Russia's on-line missile throw weight had exceeded the USA's one thousand tons back in 1967! Because the USA had more long-range bombers capable of carrying high-yield bombs than Russia (bombers are more vulnerable to air defences, so were not Russia's priority), it took a little longer for Russia to exceed the USA in equivalent megatons, but the 1976 Top Secret American report NIE 11-3/8-76, at page 17, shows that in 1974 Russia exceeded the 4,000 equivalent-megatons payload of USA missiles and aircraft (with less vulnerability for Russia, since most of Russia's nuclear weapons were on missiles, not in SAM-vulnerable aircraft), and by 1976 Russia could deliver 7,000 tons of payload by missiles, compared to just 4,000 tons on the USA side. These reports were kept secret for decades to protect the intelligence sources, but they were based on hard evidence. For example, in August 1974 the Hughes Aircraft Company used a specially designed ship (Glomar Explorer, 618 feet long, developed under a secret CIA contract) to recover nuclear weapons and their secret manuals from a Russian submarine which had sunk in 16,000 feet of water, while in 1976 America was able to take apart the electronics systems of a state-of-the-art Russian MiG-25 fighter flown to Japan by defector Viktor Belenko, discovering that it used exclusively EMP-hard miniature vacuum tubes, with no EMP-vulnerable solid state components.
There are four ways of dealing with aggressors: conquest (fight them), intimidation (deter them), fortification (shelter against their attacks; historically castles, walled cities and even walled countries, as with China's 1,100-mile Great Wall and Hadrian's Wall, while the USA has used the Pacific and Atlantic as successful moats against invasion, at least since British forces burned Washington D.C. in the War of 1812), and friendship (which, if you are too weak to fight, means appeasing them, as Chamberlain shook hands with Hitler for worthless peace promises). These are not mutually exclusive: you can use combinations. If you are very strong in offensive capability and also have walls to protect you while your back is turned, you can - as Teddy Roosevelt put it, quoting a West African proverb - "Speak softly and carry a big stick." But if you are weak, speaking softly makes you a target, vulnerable to coercion. This is why we don't send troops directly to Ukraine. When elected in 1960, Kennedy introduced "flexible response" to replace Dulles' "massive retaliation", addressing the need to deter large provocations without being forced to choose between the unwelcome options of "surrender or all-out nuclear war" (Herman Kahn called this flexible response "Type 2 Deterrence"). This was eroded by both Russian civil defense and Russia's emerging superiority in the 1970s: a real missiles-and-bombers gap emerged in 1972, when the USSR reached and then exceeded the USA's total of 2,200; in 1974 the USSR achieved parity at 3,500 equivalent megatons (then exceeded the USA); and today Russia has over 2,000 dedicated clean enhanced-neutron tactical nuclear weapons while we have none (except low-neutron-output B61 multipurpose bombs).
(Robert Jastrow's 1985 book How to Make Nuclear Weapons Obsolete was the first to include graphs showing the downward trend in nuclear weapon yields created by the development of miniaturized MIRV warheads for missiles and tactical weapons: he shows that the average yield of US warheads fell from 3 megatons in 1960 to 200 kilotons in 1980, while the total US stockpile fell from 12,000 megatons in 1960 to 3,000 megatons in 1980.)
The term "equivalent megatons" roughly takes account of the fact that the areas of cratering, blast and radiation damage scale not linearly with energy but as something like the 2/3 power of energy release; but note that close-in cratering scales as a significantly smaller power of energy than 2/3, while blast wind drag displacement of jeeps in open desert scales as a larger power of energy than 2/3. Comparisons of equivalent megatonnage show, for example, that WWII's 2 megatons of TNT in the form of about 20,000,000 separate conventional 100 kg (0.1 ton, i.e. 10^-7 megaton) explosives is equivalent to 20,000,000 x (10^-7)^(2/3) = 431 equivalent megatons, i.e. the damage-area equivalent of 431 separate 1 megaton explosions! The point is, nuclear weapons are not of a different order of magnitude to conventional warfare, because: (1) devastated areas don't scale in proportion to energy release, (2) the number of nuclear weapons is very much smaller than the number of conventional bombs dropped in conventional war, (3) because of radiation effects like neutrons and intense EMP, it is possible to virtually eliminate physical destruction by nuclear weapons through a combination of weapon design (e.g. very clean bombs like the 99.9% fusion Dominic-Housatonic, or the 95% fusion Redwing-Navajo) and burst altitude or depth for hard targets, creating a weapon that credibly deters invasions (without lingering local fallout radiation hazards), something none of the biased "pacifist disarmament" lobbies (which attract Russian support) tell you, and (4) people at collateral damage distances have time to take cover from radiation and flying glass, blast winds, etc from nuclear explosions (which they don't in Ukraine and Gaza, where similar blast pressures arrive more rapidly from smaller conventional explosions). There's a big problem with propaganda here.
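The equivalent-megatonnage arithmetic above can be checked with a few lines of Python (a minimal sketch of the 2/3-power damage-area scaling; the function name and default exponent are purely illustrative):

```python
# Equivalent megatonnage: damage area is assumed to scale as yield^(2/3),
# so the EMT of one bomb of yield Y megatons is Y**(2/3).

def equivalent_megatons(yield_mt: float, count: int = 1, exponent: float = 2 / 3) -> float:
    """Total equivalent megatonnage of `count` identical bombs of `yield_mt` megatons each."""
    return count * yield_mt ** exponent

# WWII: ~20,000,000 conventional bombs of 0.1 ton = 1e-7 megatons each.
ww2_emt = equivalent_megatons(1e-7, count=20_000_000)
print(round(ww2_emt))  # 431: WWII's ~2 Mt total matches 431 one-megaton explosions in damage area
```

Note how strongly the sub-linear exponent penalises splitting a given tonnage into small bombs: the same 2 Mt delivered as two 1 Mt explosions would be just 2 equivalent megatons.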
(These calculations, showing that even if strategic bombing had worked in WWII - and the US Strategic Bombing Survey concluded it failed, thus the early Cold War effort to develop and test tactical nuclear weapons and train for tactical nuclear war in Nevada field exercises - you would need over 400 one-megaton weapons to give the equivalent of WWII city destruction in Europe and Japan, are often inverted by anti-nuclear bigots to try to obfuscate the truth. What we're driving at is that nuclear weapons give you the ability to DETER the invasions that set off such wars, regardless of whether they escalate from poison gas - as feared in the 20s and 30s, hence appeasement and WWII - or nuclear. Escalation was debunked in WWII, where the only use of poison gas was in "peaceful" gas chambers, not dropped on cities. Rather than justifying appeasement, the "peaceful" massacre of millions in gas chambers justified war. But evil could and should have been deterred. The "anti-war" propagandists like Lord Noel-Baker and pals, who guaranteed immediate gas knockout blows in the 30s if we didn't appease evil dictators, were never held to account and properly debunked by historians after the war, so they converted from gas liars to nuclear liars in the Cold War and went on winning "peace" prizes for their lies, which multiplied up over the years, to keep getting news media headlines and Nobel Peace Prizes for starting and sustaining unnecessary wars and massacres by dictators. There's also a military side to this, with Lord Mountbatten, Lord Carver and Lord Zuckerman in the 70s arguing for UK nuclear disarmament and a re-introduction of conscription instead. These guys were not pacifist CND thugs who wanted Moscow to rule the world, but they were quoted by the CND attacking the deterrent, without of course any mention of their call for conscription instead. The abolition of UK conscription for national service in 1960 was attributed to the H-bomb, but was a political money-saving plot by Macmillan.
If we disarmed our nuclear deterrent and spent the money on conscription plus underground shelters, we might well be able to resist Russia as Ukraine does, until we ran out of ammunition etc. However, the cheapest and most credible deterrent is tactical nuclear weapons, to prevent the concentration of aggressive force by terrorist states.)
Duncan Campbell's War Plan UK relies on the contradiction of claiming that the deliberately exaggerated UK Government worst-case civil defense "exercises" for training purposes are "realistic scenarios" (e.g. 1975 Inside Right, 1978 Scrum Half, 1980 Square Leg, 1982 Hard Rock planning), while simultaneously claiming the very opposite about reliable UK Government nuclear effects and sheltering effectiveness data, and hoping nobody would spot his contradictory tactics. He quotes extensively from these lurid worst-case scenario UK civil defense exercises, as if they are factually defensible rather than imaginary fiction to put planners under the maximum possible stress (standard UK military policy of “Train hard to fight easy”), while ignoring the far more likely limited nuclear use scenario of Sir John Hackett's The Third World War. His real worry is the 1977 UK Government Training Manual for Scientific Advisers which War Plan UK quotes on p14: "a potential threat to the security of the United Kingdom arising from acts of sabotage by enemy agents, possibly assisted by dissident groups. ... Their aim would be to weaken the national will and ability to fight. ... Their significance should not be underestimated." On the next page, War Plan UK quotes J. B. S. Haldane's 1938 book Air Raid Precautions (ARP) on the terrible destruction Haldane witnessed on unprotected people in the Spanish civil war, without even mentioning that Haldane's point is pro-civil defense, pro-shelters, and anti-appeasement of dictatorship, the exact opposite of War Plan UK which wants Russia to run the world.
On page 124 of War Plan UK the false assertion is made that USA nuclear casualty data is "widely accepted" and true (declassified Hiroshima casualty data for people in modern concrete buildings proves it to be lies) while the correct UK nuclear casualty data is "inaccurate"; and on page 126, Duncan Campbell simply lies that the UK Government's Domestic Nuclear Shelters - Technical Guidance "ended up offering the public a selection of shelters half of which were invented in the Blitz ... None of the designs was ever tested." In fact, Frank Pavry (who had studied similar shelters surviving near ground zero at Hiroshima and Nagasaki in 1945 with the British Mission to Japan) and George R. Stanbury tested 15 Anderson shelters at the first UK nuclear explosion, Operation Hurricane in 1952, together with concrete structures; many other improvised trench and earth-covered shelters were nuclear tested by the USA and UK at trials in 1955, 1956, 1957, and 1958, and later at simulated nuclear explosions by Cresson Kearny of Oak Ridge National Laboratory in the USA, having also earlier been exposed to early Russian nuclear tests (scroll down to see the evidence of this). These were improved versions of shelters proved both in wartime bombing and at nuclear weapon trials! So War Plan UK makes no effort whatsoever to dig up the facts, and instead falsely claims the exact opposite of the plain unvarnished truth! War Plan UK shows its hypocrisy on page 383 in enthusiastically praising Russian civil defense:
"Training in elementary civil defence is given to everyone, at school, in industry or collective farms. A basic handbook of precautionary measures, Everybody must know this!, is the Russian Protect and Survive. The national civil defence corps is extensive, and is organized along military lines. Over 200,000 civil defence troops would be mobilized for rescue work in war. There are said to be extensive, dispersed and 'untouchable' food stockpiles; industrial workers are issued with kits of personal protection apparatus, said to include nerve gas counteragents such as atropine. Fallout and blast shelters are provided in the cities and in industrial complexes, and new buildings have been required to have shelters since the 1950s. ... They suggest that less than 10% - even as little as 5% - of the Soviet population would die in a major attack. [Less than Russia's loss of 12% of its population in WWII.]"
'LLNL achieved fusion ignition for the first time on Dec. 5, 2022. The second time came on July 30, 2023, when in a controlled fusion experiment, the NIF laser delivered 2.05 MJ of energy to the target, resulting in 3.88 MJ of fusion energy output, the highest yield achieved to date. On Oct. 8, 2023, the NIF laser achieved fusion ignition for the third time with 1.9 MJ of laser energy resulting in 2.4 MJ of fusion energy yield. “We’re on a steep performance curve,” said Jean-Michel Di Nicola, co-program director for the NIF and Photon Science’s Laser Science and Systems Engineering organization. “Increasing laser energy can give us more margin against issues like imperfections in the fuel capsule or asymmetry in the fuel hot spot. Higher laser energy can help achieve a more stable implosion, resulting in higher yields.” ... “The laser itself is capable of higher energy without fundamental changes to the laser,” said NIF operations manager Bruno Van Wonterghem. “It’s all about the control of the damage. Too much energy without proper protection, and your optics blow to pieces.” ' - https://lasers.llnl.gov/news/llnls-nif-delivers-record-laser-energy
NOTE: the "problem" of the very large lasers "required" to deliver ~2 MJ (roughly 0.5 kg of TNT energy) to cause larger fusion explosions of 2 mm diameter capsules of frozen D+T inside a 1 cm diameter energy-reflecting hohlraum, and the "problem" of damage to the equipment caused by the explosions, are immaterial to clean nuclear deterrent development based on this technology, because in a clean nuclear weapon, whatever laser or other power ignition system is used only has to be fired once, so it need be far less robust than the NIF lasers, which are used repeatedly. Similarly, damage done to the system by the explosion is also immaterial for a clean nuclear weapon, in which the weapon is detonated once only! This is exactly the same point which finally emerged during a critical review of the first gun-type assembly nuclear weapon, in which the fact that it would only ever be fired once (unlike a field artillery gun) enabled huge reductions in the size of the device, turning it into a practical weapon, as described by General Leslie M. Groves on p163 of his 1962 book Now it can be told: the story of the Manhattan Project:
"Out of the Review Committee's work came one important technical contribution when Rose pointed out ... that the durability of the gun was quite immaterial to success, since it would be destroyed in the explosion anyway. Self-evident as this seemed once it was mentioned, it had not previously occurred to us. Now we could make drastic reductions in ... weight and size."
This principle also applies to weaponizing NIF clean fusion explosion technology. General Groves' book was reprinted in 1982 with a useful Introduction by Edward Teller on the nature of nuclear weapons history: "History in some ways resembles the relativity principle in science. What is observed depends on the observer. Only when the perspective of the observer is known, can proper corrections be made. ... The general ... very often managed to ignore complexity and arrive at a result which, if not ideal, at least worked. ... For Groves, the Manhattan project seemed a minor assignment, less significant than the construction of the Pentagon. He was deeply disappointed at being given the job of supervising the development of an atomic weapon, since it deprived him of combat duty. ... We must find ways to encourage mutual understanding and significant collaboration between those who defend their nation with their lives and those who can contribute the ideas to make that defense successful. Only by such cooperation can we hope that freedom will survive, that peace will be preserved."
General Groves similarly comments in Chapter 31, "A Final Word" of Now it can be told:
"No man can say what would have been the result if we had not taken the steps ... Yet, one thing seems certain - atomic energy would have been developed somewhere in the world ... I do not believe the United States ever would have undertaken it in time of peace. Most probably, the first developer would have been a power-hungry nation, which would then have dominated the world completely ... it is fortunate indeed for humanity that the initiative in this field was gained and kept by the United States. That we were successful was due entirely to the hard work and dedication of the more than 600,000 Americans who comprised and directly supported the Manhattan Project. ... we had the full backing of our government, combined with the nearly infinite potential of American science, engineering and industry, and an almost unlimited supply of people endowed with ingenuity and determination."
Additionally, the test was made in a hurry before an atmospheric test ban treaty, and this rushed use of a standard air drop steel casing made the tested weapon much heavier than a properly weaponized Ripple II. The key point is that a 10 kt fission device set off a ~10 Mt fusion explosion, a very clean deterrent. Applying this Ripple II 1,000-factor multiplicative staging figure directly to this technology for clean nuclear warheads, a 0.5 kg TNT D+T fusion capsule would set off a 0.5 ton TNT second stage of LiD, which would then set off a 0.5 kt third stage "neutron bomb", which could then be used to set off a 500 kt fourth stage or "strategic nuclear weapon". In practice, this multiplication factor of 1,000, given by Ripple II in 1962 going from 10 kt to 10 Mt, may not be immediately achievable going from ~1 kg TNT yield to 1 ton TNT, so a few more tiny stages may be needed at the lower yields. But there is every reason to forecast that with enough research, improvements will be possible and the device will become a reality. It is therefore now possible, not just in "theory" or in principle but with evidence obtained from practical experimentation, using staging systems already successfully proved in 1960s nuclear weapon tests, to design 100% clean fusion nuclear warheads! Yes, the details have been worked out; yes, the technology has been tested in piecemeal fashion. All that is now needed is a new, but quicker and cheaper, Star Wars program or Manhattan Project style effort to pull the components together. This will constitute a major leap forward in the credibility of the deterrence of aggressors.
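The hypothetical staging cascade just described can be written out as a short Python sketch (purely illustrative arithmetic, assuming the Ripple II factor of ~1,000 per stage carries over unchanged to much smaller stages; the stage names and yields are the ones hypothesised in the paragraph above):

```python
# Hypothetical multiplicative staging: each stage ignites the next with a
# yield gain of ~1,000, the factor Ripple II demonstrated (10 kt -> ~10 Mt).
# All yields in kg of TNT equivalent (1 t = 1e3 kg, 1 kt = 1e6 kg, 500 kt = 5e8 kg).

STAGE_FACTOR = 1_000

def cascade(initiator_kg_tnt: float, stages: int) -> list:
    """Yields of the initiator and each successive stage, in kg of TNT equivalent."""
    yields = [initiator_kg_tnt]
    for _ in range(stages):
        yields.append(yields[-1] * STAGE_FACTOR)
    return yields

# 0.5 kg (2 MJ D+T capsule) -> 0.5 t LiD stage -> 0.5 kt "neutron bomb" -> 500 kt strategic stage
print(cascade(0.5, 3))  # [0.5, 500.0, 500000.0, 500000000.0]
```

If the full factor of 1,000 is not achievable at the smallest stages, the same loop with a smaller per-stage factor simply requires more stages to reach the same final yield.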
ABOVE: as predicted, the higher the input laser pulse for the D+T initiator of a clean multiplicatively-staged nuclear deterrent, the lower the effect of plasma instabilities and asymmetries, and the greater the fusion burn. To get ignition (where the x-ray energy injected into the fusion hohlraum by the laser is less than the energy released in the D+T fusion burn) they have had to use about 2 MJ delivered in 10 ns or so, equivalent to 0.5 kg of TNT. But for deterrent use, why use such expensive, delicate lasers? Why not just use one-shot miniaturised x-ray tubes with megavolt electron acceleration, powered by a suitably ramped pulse from a chemical explosion driving magnetic flux compression current generation? At 10% efficiency, you need just 0.5 x 10 = 5 kg of TNT! Even at 1% efficiency, 50 kg of TNT will do. Once the D+T gas capsule's hohlraum is well over 1 cm in size, to minimise the risk of imperfections that cause asymmetries, you no longer need focussed laser beams to enter tiny apertures. You might even be able to integrate many miniature flash x-ray tubes (each designed to burn out when firing one pulse of a MJ or so) into a special hohlraum. Humanity urgently needs a technological arms race akin to Reagan's Star Wars project, to deter the dictators from invasions and WWIII. In the conference video above, a question was asked about the real efficiency of the enormous repeat-pulse-capable laser system (a capability not required for a nuclear weapon, whose components only need to work once, unlike lab equipment): the answer is that 300 MJ was required by the lab lasers to fire a 2 MJ pulse into the D+T capsule's x-ray hohlraum, i.e. the lasers are only 0.7% efficient! So why bother? We know - from the practical use of incoherent fission primary stage x-rays to compress and ignite fusion capsules in nuclear weapons - that you simply don't need coherent photons from a laser for this purpose.
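The driver-energy arithmetic in this paragraph can be checked with a few lines of Python (rough figures only; the 4.184 MJ/kg TNT equivalence is the standard convention, and everything else follows the paragraph's own assumed efficiencies):

```python
# How much chemical explosive energy would a one-shot driver need in order
# to put ~2 MJ of x-rays into the hohlraum, at a given conversion efficiency?

TNT_MJ_PER_KG = 4.184  # standard TNT energy equivalence

def driver_tnt_kg(delivered_mj: float, efficiency: float) -> float:
    """kg of TNT equivalent needed to deliver `delivered_mj` at `efficiency`."""
    return delivered_mj / efficiency / TNT_MJ_PER_KG

print(round(driver_tnt_kg(2.0, 0.10), 1))   # ~4.8 kg of TNT at 10% efficiency
print(round(driver_tnt_kg(2.0, 0.01)))      # ~48 kg of TNT at 1% efficiency
print(f"NIF wall-plug efficiency: {2 / 300:.1%}")  # 2 MJ out for 300 MJ in: ~0.7%
```

The exact figures (4.8 and 48 kg) round to the paragraph's 5 and 50 kg, since 2 MJ is slightly less than 0.5 kg of TNT.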
The sole reason they are approaching the problem with lasers is that they began their lab experiments decades ago with microscopic sized fusion capsules, and for those you need a tightly focussed beam to insert energy through a tiny hohlraum aperture. But now that they are finally achieving success with much larger fusion capsules (to minimise the instabilities that caused the early failures), it may be time to change direction. A whole array of false "no-go theorems" can and will be raised by ignorant charlatan "authorities" against any innovation; this is the nature of the political world. There is some interesting discussion of why clean bombs aren't in existence today: basically, the idealized theory (which works fine for big H-bombs but ignores small-scale asymmetry problems which are important only at low ignition energy) underestimated the input energy required for fusion ignition by a factor of 2000:
In the final diagram above, we illustrate an example of what could very well occur in the near future, just to really poke a stick into the wheels of "orthodoxy" in nuclear weapons design: is it possible to just use a lot of pulsed-current-driven microwave tubes from kitchen microwave ovens (perhaps hardened for higher currents, perhaps not), channelling their energy using waveguides (simply metal tubes, i.e. electrical Faraday cages, which reflect and thus contain microwaves) into the hohlraum, and make the pusher out of dipole molecules (like common salt, NaCl) which are good absorbers of microwaves (as everybody knows from cooking in microwave ovens)? It would be extremely dangerous, not to mention embarrassing, if this worked, but nobody had done any detailed research into the possibility due to groupthink orthodoxy and conventional boxed-in thinking! Remember, the D+T capsule just needs extreme compression, and this can be done by any means that works. Microwave technology is now very well-established. It's no good trying to keep anything of this sort "secret" (either officially or unofficially) since, as history shows, dictatorships are the places where "crackpot"-sounding ideas (such as double-primary Project "49" Russian thermonuclear weapon designs, Russian Sputnik satellites, Russian Novichok nerve agent, Nazi V1 cruise missiles, Nazi V2 IRBM's, etc.) can be given priority by loony dictators. We have to avoid, as Edward Teller put it (in his secret commentary debunking Bethe's false history of the H-bomb, written AFTER the Teller-Ulam breakthrough), "too-narrow" thinking (which Teller said was still in force on H-bomb design even then). Fashionable hardened orthodoxy is the soft underbelly of "democracy" (a dictatorship by the majority, which is always too focussed on fashionable ideas and dismissive of alternative approaches in science and technology).
Dictatorships (minorities against majorities) have repeatedly demonstrated a lack of concern for the fake "no-go theorems" used by Western anti-nuclear "authorities" to ban anything but fashionable groupthink science.
ABOVE: 1944-dated film of the Head of the British Mission to Los Alamos, neutron discoverer James Chadwick, explaining in detail to Americans how hard it was for him to discover the neutron, taking 10 years on a shoe-string budget, mostly due to having insufficiently strong sources of alpha particles to bombard nuclei in a cloud chamber! The idea of the neutron came from his colleague Rutherford. Chadwick reads his explanation while rapidly rotating a pencil in his right hand, perhaps indicating the stress he was under in 1944. In 1946, when British participation at Los Alamos ended, Chadwick wrote the first detailed secret British report on the design of a three-stage hydrogen bomb, another project that took over a decade. In the diagram below, it appears that the American Mk17 only had a single secondary stage, like the similar-yield 1952 Mike design. The point here is that popular misunderstanding of the simple mechanism of x-ray energy transfer for higher yield weapons may be creating a dogmatic attitude even in secret nuclear weapon design labs, where orthodoxy is followed too rigorously. The Russians (see quotes on the latest blog post here) state they used two entire two-stage thermonuclear weapons with a combined yield of 1 megaton to set off their 50 megaton test in 1961. If true, you can indeed use two-stage hydrogen bombs as an "effective primary" to set off another secondary stage of much higher yield. Can this be reversed, in the sense of scaling it down so you have several bombs-within-bombs, all triggered by a really tiny first stage? In other words, can it be applied to neutron bomb design?
The 1946 Report of the British Mission to Japan, The Effects of the Atomic Bombs at Hiroshima and Nagasaki, compiled by a team of 16 in Hiroshima and Nagasaki during November 1945, which included 10 UK Home Office civil defence experts (W. N. Thomas, J. Bronowski, D. C. Burn, J. B. Hawker, H. Elder, P. A. Badland, R. W. Bevan, F. H. Pavry, F. Walley, O. C. Young, S. Parthasarathy, A. D. Evans, O. M. Solandt, A. E. Dark, R. G. Whitehead and F. G. S. Mitchell) found: "Para. 26. Reinforced concrete buildings of very heavy construction in Hiroshima, even when within 200 yards of the centre of damage, remained structurally undamaged. ... Para 28. These observations make it plain that reinforced concrete framed buildings can resist a bomb of the same power detonated at these heights, without employing fantastic thicknesses of concrete. ... Para 40. The provision of air raid shelters throughout Japan was much below European standards. ... in Hiroshima ... they were semi-sunk, about 20 feet long, had wooden frames, and 1.5-2 feet of earth cover. ... Exploding so high above them, the bomb damaged none of these shelters. ... Para 42. These observations show that the standard British shelters would have performed well against a bomb of the same power exploded at such a height. Anderson shelters, properly erected and covered, would have given protection. Brick or concrete surface shelters with adequate reinforcement would have remained safe from collapse. The Morrison shelter is designed only to protect its occupants from the debris load of a house, and this it would have done. Deep shelters such as the refuge provided by the London Underground would have given complete protection. ... Para 60. Buildings and walls gave complete protection from flashburn."
Glasstone and Dolan's 1977 Effects of Nuclear Weapons, in Table 12.21 on p547, flunks making this point by giving the data without citing its source, leaving readers unable to check its credibility: it correlated 14% mortality (106 killed out of 775 people in Hiroshima's Telegraph Office) with "moderate damage" at 500m in Hiroshima (the uncited "secret" source was NP-3041, Table 12, applying to unwarned people inside modern concrete buildings).
"A weapon whose basic design would seem to provide the essence of what Western morality has long sought for waging classical battlefield warfare - to keep the war to a struggle between the warriors and exclude the non-combatants and their physical assets - has been violently denounced, precisely because it achieves this objective." - Samuel T. Cohen (quoted in Chapman Pincher, The secret offensive, Sidgwick and Jackson, London, 1985, Chapter 15: The Neutron Bomb Offensive, p210).
The reality is, dedicated enhanced neutron tactical nuclear weapons were used to credibly deter the concentrations of force required for triggering of WWIII during the 1st Cold War, and the thugs who support Russian propaganda for Western disarmament got rid of them on our side, but not on the Russian side. Whether air burst, or used as subsurface earth penetrators of relatively low fission yield (where the soil converts energy that would otherwise escape as blast and radiation into ground shock for destroying buried tunnels; new research on cratering shows that a 20 kt subsurface burst creates similar effects on buried hard targets to a 1 Mt surface burst), neutron bombs cause none of the vast collateral damage to civilians that we see now in Ukraine and Gaza, or that we saw in WWII and the wars in Korea and Vietnam. This is 100% contrary to CND propaganda, which is a mixture of lying about nuclear explosion collateral damage, escalation/knockout blow propaganda (of the type used to start WWII by appeasers) and lying about the designs of nuclear weapons, in order to ensure that the Western side (but not the thugs) gets only incredible "strategic deterrence" that can't deter the invasions that start world wars (e.g. Belgium in 1914 and Poland in 1939). "Our country entered into an agreement in Budapest, Hungary when the Soviet Union was breaking up that we would guarantee the independence of Ukraine." - Tom Ramos. There really is phoney nuclear groupthink left agenda politics at work here: credible relatively clean tactical nuclear weapons are banned in the West but stocked by Russia, which has civil defense shelters to make its threats far more credible than ours! We need low-collateral-damage enhanced-neutron and earth-penetrator options for the new Western W93 warhead, or we remain vulnerable to aggressive coercion by thugs, and invite invasions.
Ambiguity, the current policy (used to "justify" secrecy on just what we would do in any scenario), actually encourages experimental provocations by enemies to test what we are prepared to do (if anything), just as it did in 1914 and the 1930s.
ABOVE: 0.2 kt (tactical yield range) Ruth nuclear test debris, with the lower 200 feet of the 300 ft steel tower surviving in Nevada, 1953. Note that the yield of the tactical invasion-deterrent Mk54 Davy Crockett was only 0.02 kt, 10 times less than the 0.2 kt Ruth.
It should be noted that cheap and naive "alternatives" to credible deterrence of war were tried in the 1930s and during the Cold War and afterwards, with disastrous consequences. Heavy "peaceful" oil sanctions and other embargoes against Japan for its invasion of China between 1931-7 resulted in the plan for the Pearl Harbor surprise attack of 7 December 1941, with subsequent escalation to incendiary city bombing followed by nuclear warfare against Hiroshima and Nagasaki. Attlee's pressure on Truman to guarantee no use of tactical nuclear weapons in the Korean War (leaked straight to Stalin by the Cambridge Spy Ring) led to an escalation of that war causing the total devastation of that country's cities by conventional bombing (a sight witnessed by Sam Cohen, which motivated his neutron bomb deterrent of invasions), until Eisenhower was elected and reversed Truman's decision, leading not to the "escalatory Armageddon" asserted by Attlee, but instead to a peaceful armistice! Similarly, as Tom Ramos argues in From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Kennedy's advisers convinced him to go ahead with the moonlit 17 April 1961 Bay of Pigs invasion of Cuba without any USAF air support, which led to precisely what they claimed they would avoid: an escalation of aggression from Russia in Berlin, with the Berlin Wall going up on 13 August 1961, because showing any weakness to an enemy, as in the bungled invasion of Cuba, is always a green light to dictators to go ahead with revolutions, invasions and provocations everywhere else. Rather than the widely hyped claims from disarmers and appeasers about "weakness bringing peace by demonstrating to the enemy that they have nothing to fear from you", the opposite result always occurs. The paranoid dictator seizes the opportunity to strike first.
Similarly, withdrawing from Afghanistan in 2021 was a clear green light to Russia to go ahead with a full scale invasion of Ukraine, reigniting the Cold War. Von Neumann and Morgenstern's minimax theorem for winning games - minimise the maximum possible loss - fails with offensive action in war because it sends a signal of weakness to the enemy, which does not treat war as a game with rules to be obeyed. Minimax is only valid for defense, such as the civil defense shelters used by Russia to make their threats more credible than ours. The sad truth is that cheap fixes don't work, no matter how much propaganda is behind them. You either need to militarily defeat the enemy or at least economically defeat them using proven Cold War arms race techniques (not merely ineffective sanctions, which they can bypass by making alliances with Iran, North Korea, and China). Otherwise, you are negotiating peace from a position of weakness, which is called appeasement, or collaboration with terrorism.
"Following the war, the Navy Department was intent to see the effects of an atomic blast on naval warships ... the press was invited to witness this one [Crossroads-Able, 23.5 kt at 520 feet altitude, 1 July 1946, Bikini Atoll]. ... The buildup had been too extravagant. Goats that had been tethered on warship decks were still munching their feed, and the atoll's palm trees remained standing, unscathed. The Bikini test changed public attitudes. Before July 1, the world stood in awe of a weapon that had devastated two cities and forced the Japanese Empire to surrender. After that date, the bomb was still a terrible weapon, but a limited one." - Tom Ramos (LLNL nuclear weaponeer and nuclear pumped X-ray laser developer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute Press, 2022, pp43-4.
ILLUSTRATION: the threat of WWII and the need to deter it was massively derided by popular pacifism, which tended to make "jokes" of the Nazi threat until too late (an example of 1938 UK fiction on this is above; Charlie Chaplin's film "The Great Dictator" is another example), so three years after the Nuremberg Laws and five years after illegal rearmament was begun by the Nazis, crowds of UK "pacifists" in Downing Street, London, supported friendship with the top racist, dictatorial Nazis in the name of "world peace". The Prime Minister used underhand techniques to try to undermine appeasement critics like Churchill, and also later to get W. E. Johns fired from the editorships of both Flying (weekly) and Popular Flying (monthly), to make it appear everybody "in the know" agreed with his actions, hence the contrived "popular support" for collaborating with terrorists depicted in these photos. The same thing persists today; the 1920s and 1930s "pacifist" was likewise driven by "escalation" and "annihilation" claims that explosions, fire and WMD poison gas would kill everybody in a "knockout blow" immediately any war broke out.
"Fuchs reasoned that [the very low energy, 1-10 keV, approximately 10-100 times lower energy than medical] x-rays from the [physically separated] uranium explosion would reach the tamper of beryllium oxide, heat it, ionize the constituents and cause them to implode - the 'ionization implosion' concept of von Neumann but now applied to deuterium and tritium contained within beryllium oxide. To keep the radiation inside the tamper, Fuchs proposed to enclose the device inside a casing impervious to radiation. The implosion induced by the radiation would amplify the compression ... and increase the chance of the fusion bomb igniting. The key here is 'separation of the atomic charge and thermonuclear fuel, and compression of the latter by radiation travelling from the former', which constitutes 'radiation implosion'." (This distinction between von Neumann's "ionization implosion" INSIDE the tamper, where the denser tamper expands and thus compresses the lower density fusion fuel inside it, and Fuchs' OUTSIDE capsule "radiation implosion", is key even today for isentropic H-bomb design; it seems Teller's key breakthroughs were not separate stages or implosion but rather radiation mirrors and ablative recoil shock compression, where radiation is used to ablate a dense pusher in Sausage designs like Mike in 1952 etc., a distinction not to be confused with the 1944 von Neumann and 1946 Fuchs implosion mechanisms!)
It appears Russian H-bombs used von Neumann's "ionization implosion" and Fuchs's "radiation implosion" for RDS-37 on 22 November 1955 and also in their double-primary 23 February 1958 test and subsequently, where their fusion capsules reportedly contained a BeO or other low-density outer coating, which would lead to quasi-isentropic compression, more effective for low density secondary stages than purely ablative recoil shock compression. This accounts for the continuing classification of the April 1946 Superbomb Conference (the extract of 32 pages linked here is so severely redacted that it is less helpful than the brief but very lucid summary of its technical content, in the declassified FBI compilation of reports concerning data Klaus Fuchs sent to Stalin, linked here!). Teller had all the knowledge he needed in 1946, but didn't go ahead because he made the stupid error of killing progress off by his own "no-go theorem" against compression of fusion fuel. Teller did a "theoretical" calculation in which he claimed that compression has no effect on the amount of fusion burn because the compressed system is simply scaled down in size so that the same efficiency of fusion burn occurs, albeit faster, and then stops as the fuel thermally expands. This was wrong. Teller discusses the reason for his great error in technical detail during his tape-recorded interview by Chuck Hansen at Los Alamos on 7 June 1993 (C. Hansen, Swords of Armageddon, 2nd ed., pp. II-176-7):
"Now every one of these [fusion] processes varied with the square of density. If you compress the thing, then in one unit's volume, each of the 3 important processes increased by the same factor ... Therefore, compression (seemed to be) useless. Now when ... it seemed clear that we were in trouble, then I wanted very badly to find a way out. And it occurred to me that an unprecedentedly strong compression will just not allow much energy to go into radiation. Therefore, something had to be wrong with my argument and then, you know, within minutes, I knew what must be wrong ... [energy] emission occurs when an electron and a nucleus collide. Absorption does not occur when a light quantum and a nucleus ... or ... electron collide; it occurs when a light quantum finds an electron and a nucleus together ... it does not go with the square of the density, it goes with the cube of the density." (This very costly theoretical error, wasting the five years 1946-51, could have been resolved by experimental nuclear testing. There is always a risk of this in theoretical physics, which is why experiments are done to check calculations before prizes are handed out. The ban on nuclear testing is a luddite opposition to technological progress in improving deterrence.)
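Teller's recollection can be condensed into a toy scaling sketch (illustrative arithmetic only, in arbitrary units; the function name loss_advantage is invented here, and no real radiation-transport physics is attempted):

```python
# Toy model of Teller's realization, per the Hansen interview quoted above:
# bremsstrahlung EMISSION is a two-body process (electron + nucleus), so its
# volumetric rate scales as density^2, like the fusion burn rate itself; but
# RE-ABSORPTION requires a photon to find an electron and a nucleus together,
# so it scales as density^3. Compression therefore traps radiation.

def loss_advantage(compression):
    """Ratio of radiation re-absorption to emission, relative to the
    uncompressed fuel: it grows linearly with the compression factor."""
    emission = compression ** 2     # ~ density^2 (two-body)
    absorption = compression ** 3   # ~ density^3 (three-body)
    return absorption / emission

# A 10-fold compression makes re-absorption 10 times more competitive
# against emission, so radiative cooling is relatively suppressed:
print(loss_advantage(10.0))  # 10.0
```

This is exactly the cube-versus-square distinction Teller describes: the net radiative loss shrinks in relative terms as the fuel is compressed, so compression is not "useless" after all.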
(This 1946-51 theoretical "no-go theorem" anti-compression error of Teller's - which was contrary to the suggestion of compression at the April 1946 superbomb conference, as Teller himself acknowledged on 14 August 1952, and which was corrected only in February 1951, after Ulam's argument that month for fission core compression by lens-focussed primary stage shock waves and comparison with the known validity of compression in pure fission cores - did not merely lead to Teller's dismissal of vital compression ideas. It also led to his false equations - exaggerating the cooling effect of radiation emission - causing underestimates of fusion efficiency in all theoretical fusion calculations until 1951! For this reason, Teller later repudiated the calculations that allegedly showed his Superbomb would fizzle; he argued that if it had been tested in 1946, the detailed data obtained - regardless of whatever happened - would at least have tested the theory, which would have led to rapid progress, because the theory was wrong. The entire basis of the cooling of fusion fuel by radiation leaking out was massively exaggerated until Lawrence Livermore weaponeer John Nuckolls showed that there is a very simple solution: use baffle re-radiated, softened x-rays for isentropic compression of low-density fusion fuel, e.g. very cold 0.3 kev x-rays rather than the usual 1-10 kev cold-warm x-rays emitted directly from the fission primary. Since the radiation losses are proportional to the fourth power of the x-ray energy or temperature, losses are virtually eliminated, allowing very efficient staging, as in Nuckolls' 99.9% clean 10 Mt Ripple II, detonated on 30 October 1962 at Christmas Island. Teller's classical Superbomb was actually analyzed by John C. Solem in a 15 December 1978 report, A modern analysis of Classical Super, LA-07615, identified in a Freedom of Information Act request filed by mainstream historian Alex Wellerstein (FOIA 17-00131-H, 12 June 2017), according to a list of FOIA requests at https://www.governmentattic.org/46docs/NNSAfoiaLogs_2016-2020.pdf. However, a Google search for the documents Dr Wellerstein requested shows only a few online yet, at the US Government DOE OpenNet/OSTI database or elsewhere, e.g. LA-643 by Teller, On the development of Thermonuclear Bombs, dated 16 Feb. 1950. The page linked here stating that report was "never classified" is mistaken! One oddity about Teller's anti-compression "no-go theorem" is that even if fusion rates were independent of density, you would still want compression of fissile material in a secondary stage such as a radiation imploded Alarm Clock, because the whole basis of implosion fission bombs is the benefit of compression; another issue is that even if fusion rates are unaffected by density, inward compression would still help to delay the expansion of the fusion system, which leads to cooling and quenching of the fusion burn.)
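The fourth-power radiation-loss scaling behind Nuckolls' soft x-ray isentropic compression, mentioned above, is just blackbody (Stefan-Boltzmann type) scaling of radiation energy density with temperature. A minimal sketch, where the 3 kev reference for directly emitted primary x-rays is an assumed round figure within the 1-10 kev range quoted above:

```python
def relative_radiation_loss(t_kev, t_ref_kev=3.0):
    """Blackbody radiation energy density scales as T^4, so energy lost into
    radiation falls as the fourth power of the driving x-ray temperature."""
    return (t_kev / t_ref_kev) ** 4

# Re-radiating ~3 kev primary x-rays down to ~0.3 kev soft x-rays cuts the
# radiation losses by a factor of about 10,000:
loss = relative_radiation_loss(0.3)  # ~1e-4
```

This factor-of-10,000 suppression is why baffle-softened x-rays allow quasi-isentropic compression of low-density fusion fuel with negligible radiative cooling.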
In fact (see Lawrence Livermore National Laboratory nuclear warhead designer Nuckolls' explanation in report UCRL-74345): "The rates of burn, energy deposition by charged reaction products, and electron-ion heating are proportional to the density, and the inertial confinement time is proportional to the radius. ... The burn efficiency is proportional to the product of the burn rate and the inertial confinement time ...", i.e. the fusion burn rate is directly proportional to the fuel density, which (for a fixed fuel mass) is of course inversely proportional to the cube of its radius. But the inertial confinement time for fusion to occur is proportional to the radius, so the fusion stage efficiency in a nuclear weapon is the product of the burn rate (i.e., 1/radius^3) and time (i.e., radius), so efficiency ~ radius/(radius^3) ~ 1/radius^2. Therefore, for a given fuel temperature, the total fusion burn, i.e. the efficiency of the fusion stage, is inversely proportional to the square of the compressed radius of the fuel! (Those condemning Teller's theoretical errors or "arrogance" should be aware that he pushed hard all the time for experimental nuclear tests of his ideas, to check whether they were correct - exactly the right thing to do scientifically, and others who read his papers had the opportunity to point out any theoretical errors - but he was rebuffed by those in power, who used a series of contrived arguments to deny progress, based upon what Harry would call "subconscious bias", if not arrogant, damning, overt bigotry against the kind of credible, overwhelming deterrence which had proved lacking a decade earlier, leading to WWII.
This callousness towards human suffering in war and under dictatorship existed in some UK physicists too: Joseph Rotblat's hatred of anything that deterred Russia, be it Western civil defense or tactical neutron bombs - he had no problem smiling and patting Russia's neutron bomb when visiting their labs during cosy, groupthink-deluded Pugwash campaigns for Russian-style "peaceful collaboration" - came from deep family communist convictions, since his brother was serving in the Red Army in 1944 when Rotblat alleged he heard General Groves declare that the bomb must deter Russia! Rotblat stated he left Los Alamos as a result. The actions of these groups are analogous to those of the "Cambridge Scientists Anti-War Group" in the 1930s. After Truman ordered an H-bomb, Bradbury at Los Alamos had to start a "Family Committee" because Teller had a whole "family" of H-bomb designs, ranging from the biggest, "Daddy", through various "Alarm Clocks", all the way down to small internally-boosted fission tactical weapons. From Teller's perspective, he wasn't putting all his eggs in one basket.)
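Nuckolls' burn-efficiency scaling (UCRL-74345), quoted above, can be checked in a few lines of arithmetic (a toy model at fixed fuel mass and temperature; fusion_efficiency is an invented illustrative name and the constant k is arbitrary, not a real burn code):

```python
def fusion_efficiency(radius, k=1.0):
    """Relative fusion burn efficiency vs compressed fuel radius, for a fixed
    fuel mass: burn rate ~ density ~ 1/r^3, confinement time ~ r, so
    efficiency ~ (1/r^3) * r = 1/r^2."""
    burn_rate = k / radius ** 3      # proportional to fuel density
    confinement_time = radius        # inertial confinement time ~ radius
    return burn_rate * confinement_time

# Halving the compressed radius (an 8-fold density increase) quadruples
# the burn efficiency:
print(fusion_efficiency(0.5) / fusion_efficiency(1.0))  # 4.0
```

This inverse-square dependence on compressed radius is precisely why Teller's dismissal of compression was so costly: compression buys burn efficiency directly.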
There is more to Fuchs' influence on the UK H-bomb than I go into in that paper; Chapman Pincher alleged that Fuchs was treated with special leniency at his trial, and was later given early release in 1959, because of his contributions and help with the UK H-bomb as author of the key Fuchs-von Neumann x-ray compression mechanism patent. For example, Penney visited Fuchs in Stafford Prison in June 1952; see pp309-310 of Frank Close's 2019 book "Trinity". Close argues that Fuchs gave Penney a vital tutorial on the H-bomb mechanism during that prison visit. That wasn't the last help, either, since the UK Controller for Atomic Energy, Sir Freddie Morgan, wrote to Penney on 9 February 1953 that Fuchs was continuing to help. Another gem: Close gives, on p396, the story of how the FBI became suspicious of Edward Teller after finding a man of that name teaching at the NY Communist Workers School in 1941 - the wrong Edward Teller, of course - yet Teller's wife was indeed a member of the Communist-front "League of Women Shoppers" in Washington, DC.
Chapman Pincher, who attended the Fuchs trial, writes about Fuchs hydrogen bomb lectures to prisoners in chapter 19 of his 2014 autobiography, Dangerous to know (Biteback, London, pp217-8): "... Donald Hume ... in prison had become a close friend of Fuchs ... Hume had repaid Fuchs' friendship by organising the smuggling in of new scientific books ... Hume had a mass of notes ... I secured Fuchs's copious notes for a course of 17 lectures ... including how the H-bomb works, which he had given to his fellow prisoners ... My editor agreed to buy Hume's story so long as we could keep the papers as proof of its authenticity ... Fuchs was soon due for release ..."
Chapman Pincher wrote about this as the front page exclusive of the 11 June 1952 Daily Express, "Fuchs: New Sensation", the very month Penney visited Fuchs in prison to receive his H-bomb tutorial! UK media insisted this was evidence that UK security still wasn't really serious about deterring further nuclear spies, and the revelations finally culminated in the allegations that the MI5 chief 1956-65 Roger Hollis was a Russian fellow-traveller (Hollis was descended from Peter the Great, according to his elder brother Chris Hollis' 1958 book Along the Road to Frome) and GRU agent of influence, codenamed "Elli". Pincher's 2014 book, written aged 100, explains that former MI5 agent Peter Wright suspected Hollis was Elli after evidence collected by MI6 agent Stephen de Mowbray was reported to the Cabinet Secretary. Hollis is alleged to have deliberately fiddled his report of interviewing GRU defector Igor Gouzenko on 21 November 1945 in Canada. Gouzenko had exposed the spy and Groucho Marx lookalike Dr Alan Nunn May (photo below), and also a GRU spy in MI5 codenamed Elli, who used only duboks (dead letter boxes), but Gouzenko told Pincher that when Hollis interviewed him in 1945 he wrote up a lengthy false report claiming to discredit many statements by Gouzenko: "I could not understand how Hollis had written so much when he had asked me so little. The report was full of nonsense and lies. As [MI5 agent Patrick] Stewart read the report to me [during the 1972 investigation of Hollis], it became clear that it had been faked to destroy my credibility so that my information about the spy in MI5 called Elli could be ignored. I suspect that Hollis was Elli." (Source: Pincher, 2014, p320.) Christopher Andrew claimed Hollis couldn't have been GRU spy Elli because KGB defector Oleg Gordievsky suggested it was the KGB spy Leo Long (sub-agent of KGB spy Anthony Blunt). However, Gouzenko was GRU, not KGB like Long and Gordievsky! 
Gordievsky's claim that "Elli" was on the cover of Long's KGB file was debunked by KGB officer Oleg Tsarev, who found that Long's codename was actually Ralph! Another declassified Russian document, from General V. Merkulov to Stalin dated 24 Nov 1945, confirmed Elli was a GRU agent inside British intelligence, whose existence was betrayed by Gouzenko. In Chapter 30 of Dangerous to Know, Pincher related how he was given a Russian suitcase-sized microfilm enlarger by Michael J. Butt, a doorman for secret communist meetings in London who in 1959 witnessed Hollis spying. According to Butt, Hollis delivered documents to Brigitte Kuczynski, younger sister of Klaus Fuchs' original handler, the notorious Sonia, aka Ursula. Hollis allegedly passed Minox films to Brigitte discreetly when walking through Hyde Park at 8pm after work. Brigitte gave her Russian-made Minox film enlarger to Butt to dispose of, but he kept it in his loft as evidence. (Pincher later donated it to King's College.) Other, more circumstantial, evidence is that Hollis recruited the spy Philby, Hollis secured the spy Blunt immunity from prosecution, Hollis cleared Fuchs in 1943, and MI5 allegedly destroyed Hollis' 1945 interrogation report on Gouzenko, to prevent the airing of the scandal that it was fake, after checking it with Gouzenko in 1972.
It should be noted that the very small number of Russian GRU illegal agents in the UK, and the very small communist party membership, had a relatively large influence on nuclear policy via infiltration of unions which had block votes in the Labour Party, as well as via the indirect CND and "peace movement" lobbies saturating the popular press with anti-civil defence propaganda to make the nuclear deterrent totally incredible for any provocation short of a direct all-out countervalue attack. Under such pressure, UK Prime Minister Harold Wilson's government abolished the UK Civil Defence Corps in March 1968, making the UK nuclear deterrent totally incredible against major provocations. While there was some opposition to Wilson, it was focussed on his profligate nationalisation policies, which were undermining the economy and thus destabilizing military expenditure for national security. Peter Wright's 1987 book Spycatcher and various other sources, including Daily Mirror editor Hugh Cudlipp's book Walking on Water, documented that on 8 May 1968 the Bank of England's director Cecil King (who was also Chairman of Daily Mirror newspapers), Mirror editor Cudlipp, and the UK Ministry of Defence's anti-nuclear Chief Scientific Adviser Sir Solly Zuckerman met at Lord Mountbatten's house in Kinnerton Street, London, to discuss a coup d'état to overthrow Wilson and make Mountbatten the UK President, a new position. King's position, according to Cudlipp - quite correct, as revealed by the UK economic crises of the 1970s when the UK was effectively bankrupt - was that Wilson was setting the UK on the road to financial ruin and thus military decay. Zuckerman and Mountbatten refused to take part in a revolution; however, Wilson's government was attacked by the Daily Mirror in a front page editorial by Cecil King two days later, on 10 May 1968, headlined "Enough is enough ... Mr Wilson and his Government have lost all credibility, all authority."
According to Wilson's secretary Lady Falkender, Wilson was only told of the coup discussions in March 1976.
CND and the UK communist party alternately made two contradictory claims: (a) that they were too small in numbers to have any influence on politics, and (b) that they were leading the country towards utopia via unilateral nuclear disarmament saturation propaganda about nuclear weapons annihilation (totally ignoring essential data on different nuclear weapon designs, yields, heights of burst, the "use" of a weapon as a deterrent to PREVENT an invasion of concentrated force, etc.) via the infiltrated BBC and most other media. Critics pointed out that Nazi Party membership in Germany was only 5% when Hitler became dictator in 1933, while in Russia there were only 200,000 Bolsheviks in September 1917, out of 125 million people, i.e. 0.16%. The whole threat of such dictatorships is therefore a minority seizing power beyond its justifiable numbers, and controlling a majority which has different views. Traditional democracy itself is a dictatorship of the majority (via the ballot box, a popularity contest); minority-dictatorship, by contrast, is dictatorship by a fanatically motivated minority using force and fear (coercion) to control the majority. The coercion tactics used by foreign dictators to control the press in free countries are well documented, but never publicised widely. Hitler put pressure on Nazi-critics in the UK "free press" via UK Government appeasers Halifax, Chamberlain and particularly the loathsome UK ambassador to Nazi Germany, Sir Nevile Henderson: for example, trying to censor or ridicule the appeasement critic David Low, to get Captain W. E. Johns fired (as editor of both Flying and Popular Flying, which had huge circulations and attacked appeasement - a policy intended to cut rearmament expenditure - as a threat to national security), and to get Winston Churchill deselected. These were all sneaky "back door" pressure-on-publishers tactics, dressed up as efforts to "ease international tensions"!
The same occurred during the Cold War, with personal attacks in Scientific American and Bulletin of the Atomic Scientists and by fellow travellers on Herman Kahn, Eugene Wigner, and others who warned we need civil defence to make a deterrent of large provocations credible in the eyes of an aggressor.
Chapman Pincher summarises the vast hypocritical Russian expenditure on anti-Western propaganda against the neutron bomb in Chapter 15, "The Neutron Bomb Offensive", of his 1985 book The Secret Offensive: "Such a device ... carries three major advantages over Hiroshima-type weapons, particularly for civilians caught up in a battle ... against the massed tanks which the Soviet Union would undoubtedly use ... by exploding these warheads some 100 feet or so above the massed tanks, the blast and fire ... would be greatly reduced ... the neutron weapon produces little radioactive fall-out so the long-term danger to civilians would be very much lower ... the weapon was of no value for attacking cities and the avoidance of damage to property can hardly be rated as of interest only to 'capitalists' ... As so often happens, the constant repetition of the lie had its effects on the gullible ... In August 1977, the [Russian] World Peace Council ... declared an international 'Week of action' against the neutron bomb. ... Under this propaganda Carter delayed his decision, in September ... a Sunday service being attended by Carter and his family on 16 October 1977 was disrupted by American demonstrators shouting slogans against the neutron bomb [see the 17 October 1977 Washington Post] ... Lawrence Eagleburger, when US Under Secretary of State for Political Affairs, remarked, 'We consider it probable that the Soviet campaign against the neutron bomb cost some $100 million'. ... Even the Politburo must have been surprised at the size of what it could regard as a Fifth Column in almost every country." [Unfortunately, Pincher himself had contributed to the anti-nuclear nonsense in his 1965 novel "Not with a bang", in which small amounts of radioactivity from nuclear fallout combine with medicine to exterminate humanity! The allure of anti-nuclear propaganda extends to all who wish to sell "doomsday fiction", not just Russian dictators but mainstream media storytellers in the West.
By contrast, Glasstone and Dolan's 1977 Effects of Nuclear Weapons doesn't even mention the neutron bomb, so there was no scientific and technical effort whatsoever by the West to make it a credible deterrent even in the minds of the public it had to protect from WWIII!]
So why on earth doesn't the West take the cheap efficient option of cutting expensive oralloy and maximising cheap natural (mostly lithium-7) LiD in the secondary? Even Glasstone's 1957 Effects of Nuclear Weapons on p17 (para 1.55) states that "Weight for weight ... fusion of deuterium nuclei would produce nearly 3 times as much energy as the fission of uranium or plutonium"! The sad answer is "density"! Natural LiD (containing 7.42% Li6 abundance) is a low density white/grey crystalline solid like salt that actually floats on water (lithium deuteroxide would be formed on exposure to water), since its density is just 820 kg/m^3. Since the ratio of the molecular masses of Li6D and Li7D is 8/9, it would be expected that the density of highly enriched (95%) Li6D is 739 kg/m^3, while for 36% enriched Li6D it is 793 kg/m^3. Uranium metal has a density of 19,000 kg/m^3, i.e. 25.7 times greater than 95% enriched Li6D, or 24 times greater than 36% enriched Li6D. Compactness, i.e. volume, is more important in a Western MIRV warhead than mass/weight! In the West, it's best to have a tiny-volume, very heavy, very expensive warhead. In Russia, cheapness outweighs volume considerations. The Russians in some cases simply allowed their more bulky warheads to protrude from the missile bus (see photo below), or compensated for lower yields at the same volume using clean LiD by using the savings in costs to build more warheads. (The West doubles the fission yield/mass ratio of some warheads by using U235/oralloy pushers in place of U238, which suffers from the problem that about half the neutrons it interacts with result in non-fission capture, as explained below.
Note that the 720 kiloton UK nuclear test Orange Herald device contained a hollow shell of 117 kg of U235 surrounded by what Lorna Arnold's book quotes John Corner as calling a "very thin" layer of high explosive; it was compact and unboosted - the boosting failed to work - and gave 6.2 kt/kg of U235, whereas the first version of the 2-stage W47 Polaris warhead contained 60 kg of U235 which produced most of the secondary stage yield of about 400 kt, i.e. 6.7 kt/kg of U235. Little difference - but because perhaps 50% of the total yield of the W47 was fusion, its efficiency of use of U235 must actually have been less than that of the Orange Herald device, around 3 kt/kg of U235, which indicates design efficiency limits to "hydrogen bombs"! Yet anti-nuclear charlatans claimed that the Orange Herald bomb was a con!)
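The density and yield-per-kilogram figures above are easy to reproduce (a sketch assuming enrichment leaves the LiD crystal lattice essentially unchanged, so density scales with the mean molecular mass; Li6D = 8 amu, Li7D = 9 amu):

```python
RHO_NATURAL_LID = 820.0   # kg/m^3; natural LiD is 7.42% Li-6
RHO_URANIUM = 19000.0     # kg/m^3, uranium metal

def lid_density(li6_fraction):
    """Estimated LiD density at a given Li-6 enrichment fraction, scaling
    the natural LiD density by the change in mean molecular mass."""
    mean_mass = 8.0 * li6_fraction + 9.0 * (1.0 - li6_fraction)
    natural_mean_mass = 8.0 * 0.0742 + 9.0 * (1.0 - 0.0742)
    return RHO_NATURAL_LID * mean_mass / natural_mean_mass

rho_95 = lid_density(0.95)              # ~739 kg/m^3
rho_36 = lid_density(0.36)              # ~794 kg/m^3
uranium_to_95 = RHO_URANIUM / rho_95    # ~25.7
uranium_to_36 = RHO_URANIUM / rho_36    # ~24

# U235 yield efficiencies quoted above:
orange_herald = 720.0 / 117.0   # ~6.2 kt per kg of U235
w47_total = 400.0 / 60.0        # ~6.7 kt per kg of U235 (total yield)
```

Running these few lines confirms the 739 and 793 kg/m^3 density estimates and the roughly 25.7-fold and 24-fold density advantages of uranium metal quoted above.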
ABOVE: USA nuclear weapons data declassified by the UK Government in 2010 (the information was originally acquired under the 1958 UK-USA Act for Cooperation on the Uses of Atomic Energy for Mutual Defense Purposes, in exchange for UK nuclear weapons data), as published at http://nuclear-weapons.info/images/tna-ab16-4675p63.jpg. This single table summarizes all key tactical and strategic nuclear weapons secret results from 1950s testing! (In order to analyze the warhead pusher thicknesses and very basic schematics from this table it is necessary to supplement it with the 1950s warhead design data declassified in other documents, particularly some of the data from Tom Ramos and Chuck Hansen, as quoted in some detail below.) The data on the mass of special nuclear materials in each of the different weapons argue strongly that the entire load of Pu239 and U235 in the 1.1 megaton B28 was in the primary stage, so that weapon could not have had a fissile spark plug in the centre, let alone a fissile ablator (unlike Teller's Sausage design of 1951), and so it appears the B28 had no need whatsoever of a beryllium neutron radiation shield to prevent pre-initiation of the secondary stage prior to its compression (on the contrary, such neutron exposure of the lithium deuteride in the secondary stage would be VITAL to produce some tritium in it prior to compression, to spark fusion when it was compressed). Arnold's book indeed explains that UK AWE physicists found the B28 to be an excellent, highly optimised, cheap design, unlike the later W47, which was extremely costly. The masses of U235 and Li6 in the W47 show the difficulties of trying to maintain efficiency while scaling down the mass of a two-stage warhead for SLBM delivery: much larger quantities of Li6 and U235 must be used to achieve a LOWER yield! To achieve thermonuclear warheads of low mass at sub-megaton yields, both the outer bomb casing and the pusher around the fusion fuel must be reduced:
"York ... studied the Los Alamos tests in Castle and noted most of the weight in thermonuclear devices was in their massive cases. Get rid of the case .... On June 12, 1953, York had presented a novel concept ... It radically altered the way radiative transport was used to ignite a secondary - and his concept did not require a weighty case ... they had taken the Teller-Ulam concept and turned it on its head ... the collapse time for the new device - that is, the amount of time it took for an atomic blast to compress the secondary - was favorable compared to older ones tested in Castle. Brown ... gave a female name to the new device, calling it the Linda." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp137-8. (So if you reduce the outer casing thickness to reduce warhead weight, you must complete the pusher ablation/compression faster, before the thinner outer casing is blown off, and stops reflecting/channelling x-rays on the secondary stage. Making the radiation channel smaller and ablative pusher thinner helps to speed up the process. Because the ablative pusher is thinner, there is relatively less blown-off debris to block the narrower radiation channel before the burn ends.)
"Brown's third warhead, the Flute, brought the Linda concept down to a smaller size. The Linda had done away with a lot of material in a standard thermonuclear warhead. Now the Flute tested how well designers could take the Linda's conceptual design to substantially reduce not only the weight but also the size of a thermonuclear warhead. ... The Flute's small size - it was the smallest thermonuclear device yet tested - became an incentive to improve codes. Characteristics marginally important in a larger device were now crucially important. For instance, the reduced size of the Flute's radiation channel could cause it to close early [with ablation blow-off debris], which would prematurely shut off the radiation flow. The code had to accurately predict if such a disaster would occur before the device was even tested ... the calculations showed changes had to be made from the Linda's design for the Flute to perform correctly." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp153-4. Note that the piccolo (the W47 secondary) is a half-sized flute, so it appears that the W47's secondary stage design miniaturization history was: Linda -> Flute -> Piccolo:
"A Division's third challenge was a small thermonuclear warhead for Polaris [the nuclear SLBM submarine that preceded today's Trident system]. The starting point was the Flute, that revolutionary secondary that had performed so well the previous year. Its successor was called the Piccolo. For Plumbbob [Nevada, 1957], the design team tested three variations of the Piccolo as a parameter test. One of the variants outperformed the others ... which set the stage for the Hardtack [Nevada and Pacific, 1958] tests. Three additional variations for the Piccolo ... were tested then, and again an optimum candidate was selected. ... Human intuition as well as computer calculations played crucial roles ... Finally, a revolutionary device was completed and tested ... the Navy now had a viable warhead for its Polaris missile. From the time Brown gave Haussmann the assignment to develop this secondary until the time they tested the device in the Pacific, only 90 days had passed. As a parallel to the Robin atomic device, this secondary for Polaris laid the foundation for modern thermonuclear weapons in the United States." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp177-8. (Ramos is very useful in explaining that many of the 1950s weapons with complex non-spherical, non-cylindrical shaped primaries and secondaries were simply far too complex to fully simulate on the really pathetic computers they had - Livermore got a 4,000 vacuum tubes-based IBM 701 with 2 kB memory in 1956, and AWRE Aldermaston in the UK had to wait another year for theirs - so they instead did huge numbers of experimental explosive tests.
For instance, on p173, Ramos discloses that the Swan primary which developed into the 155mm tactical shell, "went through over 100 hydrotests", non-nuclear tests in which fissile material is replaced with U238 or other substitutes, and the implosion is filmed with flash x-ray camera systems.)
"An integral feature of the W47, from the very start of the program, was the use of an enriched uranium-235 pusher around the cylindrical secondary." - Chuck Hansen, Swords 2.0, p. VI-375 (Hansen's source is his own notes taken during a 19-21 February 1992 nuclear weapons history conference he attended; if you remember the context, "Nuclear Glasnost" became fashionable after the Cold War ended, enabling Hansen to acquire almost unredacted historical materials for a few years, until nuclear proliferation became a concern in Iraq, Afghanistan, Iran and North Korea). The key test of the original (Robin primary and Piccolo secondary) Livermore W47 was the 412 kt Hardtack-Redwood shot on 28 June 1958. Since Li6D utilized at 100% efficiency would yield 66 kt/kg, the W47 fusion efficiency was only about 6%; since 100% fission of U235 yields 17 kt/kg, the W47's Piccolo fission (the U235 pusher) efficiency was about 20%; the comparable figures for secondary stage fission and fusion fuel burn efficiencies in the heavy B28 are about 7% and 15%, respectively:
ABOVE: the heavy B28 gave a very "big bang for the buck": it was cheap in terms of expensive Pu, U235 and Li6, and this was the sort of deterrent wanted by General LeMay for the USAF, which wanted as many weapons as possible within the context of Eisenhower's budgetary concerns. But its weight (not its physical size) made it unsuitable for SLBM Polaris warheads. The first SLBM warhead, the W47, was almost the same size as the B28 weapon package, but much lighter due to having a much thinner "pusher" on the secondary, and a thinner casing. But this came at a large financial cost in terms of the quantities of special nuclear materials required to get such a lightweight design to work, and also a large loss of total yield. The fusion fuel burn efficiency ranges from 6% for the 400 kt W47 to 15% for the 1.1 megaton B28 (note that for the very heavily cased 11-15 megaton yield tests at Castle, up to 40% fusion fuel burn efficiency was achieved), whereas the secondary stage ablative pusher fission efficiency ranged from 7% for a 1.1 inch thick natural uranium (99.3% U238) ablator to 20% for a 0.15 inch thick highly enriched oralloy (U235) ablator. From the brief description of the design evolution given by Dr Tom Ramos (Lawrence Livermore National Laboratory), it appears that when the x-ray channelling outer case thickness of the weapon is reduced to save weight, the duration of the x-ray coupling is reduced, so the dense metal pusher thickness must be reduced if the same compression factor (approximately 20) for the secondary stage is to be accomplished (lithium deuteride, being of low density, is far more compressible by a given pressure than dense metal). In both examples, the secondary stage is physically a boosted fission stage.
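The specific-yield constants used in these burn efficiency estimates (about 66 kt/kg for completely burned Li6D and 17 kt/kg for completely fissioned U235) can be roughly reproduced from reaction energetics. A sketch, assuming round figures of 22.4 MeV per Li6D unit (Li6 + n -> T + He4 at 4.8 MeV, followed by D + T -> He4 + n at 17.6 MeV) and about 180 MeV usefully deposited per U235 fission:

```python
MEV_TO_JOULES = 1.602e-13      # joules per MeV
AVOGADRO = 6.022e23            # particles per mole
JOULES_PER_KILOTON = 4.184e12  # TNT equivalence

def kt_per_kg(mev_per_reaction, amu_per_reaction):
    """Specific yield at 100% burn, in kilotons per kilogram of fuel."""
    reactions_per_kg = AVOGADRO * 1000.0 / amu_per_reaction
    joules_per_kg = mev_per_reaction * MEV_TO_JOULES * reactions_per_kg
    return joules_per_kg / JOULES_PER_KILOTON

lid_yield = kt_per_kg(22.4, 8.0)      # ~64 kt/kg, close to the 66 used above
u235_yield = kt_per_kg(180.0, 235.0)  # ~17.7 kt/kg, the 17 used above
```

Dividing an actual stage yield by (fuel mass x specific yield) then gives the burn efficiencies quoted in the text.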
(If you are wondering why the hell the designers don't simply use a hollow core U235 bomb like Orange Herald instead of bothering with such inefficient x-ray coupled two-stage designs as these, the answer is straightforward: the risk of the large fissile core being melted down by neutrons from Moscow's ABM defensive nuclear warheads, i.e. neutron bombs.)
The overall weight of the W47 was minimized by replacing the usual thick layer of U238 pusher with a very thin layer of fissile U235 (supposedly Teller's suggestion), which is more efficient for fission, but is limited by critical mass issues. The W47 used a 95% enriched Li6D cylinder with a 3.8mm thick U235 pusher; the B28 secondary was 36% enriched Li6D, with a very heavy 3cm thick U238 pusher. As shown below, it appears the B28 was related to the Los Alamos clean design of the TX21C, tested as the 95% clean 4.5 megaton Redwing-Navajo in 1956, and did not have a central fissile spark plug. From the declassified fallout composition, it is known the Los Alamos designers replaced the outer U238 pusher of Castle secondaries with lead in Navajo. Livermore did the same for their 85% clean 3.53 megaton Redwing-Zuni test, but Livermore kept the central fission spark plug, which contributed 10% of Zuni's 15% fission yield, instead of removing the neutron shield and using foam channel filler to slow down the x-ray compression, thereby letting primary stage neutrons split lithium-6 into tritium prior to compression. Our point is that Los Alamos got it wrong in sticking too conservatively to ideology: for clean weapons they should have got rid of the dense lead pusher and gone for John H. Nuckolls' idea (also used by Fuchs in 1946 and the Russians in 1955 and 1958) of a low-density pusher for isentropic compression of low-density fusion fuel. This error is the reason why those early cleaner weapons were extremely heavy, due to unnecessary 2" thick lead or tungsten pushers around the fusion fuel, which massively reduced their yield-to-weight ratios, so that LeMay rejected them!
This is justified by the data given for a total U238 capture-to-fission ratio of 1 in the 11 megaton Romeo test, and also the cross-sections for U235 capture and fission on the AWE graph for the relevant neutron energy range of about 1-14 MeV. If half the neutrons are captured in U238 without fission, then the maximum fission yield you can possibly get from "x" kg of U238 pusher is HALF the energy obtained from 100% fission of "x" kg of U238. Since with U238 only about half the atoms can undergo fission by thermonuclear neutrons (because the other half undergo non-fission capture), the energy density (i.e., the Joules/kg produced by the fission explosion of the pusher) reached by an exploding U238 pusher is only half that reached by U235 (in which there is less non-fission capture of neutrons, which doubles the pusher mass without doubling the fission energy release). So a U235 pusher will reach twice the temperature of a U238 pusher, doubling its material heating of the fusion fuel within, prolonging the fusion burn and thus increasing fusion burn efficiency. The 10 MeV neutron energy is important since it allows for the likely average scattering of 14.1 MeV D+T fusion neutrons, and it is also the energy at which the most important capture reaction, the (n,2n) reaction, peaks in cross-section for both U235 (peak of 0.88 barn at 10 MeV) and U238 (peak of 1.4 barns at 10 MeV). For 10 MeV neutrons, U235 and U238 have fission cross-sections of 1.8 and 1 barn, respectively. For 14 MeV neutrons, U238 has an (n,2n) cross-section of 0.97 barn for U237 production. So ignoring non-fission captures, you need 1.8/1 = 1.8 times greater thickness of pusher for U238 than for U235, to achieve the same amount of fission. But this simple consideration ignores the x-ray ablation requirement of the exploding pusher, so there are several factors requiring detailed computer calculations, and/or nuclear testing.
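The arithmetic of that argument can be sketched in a few lines, using only the figures quoted above. This is a back-of-envelope illustration only, not a design model: as noted, it ignores x-ray ablation, geometry and neutron spectrum effects.

```python
# Back-of-envelope U235 vs U238 pusher comparison, using only the
# cross-sections quoted in the text for ~10 MeV neutrons. A sketch of
# the reasoning, NOT a weapon model: ablation, geometry and spectrum
# effects are ignored, as the text itself cautions.

sigma_fission = {"U235": 1.8, "U238": 1.0}  # fission cross-sections, barns

# Per the text, the total capture-to-fission ratio in U238 is about 1
# (Romeo test data): roughly half of all neutron absorptions in U238
# give no fission energy. U235 is idealized here as all-fission.
capture_to_fission = {"U235": 0.0, "U238": 1.0}

# Equal fission probability per incident neutron requires a thickness
# inversely proportional to the fission cross-section:
thickness_ratio = sigma_fission["U235"] / sigma_fission["U238"]
print(f"U238 pusher needs ~{thickness_ratio:.1f}x the U235 thickness")

# Relative energy density (J/kg): only the fissioning fraction of
# absorptions heats the pusher, so U238's non-fission captures halve
# its temperature rise relative to U235.
for isotope, c2f in capture_to_fission.items():
    fission_fraction = 1.0 / (1.0 + c2f)
    print(f"{isotope}: fissioning fraction of absorptions = {fission_fraction:.2f}")
```

This reproduces the two claims in the paragraph: the 1.8x thickness penalty for U238, and the factor-of-two energy density advantage of U235.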
Note: there is an extensive collection of declassified documents released after Chuck Hansen's final edition, Swords 2.0, which are now available at https://web.archive.org/web/*/http://www.nnsa.energy.gov/sites/default/files/nnsa/foiareadingroom/*, an internet-archive back-up of a now-removed US Government Freedom of Information Act Reading Room. Unfortunately the documents were identified only by number sequence, not by report title or content, in that reading room, and so failed to achieve wide attention when originally released! (This includes extensive "Family Committee" H-bomb documentation and many long-delayed FOIA requests submitted originally by Hansen, but not released in time for inclusion in Swords 2.0.) As the extract below - from declassified document RR00132 - shows, some declassified documents contained very detailed information, or typewriter-spaced deletions that could only be filled by a single specific secret word (in this example, details of the W48 linear implosion tactical nuclear warhead, including the fact that it used PBX9404 plastic bonded explosive glued to the brittle beryllium neutron reflector around the plutonium core using Adiprene L100 adhesive!).
ABOVE: Declassified data on the radiation flow analysis for the 10 megaton Mike sausage: http://nnsa.energy.gov/sites/default/files/nnsa/foiareadingroom/RR00198.pdf
Note that the simplistic "no-go theorem" given in this extract, against any effect from varying the temperature to help the radiation channelling, was later proved false by John H. Nuckolls (like Teller's anti-compression "no-go theorem" was later proved false), since lowered temperature delivers energy where it is needed while massively reducing radiation losses (which go as the fourth power of temperature/x-ray energy in kev).
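To put a number on that fourth-power point, a trivial sketch: since radiated power scales as T^4, even a modest reduction in channel temperature slashes radiation losses far faster than it reduces the energy delivered.

```python
# The "fourth power" point above: black-body radiation losses scale as
# T^4 (Stefan-Boltzmann law), so halving the radiation temperature cuts
# radiative losses by a factor of 16, and quartering it by 256.
for t_factor in (1.0, 0.5, 0.25):
    loss_factor = t_factor ** 4
    print(f"T scaled by {t_factor}: radiation losses scaled by {loss_factor}")
```

This is why Nuckolls' lower-temperature, isentropic approach delivers energy where it is needed while massively reducing losses, defeating the "no-go theorem" quoted in the extract.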
Russian propagandists are discussing the best way to scare the West - testing a nuclear Tsar Bomb or checking bomb shelters.
pic.twitter.com/qWCaxjvfM8
ABOVE: secret reports on Australian-British nuclear test operations at Maralinga in 1956 and 1957, Buffalo and Antler, proved that even at 10 psi peak overpressure in the 15 kt Buffalo-1 shot, the dummy lying prone facing the blast was hardly moved, due to the low cross-sectional area exposed to the blast winds, relative to standing dummies which were severely displaced and damaged. The value of trenches in protecting personnel against blast winds and radiation was also proved in tests (gamma radiation shielding by trenches had been proved at an earlier nuclear test in Australia, Operation Hurricane in 1952). (Antler report linked here; Buffalo report linked here.) This debunks the US Department of Defense models claiming that people will automatically be blown out of the upper floors of modern city buildings at very low pressures, and killed by the impact with the pavement below! In reality, tall buildings mutually shield one another from the blast winds, not to mention the radiation (proven in the latest post on this blog), and on seeing the flash most people will have time to lie down on typical surfaces like carpet, which give a frictional resistance to displacement that is ignored in fiddled models assuming surfaces have less friction than a skating rink; all of this was omitted from the American 1977 Glasstone and Dolan book "The Effects of Nuclear Weapons". As Tuck's paper below, on the gamma radiation dose rate measurements on ships at the July 1946 Operation Crossroads nuclear tests, proved - contrary to Glasstone and Dolan - scattered radiation contributions are small, so buildings or ships' gun turrets provided excellent radiation "shadows" to protect personnel.
This effect was then calculated by UK civil defence weapons effects expert Edward Leader-Williams in his paper presented at the UK's secret London Royal Society Symposium on the Physical Effects of Atomic Weapons, but the nuclear test data, as always, was excluded from the American Glasstone book published the next year, The Effects of Atomic Weapons, in deference to lies about the effects in Hiroshima, including an "average" casualty curve which deliberately obfuscated huge differences in survival rates in different types of buildings and shelters, or simply in shadows!
Note: the DELFIC, SIMFIC and other computer-predicted fallout area comparisons for the 110 kt Bikini Atoll Castle-Koon land surface burst nuclear test are false, since the distance scale of Bikini Atoll is massively exaggerated on many maps, e.g. in the Secret January 1955 AFSWP "Fall-out Symposium", the Castle fallout report WT-915, and the fallout patterns compendium DASA-1251! The western side of the Bikini Atoll reef is at 165.2 degrees East, while the most eastern island in the Bikini Atoll, Enyu, is at 165.567 degrees East. A nautical mile is by definition one minute of latitude, so a degree of longitude at Bikini's latitude (about 11.6 degrees North) spans roughly 59 nautical miles, and the width of Bikini Atoll is therefore about (165.567 - 165.2)(59) = 22 nautical miles, approximately half the distance shown in the Castle-Koon fallout patterns. Since area is proportional to the square of the distance scale, this constitutes a serious exaggeration in fallout casualty calculations, before you even get into the issue of the low energy (0.1-0.2 MeV) gamma rays from neutron induced Np239 and U237 in the fallout enhancing the protection factor of shelters (usually calculated assuming hard 1.17 and 1.33 MeV gamma rays from Co60), during the sheltering period of approximately 1-14 days after detonation.
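The distance-scale check is easy to reproduce. In this sketch the longitudes are the figures quoted in the note; the latitude value (about 11.6 degrees North for Bikini Atoll) is an assumption added here, needed because a degree of longitude shrinks with latitude:

```python
import math

# Reproduce the Bikini Atoll width check. Longitudes are from the text;
# the latitude (~11.6 N) is an assumption added here, since a nautical
# mile is one minute of LATITUDE, so a degree of LONGITUDE spans only
# 60*cos(latitude) nautical miles.
west_reef_lon = 165.2    # western side of the reef, degrees East
enyu_lon = 165.567       # Enyu island, degrees East
latitude_deg = 11.6      # approximate latitude of Bikini Atoll (assumed)

nm_per_deg_longitude = 60.0 * math.cos(math.radians(latitude_deg))
width_nm = (enyu_lon - west_reef_lon) * nm_per_deg_longitude
print(f"Atoll width: about {width_nm:.1f} nautical miles")  # ~22 nm

# Fallout AREA scales as the square of the distance scale, so a map
# drawn at roughly double this width exaggerates areas ~4-fold.
area_exaggeration_for_doubled_scale = 2.0 ** 2
```

Roughly 22 nautical miles, i.e. about half the scale shown in the Castle-Koon fallout maps, hence a roughly four-fold area exaggeration.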
"Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr Dulles calls 'brinkmanship'. This is a policy adopted from a sport ... called 'Chicken!' ... If one side is unwilling to risk global war, while the other side is willing to risk it, the side which is willing to run the risk will be victorious in all negotiations and will ultimately reduce the other side to complete impotence. 'Perhaps' - so the practical politician will argue - 'it might be ideally wise for the sane party to yield to the insane party in view of the dreadful nature of the alternative, but, whether wise or not, no proud nation will long acquiesce in such an ignominious role. We are, therefore, faced, quite inevitably, with the choice between brinkmanship and surrender.'" - Bertrand Russell, Common Sense and Nuclear Warfare, George Allen and Unwin, London, 1959, pp. 30-31.
Emphasis added. Note that Russell accepts lying about nuclear weapons, just as gas weapons had been lied about in the 1920s-30s by "arms controllers" in the run-up to WWII; he then simply falls into the 1930s Cambridge Scientists Anti-War Group delusional propaganda fraud of assuming that any attempt to credibly deter fascism is immoral because it will automatically result in escalatory retaliation, with Herman Goering's Luftwaffe drenching London with "overkill" by poison gas WMDs etc. In particular, he forgets that general disarmament pursued in the West until 1935 - when Baldwin suddenly announced that the Nazis had secretly produced a massive, unstoppable war machine in two years - encouraged aggressors to first secretly rearm, then coerce and invade their neighbours while signing peace promises purely to buy more time for rearmament, until a world war resulted. Not exactly a great result for disarmament propaganda. So, after obliterating what Reagan used to call (to the horror of commie "historians") the "true facts of history" from his mind, Russell advocates some compromise with the aggressors of the 30 September 1938 Munich Agreement "peace in our time" sort, the historically proved surefire way to really escalate a crisis into a major war by showing the green lamp to a loon, to popular media acclaim and applause for a fairy tale utopian fantasy; just as the "principled" weak, rushed, imbecile withdrawal from Afghanistan in 2021 encouraged Putin to invade Ukraine in 2022, and also showed the green lamp for Hamas to invade Israel in 2023.
"... deterrence ... consists of threatening the enemy with thermonuclear retaliation should he act provocatively. ... If war is 'impossible', how can one threaten a possible aggressor with war? ... The danger, evoked by numerous critics, that such research will result in a sort of resigned expectation of the holocaust, seems a weak argument ... The classic theory of Clausewitz defines absolute victory in terms of disarmament of the enemy ... Today ... it will suffice to take away his means of retaliation to hold him at your mercy." - Raymond Aron, Introduction to Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfeld and Nicolson, London, pp. 9-12. (This is what commie support for arms control and disarmament has achieved: precisely the weakening of the West to take away credible deterrence.)
"75 years ago, white slavery was rampant in England. ... it could not be talked about openly in Victorian England, moral standards as to the subjects of discussion made it difficult to arouse the community to necessary action. ... Victorian standards, besides perpetuating the white slave trade, intensified the damage ... Social inhibitions which reinforce natural tendencies to avoid thinking about unpleasant subjects are hardly uncommon. ... But when our reluctance to consider danger brings danger nearer, repression has gone too far. In 1960, I published a book that attempted to direct attention to the possibility of a thermonuclear war ... people are willing to argue that it is immoral to think and even more immoral to write in detail about having to fight ... like those ancient kings who punished messengers who brought them bad news. That did not change the news; it simply slowed up its delivery. On occasion it meant that the kings were ill informed and, lacking truth, made serious errors in judgement and strategy. ... We cannot wish them away. Nor should we overestimate and assume the worst is inevitable. This leads only to defeatism, inadequate preparations (because they seem useless), and pressures toward either preventative war or undue accommodation." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfeld and Nicolson, London, pp. 17-19. (In the footnote on page 35, Kahn notes that the original nuclear bullshitter, the 1950 creator of fake cobalt-60 doomsday bomb propaganda, Leo Szilard, was in the usual physics groupthink nutters club: "Szilard is probably being too respectful of his scientific colleagues who also seem to indulge in ad hominem arguments - especially when they are out of their technical specialty.")
"Ever since the catastrophic and disillusioning experience of 1914-18, war has been unthinkable to most people in the West ... In December 1938, only 3 months after Munich, Lloyd's of London gave odds of 32 to 1 that there would be no war in 1939. On August 7, 1939, the London Daily Express reported the result of a poll of its European reporters. 10 out of 12 said, 'No war this year'. Hitler invaded Poland 3 weeks later." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfeld and Nicolson, London, p. 39. (But as the invasion of Ukraine in 2022 proved, even the label "war" is now "controversial": the aggressor now simply declares they are on a special operation of unifying people under one flag to ensure peace! So the reason why there is war in Ukraine is that Ukraine is resisting. If it waved a white flag, as the entire arms control and disarmament lobby insists is the only sane response to a nuclear-armed aggressor, there would be "peace," albeit on Russia's terms: that's why the West disarmed Ukraine in 1994. "Peace propaganda" of "disarmers"! Free decent people prefer to fight tyranny. But as Kahn states on pp. 7-9:
"Some, most notably [CND's pseudo-historian of arms race lying] A. J. P. Taylor, have even said that Hitler was not like Hitler, that further appeasement [not an all-out arms race as was needed but repeatedly rejected by Baldwin and Chamberlain until far too late; see discussion of this fact which is still deliberately ignored or obfuscated by "historians" of the A. J. P. Taylor biased anti-deterrence left wing type, in Slessor's The Central Blue, quoted on this blog] would have prevented World War II ... If someone says to you, 'One of us has to be reasonable and it is not going to be me, so it has to be you', he has a very effective bargaining advantage, particularly if he is armed with thermonuclear bombs [and you have damn all civil defense, ABM, or credible tactical deterrent]. If he can convince you he is stark, staring mad and if he has enough destructive power ... deterrence alone will not work. You must then give in or accept the possibility of being annihilated ... in the first instance if we fight and lose; in the second if we capitulate without fighting. ... We could still resist by other means ranging from passive resistance of the Gandhi type to the use of underground fighting and sabotage. All of these alternatives might be of doubtful effectiveness against [the Gulag system, KGB/FSB torture camps or Siberian salt mines of] a ruthless dictatorship."
Sometimes people complain that Hitler and the most destructive and costly war and only nuclear war of history, WWII, is given undue attention. But WWII is a good analogy to the danger precisely because of the lying WMD gas war propaganda-based disarmament of the West which allowed the war, because of the attacks by Hitler's fans on civil defense in the West to make even the token rearmament after 1935 ineffective as a credible deterrent, and because Hitler has mirrors in Alexander the Great, Attila the Hun, Genghis Khan, Tamerlane, Napoleon and Stalin. Kahn explains on p. 173: "Because history has a way of being more imaginative and complex than even the most imaginative and intelligent analysts, historical examples often provide better scenarios than artificial ones, even though they may be no more directly applicable to current equipment, postures, and political situations than the fictional plot of the scenario. Recent history can be especially useful.")
"One type of war resulting at least partly from deliberate calculation could occur in the process of escalation. For example, suppose the Soviets attacked Europe, relying upon our fear of their reprisal to deter a strategic attack by us; we might be deterred enough to pause, but we might evacuate our cities during this pause in the hope we could thereby convince the Soviets we meant business. If the Soviets did not back down, but continued their attack upon Europe, we might decide that we would be less badly off if we proceeded ... The damage we would receive in return would then be considerably reduced, compared with what we would have suffered had we not evacuated. We might well decide at such a time that we would be better off to attack the Soviets and accept a retaliatory blow at our dispersed population, rather than let Europe be occupied, and so be forced to accept the penalty of living in the hostile and dangerous world that would follow." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfeld and Nicolson, London, pp. 51-2.
"We must recognise that the stability we want in a system is more than just stability against accidental war or even against an attack by the enemy. We also want stability against extreme provocation [e.g. invasion of allies, which then escalates as per invasion of Belgium 1914, or Poland 1939]." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfeld and Nicolson, London, p. 53 (footnote).
Note: this 1962 book should not be confused with Kahn's 1984 "updated" Thinking About the Unthinkable in the 1980s, which omits the best material in the 1962 edition (in the same way that the 1977 edition of The Effects of Nuclear Weapons omits the entire civil defense chapter, which was the one decent thing in the 1957 and 1962/4 editions!) and thus shows a reversion to the less readable and less helpful style of his 1960 On Thermonuclear War, which severely fragmented and jumbled up all the key arguments, making it easy for critics to misquote or quote out of context. For example, Kahn's 1984 "updated" book starts on the first page of the first chapter with the correct assertion that Jonathan Schell's Fate of the Earth is nonsense, but doesn't say why it's nonsense, and you have to read through to the final chapter - pages 207-8 of chapter 10 - to find Kahn writing in the most vague way possible, without a single specific example, that Schell is wrong because of "substantive inadequacies and inaccuracies", without listing a single example such as Schell's lying that the 1954 Bravo nuclear test blinded everyone well beyond the range of Rongelap, and that it was impossible to easily shield the radiation from the fallout or evacuate the area until it decays, which Schell falsely attributed to Glasstone and Dolan's nonsense in the 1977 Effects of Nuclear Weapons! Kahn eventually, in the footnote on page 208, refers readers to an out-of-print article for facts: "These criticisms are elaborated in my review of The Fate of the Earth, see 'Refusing to Think About the Unthinkable', Fortune, June 28, 1982, pp. 113-6."
Kahn does the same for civil defense in the 1984 book, referring in such general, imprecise and vague terms to Russian civil defence, with no specific data, that it is a waste of time, apart possibly from one half-baked sentence on page 177: "Variations in the total megatonnage, somewhat surprisingly, do not seem to affect the toll nearly as much as variations in the targeting or the type of weapon bursts." Kahn on page 71 quotes an exchange between himself and Senator Proxmire during the US Congressional Hearings of the Joint Committee on Defense Production, Civil Preparedness and Limited Nuclear War, where on page 55 of the hearings Senator Proxmire alleges America would escalate a limited conflict to an all-out war because: "The strategic value and military value of destroying cities in the Soviet Union would be very great." Kahn responded: "No American President is likely to do that, no matter what the provocation." Nuclear war will be limited, according to Herman Kahn's analysis, despite the bullshit from nutters to the contrary.
Kahn on page 101 of Thinking About the Unthinkable in the 1980s correctly and accurately condemns President Carter's 1979 State of the Union Address, which falsely claimed that just a single American nuclear submarine is all America requires, constituting an "overwhelming" deterrent against "every large and medium-sized city in the Soviet Union". Carter ignored Russian retaliation on cities if you bomb theirs: America has avoided the intense Russian protection efforts that make the Russian nuclear threat credible, namely civil defense shelters and evacuation plans, and also the realpolitik of deterrence of world wars, which so far have only been triggered by invasions of third parties (Belgium '14, Poland '39). Did America strategically nuke every city in Russia when Russia invaded Ukraine in 2022? No, debunking Proxmire and the entire Western pro-Russian "automatic escalation" propaganda lobby; and America didn't even have tactical neutron bombs to help deter the Russians like Reagan in the 1980s, because in the 1990s America had ignored Kahn's argument and went in for MINIMAL deterrence of the least credible sort (abolishing the invasion-deterring dedicated neutron tactical nuclear stockpile entirely; the following quotation is from p. 101 of Kahn's Thinking About the Unthinkable in the 1980s):
"Minimum deterrence, or any predicated on an excessive emphasis on the inevitability of mutual homicide, is both misleading and dangerous. ... MAD principles can promote provocation - e.g. Munich-type blackmail on an ally. Hitler, for example, did not threaten to attack France or England - only Austria, Czechoslovakia, and Poland. It was the French and the British who finally had to threaten all-out war [they could only do this after rearmament and building shelters and gas masks to reduce the risk of reprisals in city bombing, which gave more time for Germany to prepare since it was rearming faster than France and Britain, which still desperately counted on appeasement and peace treaties and feared provoking a war by an arms-race, due to endless lying propaganda from Lord Grey that his failure to deter war in 1914 had been due to an arms-race rather than the incompetence of the procrastination of his anti-war Liberal Party colleagues in the Cabinet] - a move they would not and could not have made if the notion of a balance of terror between themselves and Germany had been completely accepted. As it was, the British and French were most reluctant to go to war; from 1933 to 1939 Hitler exploited that reluctance. Both nations [France and Britain] were terrified by the so-called 'knockout blow', a German maneuver that would blanket their capitals with poison gas ... The paralyzing effect of this fear prevented them from going to war ... and gave the Germans the freedom to march into the Ruhr, to form the Anschluss with Austria, to force the humiliating Munich appeasement (with the justification of 'peace in our time'), and to take other aggressive actions [e.g. against the Jews in the Nuremberg Laws, Kristallnacht, etc.] ... If the USSR were sufficiently prepared in the event a war did occur, only the capitalists would be destroyed. The Soviets would survive ... that would more than justify whatever sacrifice and destruction had taken place.
"This view seems to prevail in the Soviet military and the Politburo even to the present day. It is almost certain, despite several public denials, that Soviet military preparations are based on war-fighting, rather than on deterrence-only concepts and doctrines..." - Herman Kahn, Thinking About the Unthinkable in the 1980s, 1984, pages 101-102.
Kahn adds, in his footnote on p. 111, that "Richard Betts has documented numerous historical cases in which attackers weakened their opponents' defenses through the employment of unanticipated tactics. These include: rapid changes in tactics per se, false alarms and fluctuating preparations for war ... doctrinal innovations to gain surprise. ... This is exactly the kind of thing which is likely to surprise those who subscribe to MAD theories. Those who see a need for war-fighting capabilities expect the other side to try to be creative and use tactical innovations such as coercion and blackmail, technological surprises, or clever tactics on 'leverage' targets, such as command and control installations. If he is to adhere to a total reliance on MAD, the MADvocate has to ignore these possibilities." See Richard Betts, "Surprise Despite Warning: Why Sudden Attacks Succeed", Political Science Quarterly, Winter 1980-81, pp. 551-572.)
Compare two situations: (1) Putin explodes a 50 megaton nuclear "test" of the warhead for his new nuclear reactor powered torpedo, Poseidon, a revamped 1961 Tsar Bomba, or detonates a high-altitude nuclear EMP "test" over neutral waters but within the thousands of miles range of USA or UK territory; (2) Putin invades Poland using purely conventional weapons. Our point here is that both nuclear AND conventional weapons trigger nuclear threats and the risk of nuclear escalation, as indeed they have done (for Putin's nuclear threats, scroll down to videos with translations below). So the fashionable CND-style concept that only nuclear weapons can trigger nuclear escalation is bullshit, and is designed to help Russia start and win WWIII to produce a world government, by getting us to undertake further unilateral (not multilateral) disarmament, just as evolved in the 1930s, setting the scene for WWII. Japan for example did not have nuclear weapons in August 1945, yet triggered tactical nuclear war (both cities had some military bases and munitions factories, as well as enormous numbers of civilians); and the decision to attack cities, rather than just "test" weapons above Tokyo bay as Teller demanded but Oppenheimer rejected (cities gave maximum impact with a very small supply of nuclear weapons), showed some strategic nuclear war thinking too. Truman was escalating to try to shock Japan into rapid surrender emotionally (many cities in Japan had already been burned out in conventional incendiary air raids, and the two nuclear attacks, while horrible for civilians in those cities, contributed only a fraction of the millions killed in WWII, despite anti-nuclear propaganda lies to the contrary).
Truman's approach of escalating to win is the opposite of the "Minimax game theory" (von Neumann's maths and Thomas Schelling's propaganda) gradual escalation approach that's currently the basis of nuclear deterrence planning, despite its failure wherever it has been tried (Vietnam, Afghanistan, etc). Gradual escalation is supposed to minimise the maximum possible risk (hence the "minimax" name), but it guarantees failure in the real world (unlike rule-bound games) by maximising the build-up of resentment. E.g. Schelling/Minimax say that if you gradually napalm civilians day after day (because they are the unprotected human shields used by terrorists/insurgents; the Vietcong were hiding in underground tunnels, exactly like Hamas today, and the Putin regime's Metro-2 shelter tunnels under Russia) you somehow "punish the enemy" (although they don't give a toss about the lives of kids, which is why you're fighting them!) and force them to negotiate for peace in good faith, so you can then pose for photos with them sharing a glass of champagne and there is "world peace". That's a popular fairy tale, like Marxist mythology.
Once you grasp this fact, that nuclear weapons have been and will again be "used" explosively without automatic escalation, for example provocative testing as per the 1961 Russian 50 megaton bomb test, or the 1962 high altitude EMP bursts, you should be able to grasp the fact that the "escalation" deception used to dismiss civil defense and tactical nuclear deterrence against limited nuclear war is fake news from Russian fellow-travellers like Corbyn. Once you assign a non-unity probability to "escalation", you're into conventional war territory: if you fight a conventional war, it can "escalate" to nuclear war as on 6 August 1945. Japan did not avoid nuclear attack by not having nuclear weapons on 6 August 1945. If it had had nuclear weapons ready to be delivered, a very persuasive argument could be made that unless Truman wanted to invite retaliation, World War II would have remained strategically non-nuclear: no net strategic advantage would have been achieved by nuclear city bombing, so only war-ending tactical nuclear threats could have prevailed in practice. But try explaining this to the groupthink pseudosocialist bigoted mass murderers who permeate fake physics with crap; it's no easier to explain to them the origins of particle masses or even dark energy/gravitation; in both cases groupthink lying hogwash persists because statements of proved facts are hated and rejected if they debunk religious-style fairy tales the mass media loves.
There were plenty of people warning that mass media gas war fear mongering was disguised Nazi-supporting propaganda in the 1930s, but the public listened to that crap then, just as it accepted without question the "eugenics" (anti-diversity evolution crap of Sir Francis Galton, cousin of Darwin) basis for Hitler's Mein Kampf, and just as it accepted the lying propaganda from the UK "Cambridge Scientists Anti-War Group", which, like CND and all other arms control and disarmament lobbies supporting terrorist states today, did more than even Hitler to deliberately lay the foundations for the Holocaust and World War II, while never being criticised in the UK media! Thus, it's surely time for people to oppose evil lying on civil defence, to save lives in all disasters from storms to conventional war, to collateral damage risks in nuclear terrorism by mad enemies. At some point, the majority has to decide to either defend itself honestly and decently against barbarism, or be consumed by it as a price for believing bullshit. It's time for decent people to oppose lying evil regarding the necessity to have credible tactical (not incredible strategic) nuclear weapons, as Oppenheimer called for in his 1951 speech, to deter invasions.
Democracy can't function when secrecy is used to deliberately cover up vital data from viewing by Joe Public. Secrecy doesn't protect you from enemies who independently develop weapons in secret, or who spy from inside your laboratories:
"The United States and Great Britain resumed testing in 1962, and we spared no effort trying to find out what they were up to. I attended several meetings on that subject. An episode related to those meetings comes to mind ... Once we were shown photographs of some documents ... the photographer had been rushed. Mixed in with the photocopies was a single, terribly crumpled original. I innocently asked why, and was told that it had been concealed in panties. Another time ... questions were asked along the following lines: What data about American weapons would be most useful for your work and for planning military technology in general?"
- Andrei Sakharov, Memoirs, Hutchinson, London, 1990, pp. 225-6.
Nuclear saber-rattling from Russian propagandists. They think tactical nuclear weapons aren't enough, and strategic ones should be used.
Review of Peter Kuran's excellent "Neutron Bomb Movie".
Below is a brief clip for review purposes from a longer newsreel of President Eisenhower, enthusiastically promoting the 96% clean fusion Poplar nuclear test (detonated 12 July 1958). On 30 October 1962, Kennedy tested… pic.twitter.com/y4QpR5eCum
More news of Russian TV population preparation for nuclear escalations, which the Western media and politicians continue to ignore as propaganda, just as Novichok and the Ukraine invasion prep was ignored as propaganda bluff, until it took us by "surprise". We need to prepare now https://t.co/tiFmJw0Htq
ABOVE: The British government has now declassified detailed summary reports giving secret original nuclear test data on the EMP (electromagnetic pulse) damage due to numerous nuclear weapons - data still being kept under wraps in America, and never superseded because Western atmospheric nuclear tests were stopped late in 1962 and never resumed (even though the Russians have even more extensive data). It completely debunks Glasstone and Dolan's disarmament propaganda nonsense in the 1962, 1964 and 1977 Effects of Nuclear Weapons, which ignores the EMP piped far away from low altitude nuclear tests by power and communications cables, and falsely claims instead that such detonations don't produce EMP damage outside the 2 psi blast radius! For a discussion of the new data and also a link to the full 200+ pages version (in addition to useful data, inevitably like all official reports it also contains a lot of "fluff" padding), please see the other (physics) site: https://nige.wordpress.com/2023/09/12/secret-emp-effects-of-american-nuclear-tests-finally-declassified-by-the-uk-and-at-uk-national-archives/ (by contrast, this "blogspot" uses old non-smartphone-proof coding, no longer properly indexed by "google's smartphone bot"). As long ago as 1984, Herman Kahn argued on page 112 of his book Thinking About the Unthinkable in the 1980s: "The effects of an EMP attack are simply not well understood [in the West, where long powerlines were never exposed on high altitude nuclear tests, unlike the Russians' 1962 Operation K, so MHD-EMP or E3 damage wasn't even mentioned in the 1977 Glasstone and Dolan Effects of Nuclear Weapons], but the Soviets seem to know - or think they know - more than we do."
ABOVE: Moscow Metro and Metro-2 (secret nuclear subway) horizontally swinging blast doors take only 70 seconds to shut, whereas their vertically rising blast doors take 160 seconds to shut; both times are however far shorter than the 15-30 minute flight times of Western ICBMs or even SLBMs, by which time the Russian shelters are sealed from blast and radiation! In times of nuclear crisis, Russia planned to evacuate from cities those who could not be sheltered, and for the remainder to be based in shelters (similarly to the WWII British situation, when people slept in shelters of one kind or another when there was a large risk of being bombed without notice, particularly in supersonic V2 missile attacks where little warning time was available).
NUKEGATE - Western tactical neutron bombs were disarmed after a Russian propaganda lie. Russia now has over 2000... "Disarmament and arms control" charlatans, quacks, cranks, liars, mass murdering Russian affiliates, and the evil genocidal Marxist media exposed for what they are - and what they were in the 1930s, when they enabled Hitler to murder tens of millions in war. Glasstone's and Dolan's 1977 Effects of Nuclear Weapons deceptions totally disproved. Professor Brian Martin, TRUTH TACTICS, 2021 (pp. 45-50): "In trying to learn from scientific publications, trust remains crucial. The role of trust is epitomised by Glasstone’s book The Effects of Atomic Weapons. Glasstone was not the author; he was the editor. The book is a compilation of information based on the work of numerous contributors. For me, the question was, should I trust this information? Was there some reason why the editors or authors would present fraudulent information, be subject to conflicts of interest or otherwise be biased? ... if anything, the authors would presumably want to overestimate rather than underestimate the dangers ... Of special interest would be anyone who disagreed with the data, calculations or findings in Glasstone. But I couldn’t find any criticisms. The Effects of Nuclear Weapons was treated as the definitive source, and other treatments were compatible with it. ... One potent influence is called confirmation bias, which is the tendency to look for information that supports current beliefs and dismiss or counter contrary information. The implication is that changing one’s views can be difficult due to mental commitments. To this can be added various forms of bias, interpersonal influences such as wanting to maintain relationships, overconfidence in one’s knowledge, desires to appear smart, not wanting to admit being mistaken, and career impacts of having particular beliefs. It is difficult to assess the role of these influences on yourself."
Physical understanding of the blast wave and cratering
(This post is being revised, corrected and updated as of 8 August 2009. Greek symbols for density, Pi, etc., will just appear as p in some browsers which do not support the character sets. The page displays correctly in Internet Explorer 7.)
ABOVE: peak overpressures in psi (pounds/sq. inch; 1 psi = 6.9 kilopascals, kPa) with distances scaled by the cube-root of yield to apply to a standard reference total yield of 1 kiloton. All tests shown are surface bursts, 1 kt to 14.8 Mt, which have an effective blast yield of about 1.68 times that of a free air burst (an air burst in sea level air well away from any solid reflective surface). Data are from WT-934 (1959), page 29, and have been scaled to 1 atmosphere ambient air pressure and 20 C ambient air temperature.
A shock wave is caused by the rapid release of either compressed fluid or energy, explosively heating and compressing fluid. A ‘blast wave’ is a shock wave in air: a compressed shock front accompanied by a blast of outward wind pressure. The shock front has an abrupt pressure rise because the air at the shock front is travelling into cold air, which reduces its speed, while the hot air inside the shock front moves out faster, catching up with it and converging into the wall of compressed air. Within this overpressure region at and behind the shock front, wind travels outward from the explosion, but within the inner low-pressure area, wind blows in the opposite direction, towards ground zero, allowing air to return into the partial vacuum in the middle. At any fixed location, the blast first blows outward during the overpressure phase, and then reverses and blows inwards at a lower speed but for a longer duration during the ‘suction’ phase. Overpressure, p, acts in all directions within the shock front and is defined as the excess pressure above the normal atmospheric pressure (which is on average 101 kPa or 14.7 pounds per square inch at sea level). Dynamic pressure, q, acts only in the direction of the outward or reversed blast winds accompanying the shock wave, and is the wind pressure, exactly equivalent to a gust of wind with the same velocity and duration.
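For an ideal shock front in sea-level air, the peak dynamic pressure follows directly from the peak overpressure via the Rankine-Hugoniot relation q = 5p^2/[2(7P0 + p)] for γ = 1.4, given by Glasstone and Dolan (eq. 3.55.1). A minimal Python sketch (illustrative, not from the source):

```python
def peak_dynamic_pressure(p, P0=101.0):
    """Peak dynamic pressure q (kPa) behind an ideal shock front with peak
    overpressure p (kPa) in sea-level air (P0 = 101 kPa, gamma = 1.4),
    using the Rankine-Hugoniot relation in Glasstone and Dolan, eq. 3.55.1."""
    return 2.5 * p**2 / (7.0 * P0 + p)

# At p = 101 kPa (one atmosphere of overpressure), q is only about 31.6 kPa:
# at moderate overpressures the wind pressure is much smaller than p, but
# because q grows as p^2 it overtakes p at very high overpressures.
print(round(peak_dynamic_pressure(101.0), 1))
```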
The blast wave must engulf, heat and compress all of the air that it encounters as a result of its supersonic spherically divergent expansion. Consequently, its energy is continuously being distributed over a larger mass of air that rapidly reduces the energy available per kilogram of air, so the overpressure drops rapidly. Some energy is lost in surface bursts in forming a crater and melting a thin layer of surface sand by conduction and radiation. Initially the shock front is also losing energy by the emission of thermal radiation to large distances. When the blast wave hits an object, the compressed shock front exerts an all-round crushing-type overpressure, while the outward blast wind contributes a hammer blow that adds to the overpressure, followed by a wind drag to roof materials, vehicles, and standing people. The total force exerted by the blast is equal to pressure multiplied by exposed surface area, but if the object is sufficiently rigid to actually stop and reflect the shock wave, then it collides with itself while being reflected, reducing its duration but increasing its peak pressure. P. H. Hugoniot in 1887 derived the basic equations governing the properties of a gaseous shock wave in a piston, relationships between density, pressure and velocity. Lord Kelvin later introduced the concept ‘impulse’ (the time-integrated pressure of a fluid disturbance), when he was working on vortex atom theory.
The peak pressure in the air blast wave has 4 contributions: the ambient pressure, the isothermal sphere, the shock front and the sonic wave. These are represented by terms including the factors P0, 1/R^3, 1/R^2, and 1/R, respectively, where P0 is the ambient (normal) air pressure at the altitude of interest and R is the distance from the explosion. The equation of state for air gives the base equation for the total pressure, P = (γ - 1)E/V, where γ = 1.4 is the ratio of specific heat capacities of air (at high temperatures it can drop to 1.2 owing to the vibration energy of molecules, while molecular dissociation into atoms increases it towards 1.67, the monatomic gas value; these two offsetting effects keep it near 1.4), E is the total blast energy and V is the blast wave volume. Dimensional analysis then gives a generalised summation which automatically includes all four of the separate blast wave terms already discussed:
P = Σ [(γ - 1)E/V]^{n/3} P0^{1 - (n/3)}, where the summation is over n = 0, 1, 2, and 3.
For a free air burst, V = (4/3)πR^3. For high altitude bursts, the air pressure at altitude H km is P0 = 101e^{-H/6.9} kPa; for sea level air, P0 = 101 kPa. Hence, for γ = 1.4, R in km, and blast yield X kilotons in sea level air, the peak overpressure, p = P - P0, is:
p = (16.0X^{1/3}/R) + (2.53X^{2/3}/R^2) + (0.400X/R^3) kPa
For direct comparison, the peak overpressure graph for American sea level free air bursts (DNA-EM-1, 1981, and The Effects of Nuclear Weapons, 1977, Fig. 3.72) implies:
p = (3.55W^{1/3}/R) + (2.00W^{2/3}/R^2) + (0.387W/R^3) kPa,
where W is the total weapon yield in kilotons. In deriving this formula, we produced fits to both the surface burst and free air burst curves, and averaged them to find an effective yield ratio of 1.68 for surface bursts to free air bursts (due to reflection by the ground in a surface burst, which results in a hemispherical blast with nearly double the energy density of a free air burst, with some energy loss due to surface interaction effects like melting the surface layer of sand into fused fallout particles, ground shock and cratering). This comparison of theory and measurement shows close agreement for the 1/R2 and 1/R3 high overpressure terms, where the exact blast yield fractions are 0.703 and 0.968, respectively. The fraction of the explosion energy in blast is highest at high overpressures where the shock front has not lost much energy by radiation or degradation; but for the weak or sonic blast wave (1/R term) the fraction is only 0.0109 owing to these losses. The American book, The Effects of Nuclear Weapons (1957-77 editions) gives a specific figure of 50% for the sea level blast yield, but this time-independent generalisation is a totally misleading fiction. It is obtained by the editors of the American book by subtracting the final thermal and nuclear radiation yields from 100%, neglecting blast energy that is dissipated with time for crater excavation, fallout particle melting, and the massive cloud formation. Initially, almost all of the internal energy of the fireball goes into the blast wave, but after the thermal radiation pulse, the blast or sonic wave eventually contains only 1.09% of the energy.
Note that the revised EM-1 manual and its summary by John A. Northrop, Handbook of Nuclear Weapons Effects (DSWA, 1996, p. 9), suggest a formula for free air bursts which differs from that given above. Northrop's compilation and Charles J. Bridgman's Introduction to the Physics of Nuclear Weapons Effects (DTRA, 2001, p. 285) give for a free air burst:
p = (0.304W/R^3) + (1.13W^{2/3}/R^2) + (1.00W^{1/3}/[RA]) kPa,
where W is the total weapon yield in kilotons, R is in km, and A = {ln[(R/445.52) + 3 exp(-(R/445.52)^{1/2}/3)]}^{1/2}. Bridgman gives a graph of peak overpressures (Fig. 7-6 on p. 285) showing 500 kPa peak overpressure at 100 m, 30 kPa at 400 m, and 8 kPa at 1 km from a 1 kt (total yield) nuclear free air burst. [1 psi = 6.9 kPa.] He also reproduces the curves for dynamic and overpressure positive phase duration, Mach stem height, etc., from chapter 2 of Dolan's DNA-EM-1.
where R is distance in feet (notice that 2640 feet is half a statute mile, 5280 feet). The numerical constants in this formula were only approximate in 1957, but it may be possible to update them with modern data.
The low energy in the blast wave at long ranges is consistent with the fact that the physically accurate cloud rise model in Hillyer G. Norment's report DELFIC: Department of Defense Fallout Prediction System, Volume I - Fundamentals, Atmospheric Sciences Associates, DNA-5159F-1 (1979), finds that, to match the observed mushroom cloud expansion, 45% of the bomb yield must end up as hot air in and around the fireball (dumped from the back of the blast wave), which produces the convective mushroom cloud phenomena. This 45% figure is mainly blast wave energy left behind by the blast wave in the air outside the visibly glowing fireball region. If the blast wave energy remained in the shock front indefinitely, there would be no mushroom cloud phenomena, because the vast amount of energy needed wouldn't be available to cause them! That doesn't happen: the blast wave irreversibly heats up the air it engulfs, and continually dumps warmed air from the back of the blast wave, which moves back into the near vacuum towards ground zero, causing the reversed wind (suction) phase while the shock front is still moving outwards. The energy of the heated air forming these afterwinds is the main contributor to the mushroom cloud rise energy.
In a land surface burst, the blast volume for any radius is only half that of a free air burst, because the blast is confined to a hemispherical shape rather than a sphere. Therefore, for an ideal rigid reflecting surface, the blast would be identical to that from a free air burst with twice the energy, 2W. The effective yield of a surface burst on land or water, as determined from 70 accurate measurements at 7 American tests conducted in Nevada and at Eniwetok and Bikini Atolls from 1951-4 (Sugar, Mike, Bravo, Romeo, Union, Yankee and Nectar), for scaled distances equivalent to 55-300 m from a 1 kiloton burst, is actually only 1.68W. Hence, about 16% of the energy of a surface burst goes into the ground/water shock wave, crater, and melting fallout or vaporising seawater: if a sea level air burst has an effective blast yield of 50%, a surface burst has a blast yield of only 50*(1.68/2) = 42%.
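The bookkeeping in the paragraph above can be checked in a few lines (a sketch using only the figures quoted in the text):

```python
reflection_factor = 2.00  # ideal rigid surface: hemispherical blast, doubled energy density
measured_factor   = 1.68  # effective yield ratio from the 7 American surface burst tests

# Fraction of the energy lost to ground shock, cratering, melting and vaporising:
ground_loss = 1 - measured_factor / reflection_factor

# If a sea level air burst has a 50% blast yield, a surface burst then has:
surface_blast_yield = 0.50 * (measured_factor / reflection_factor)

print(round(ground_loss, 2), round(surface_blast_yield, 2))  # 0.16 and 0.42
```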
Close to the detonation, the fireball (blast front) arrival time is theoretically proportional to r^{5/2}, where r is the radius, but at great distances the blast arrival time is (r/c) - (R/c), where R is the thickness of the blast wave or "head start" and c is the sound velocity (this incorporates the boost the blast wave gets early on while it is supersonic). Using 1959 weapon test report WT-934 data from the Sugar, Mike, and Operation Castle surface burst nuclear tests, with cube-root scaling of both the arrival times and distances [i.e., scaling as (yield)^{1/3}] to 1 kt, we combine both rules to obtain a generalised, universal blast arrival time formula for 1-kt surface bursts:
t = r / [0.340 + (0.0350/r^{3/2}) + (0.0622/r)] seconds,
where r is in km and the term 0.340 is the speed of sound in km/s. To use this equation for other yields (or for air bursts) it is just necessary to scale both the time and distance down to a 1-kt surface burst blast equivalent using the cube-root scaling law.
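The arrival time rule above, with cube-root scaling to other yields, is straightforward to put into code (a sketch for surface bursts, r in km, as stated in the text):

```python
def arrival_time_1kt(r):
    """Blast arrival time (seconds) at ground range r (km) from a 1 kt surface
    burst, using the generalised formula fitted to WT-934 surface burst data."""
    return r / (0.340 + 0.0350 / r**1.5 + 0.0622 / r)

def arrival_time(r, W):
    """Arrival time for a W kt surface burst: scale both distance and time
    to the 1 kt equivalent using the cube-root scaling law."""
    s = W ** (1.0 / 3.0)
    return s * arrival_time_1kt(r / s)

# At 1 km from 1 kt the blast arrives after roughly 2.3 seconds; at large
# ranges the formula tends to the acoustic limit t = r/0.340.
print(round(arrival_time(1.0, 1.0), 2))
```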
When a nuclear weapon is air burst, the blast wave along the ground is modified by the surface reflection (Nevada desert terrain reflects 68% of the blast energy) in which the reflected blast moves through air already heated by the direct blast, so moving faster and merging with it. The total energy of this merged blast wave will therefore be 1 + 0.68 = 1.68 times that in a free air burst at a similar distance in infinite air. Because the range of any blast pressure is proportional to the cube-root of the energy, this means that in a surface burst the ranges of the merged blast wave (Mach stem) will be (1.68)1/3 = 1.19 times greater than for a free air burst. This increase was observed in ordinary TNT bursts; but in nuclear explosions there are two further factors of importance, first seen at the 1945 Trinity test. First, nuclear bursts emit thermal radiation that heats the surface material, in turn heating the surface air by convection and allowing the blast wave to travel faster along the ground at higher overpressure. Hence, British nuclear test measurements of overpressures made with sensors on the tops of towers gave lower readings than American instruments close to the ground. Second, thermal radiation explodes the silicate sand crystals on a desert surface, like exploding popcorn, creating a very hot cloud of dust about 2-3 m high, called the ‘thermal layer’.
The blast ‘precursor’ which was filmed around the fireball in the 1945 Trinity nuclear test was caused by thermal radiation pop-corning the desert sand into a cloud of hot gas through which the blast wave then moved faster than through cold air (because hot air adds more energy to the blast than cold air). The density of the dust loading in the precursor increased the fluid (air) inertia, reducing the peak overpressure but increasing wind (dynamic) pressure (which is proportional to density). American measurements on the precursor blast in Nevada tests Met, Priscilla, and Hood allowed development of a mathematical model in 1995 which includes thermal pop-corning (blow-off) of the desert surface, thermal layer growth, blast modification and the prediction of precursor effects on the waveforms of overpressure and dynamic pressure. This model was produced in secret for section 2 of Chapter 2 in Capabilities of Nuclear Weapons, EM-1: ‘Air Blast over Real (non-ideal) Surfaces’.
When the blast travels through this layer it billows upwards to 30 m in height, and the overpressure is actually reduced to 67% of normal because the mass of dust loading increases the air’s inertia. But the dynamic/drag pressure is increased several times over, because it is proportional to the new higher air density (including dust), and this dramatically increases the ranges of destruction to wind-drag sensitive targets! This occurs in surface bursts of over 30 kt yield and in air bursts within 240W^{1/3} m of silicate or coral sand, where W is yield in kt; precursors occurred over coral islands in the 14.8 Mt Bravo test of 1954. The maximum ground range to which precursors are observed in bursts over sandy ground is 350W^{1/3} m. No precursor has been observed over water or ground covered in white smoke. Concrete, ice, snow, wet ground, and cities would generally reflect the thermal flash and not produce a thermal precursor. The precursor is most important at high overpressures where the thermal heating effect is greatest: no precursor or blast pressure change occurs below 40 kPa peak overpressure. A precursor will reduce a predicted 70 kPa peak overpressure to 84% of that value, a predicted 85 kPa to 80%, a predicted 140 kPa to 75%, and predicted 210-3,500 kPa to 67% (Philip J. Dolan, ‘Capabilities of Nuclear Weapons’, Pentagon, DNA-EM-1, Fig. 2-21, 1981).
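The precursor reduction factors quoted from Dolan's Fig. 2-21 can be applied as a simple piecewise interpolation. A sketch: the breakpoints are the figures quoted above; linear interpolation between them is my own assumption, not from the source.

```python
# (ideal peak overpressure kPa, multiplying factor) from Dolan, DNA-EM-1, Fig. 2-21:
# no effect below 40 kPa; a flat 67% applies over the whole 210-3500 kPa band.
_POINTS = [(40.0, 1.00), (70.0, 0.84), (85.0, 0.80), (140.0, 0.75), (210.0, 0.67)]

def precursor_overpressure(p_ideal):
    """Peak overpressure (kPa) over a precursor-forming (hot, dusty) surface,
    given the ideal-surface prediction p_ideal (kPa)."""
    if p_ideal <= 40.0:
        return p_ideal            # no precursor modification below 40 kPa
    if p_ideal >= 210.0:
        return 0.67 * p_ideal     # flat 67% from 210 to 3500 kPa
    for (p1, f1), (p2, f2) in zip(_POINTS, _POINTS[1:]):
        if p1 <= p_ideal <= p2:   # linear interpolation between quoted points (assumption)
            f = f1 + (f2 - f1) * (p_ideal - p1) / (p2 - p1)
            return f * p_ideal

print(round(precursor_overpressure(140.0), 1))  # 75% of 140 kPa
```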
In 1953, interest focussed on the increased drag damage to vehicles and wind-sensitive targets exposed to the precursor from the Grable test. In 1955 it was discovered at the Teapot Nevada tests that the temperature of the precursor dust cloud reached 250 C at 40 milliseconds after the arrival of the blast wave (U.S. weapon test report WT-1218). The hot precursor dust burned the skin of animals in an open shelter (protecting against thermal radiation) at 320 m from a 30-kt tower burst (report WT-1179). Japanese workers in open tunnel shelters 90 m from ground zero at Nagasaki reported skin burns from the blast wind, but their overhead earth cover shielded out the radiation. At 250 C, skin needs exposure for 0.75 second to produce reddening, 1.5 seconds to produce blistering, and 2.3 seconds to cause charring (at 480 C, these exposure times are reduced by a factor of 10).
Small rain or mist droplets (0.25 cm/hour rainfall rate) and fog droplets are evaporated by the warm blast wave, reducing the peak overpressure and overpressure duration each by about 5 %. This was observed in TNT bomb tests in 1944 (Los Alamos report LA-217). Large droplets in heavy rainfall (at 1.3 cm/hour) are broken up by the blast before evaporating, which causes a 20 % reduction in peak overpressure. This was observed when heavy rainfall occurred over part of Bikini Atoll during the 110 kt Koon nuclear test in 1954; comparison of peak overpressures on each side of ground zero indicated a 20% reduction due to localised heavy rain (report WT-905).
Dr William Penney, who measured blasts from the early American nuclear tests and was test director during the many Australian-British tests at Monte Bello, Emu Field and Maralinga, published the results in 1970 (Phil. Trans. Roy. Soc. London, v. 266A, pp. 358-424): ‘nuclear explosives cause the air near the ground to be warmed by heating through the heat flash.’ This has two important implications that are ignored by the American publications on blast. First, since the heat flash scales more rapidly than the cube root of yield (which is used for blast), the thermal enhancement increases out of step (so test data from 30-kt bursts show more thermal enhancement than 1-kt tests). Second, Penney had blast gauges both at ground level and on poles 3 m above the ground at Maralinga, where the red desert soil readily absorbed the heat flash. The peak overpressures at ground level were significantly higher than at 3-m height. The average pressure causing the force loading and the damage to a 10-m high building is therefore less than that measured at ground level.
At 408 m from a 1-kt burst at 250-m altitude, Penney points out that his scaled data for a marked thermal layer effect (red desert soil) give 58 kPa, whereas the American government manual gave 77 kPa for ‘nearly ideal’ conditions, an increase of over 30%. Penney’s data for no thermal effect gave 71 kPa, indicating that the American test data had been scaled down from a higher yield than the British test, where thermal heating was greater. Ignoring thermal flash absorption over the short ranges of interest, the thermal energy received scales as W/r^2, where W is yield and r is distance, while blast ranges scale as W^{1/3}, so the thermal energy received at any given scaled blast range varies as W/(W^{1/3})^2 = W^{1/3}. Therefore, when serious thermal heating occurs, the peak overpressures scale up with yield in addition to distances. There is little effect in a surface burst (unless the fireball is very large) because the thermal radiation is then emitted parallel to the ground and is not absorbed by it, and American high yield tests occurred over transparent water which did not heat up at the surface. A 10-Mt air burst over dark coloured ground would deposit 10 times as much thermal energy on the ground at the scaled blast ranges as measured in 10-kt tests in America and Australia, so there would be much greater thermal enhancement of the blast ranges.
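The W^{1/3} thermal enhancement argument above amounts to one line of arithmetic (a sketch):

```python
def thermal_ratio(W_big, W_small):
    """Ratio of thermal energy per unit area deposited at equal *scaled* blast
    ranges for two air burst yields (kt): fluence ~ W/r^2 with the scaled range
    r ~ W^(1/3) gives W/(W^(1/3))^2 = W^(1/3), so the ratio is (W1/W2)^(1/3)."""
    return (W_big / W_small) ** (1.0 / 3.0)

# 10 Mt versus 10 kt: a thousand-fold yield ratio gives 10 times the thermal
# energy on the ground at the same scaled blast range, as stated in the text.
print(round(thermal_ratio(10_000.0, 10.0), 1))
```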
In addition to this fact about blast data analysis from nuclear tests, there is another point made by Penney. The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to the damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less than it would have been from an explosion over an open site.’
A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1% of the blast energy, then after 200 houses are destroyed, the blast will be down to just 0.99^{200} = 0.13 of what it was before, so 87% of the blast energy will have been lost in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air burst were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8:15 am. The heat flash can’t set wood alight directly, as proved in Nevada tests: it just scorches wood unless it is painted white. You need intermediaries like paper litter and trash in a line-of-sight from the fireball before you can get direct ignition, as proved by the clarity of the ‘shadowing’ remaining afterwards (such as the protection from scorching of tarmac and dark paint in the shadows cast by people who were flash burned). In general, each building will absorb a roughly constant fraction of the blast energy incident on it (about 1% for wood frame houses, rising to about 5% for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures. At low pressures, the building just vibrates slightly. So the percentage of the blast energy incident on the building which is absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure. 
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
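Penney's depletion argument is easy to reproduce (a sketch; the 1% per house figure is the illustrative value from the text):

```python
def surviving_blast_fraction(n_houses, absorbed_per_house=0.01):
    """Fraction of blast energy remaining after destroying n_houses in a radial
    line, each absorbing a fixed fraction of the incident energy (1% for wood
    frame houses), on top of the normal divergence fall-off."""
    return (1.0 - absorbed_per_house) ** n_houses

# After 200 wooden houses only ~13% of the energy remains (87% absorbed);
# for 5%-absorbing masonry buildings the depletion is far faster.
print(round(surviving_blast_fraction(200), 2))
```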
Mathematical representation of ideal pressure-time curves
In general, Dr Brode’s empirical and semi-empirical formulae are extremely useful, but there are problems when it comes to the pressure-time form factors. Brode uses the sum of three exponential terms to represent the family of pressure-time curves in the positive (compression) phase for a location receiving any particular peak overpressure. The issue we have with Brode is that the analytically correct physical theory gives a much simpler formula, and this illustrates the issue between the use of computers and the use of physical understanding. The time-form graphs given by Brode in his 1968 article do not correlate with the formulae he provides or with Glasstone and Dolan 1977, although they do correlate with Glasstone 1962/4.
The general form of Brode’s formula is like Pt = Pmax(1 - t/Dp+)(xe^{-at} + ye^{-bt} + ze^{-ct}). The decay constants a, b and c are themselves functions of the peak overpressure, so it is very complex. Pt is the time-varying overpressure, Pmax is the peak overpressure, t is time measured from blast arrival time (not from detonation time!), and Dp+ is the positive phase overpressure duration.
Now consider the actual physics. The time decay of overpressure at a fixed location while the blast wave passes by in a shock-tube (a long, uniform, air-filled cylinder), where the blast is unable to diverge sideways as it propagates, is Pt = Pmax(1 - t/Dp+)e^{-at}. In a real air burst, however, the pressure additionally decays by divergence with time, as the air has another dimension in which to fall off (sideways). This transverse dimension is the circumference C, which is proportional to the radius r of the blast by the simple formula C = 2πr. In other words, as the blast sphere gets bigger, the pressure falls off everywhere because there is a greater volume for the air to fill. We are interested in times, not radii or circumferences, but the blast radius is approximately proportional to the time after detonation. Hence, we can adapt the shock-tube blast decay formula for the additional fall caused by sideways divergence of the expanding blast by dividing it by a normalised function of time and pressure (unity is added in the denominator because t is time after blast arrival, not time after explosion):
This formula appears to model the pressure-time curves accurately for all peak overpressures (a ~ 0 if just considering the positive or compression phase; P0 is ambient pressure). The fall of the wind (dynamic) pressure, q, is related to this decay rate of overpressure by standard relationships discussed by Glasstone and Dolan for the case where γ = 1.4: qt = q(Pt/Pmax)^2 [(Pmax + 7P0)/(Pt + 7P0)].
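The two decay relations just given can be sketched directly in code: the shock-tube overpressure form and the Glasstone-Dolan dynamic pressure relation (the divergence-corrected overpressure form is not reproduced here). A minimal sketch:

```python
import math

def overpressure_at_t(p_max, t, D, a=0.0):
    """Shock-tube overpressure decay (no sideways divergence):
    p(t) = p_max (1 - t/D) e^(-at), with t measured from blast arrival
    and D the positive phase overpressure duration."""
    return p_max * (1.0 - t / D) * math.exp(-a * t)

def dynamic_pressure_at_t(q_peak, p_t, p_max, P0=101.0):
    """Glasstone-Dolan relation (gamma = 1.4) tying the dynamic pressure decay
    to the overpressure decay: q(t) = q (p/p_max)^2 (p_max + 7P0)/(p + 7P0)."""
    return q_peak * (p_t / p_max) ** 2 * (p_max + 7.0 * P0) / (p_t + 7.0 * P0)

# When the overpressure has fallen to half its peak, the quadratic factor makes
# the dynamic pressure fall to well under half its peak: q decays faster than p.
print(round(dynamic_pressure_at_t(1.0, 50.0, 100.0), 3))
```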
‘Data on the coral craters are incorporated into empirical formulas used to predict the size and shape of nuclear craters. These formulas, we now believe, greatly overestimate surface burst effectiveness in typical continental geologies ... coral is saturated, highly porous, and permeable ... When the coral is dry, it transmits shocks poorly. The crushing and collapse of its pores attenuate the shock rapidly with distance ... Pores filled with water transmit the shock better than air-filled pores, so the shock travels with less attenuation and can damage large volumes of coral far from the source.’ – L.G. Margolin, et al., Computer Simulation of Nuclear Weapons Effects, Lawrence Livermore National Laboratory, UCRL-98438 Preprint, 25 March 1988, p. 5.
The latest crater scaling laws are described in the report:
In the range of 1 kt - 10 Mt there is a transition from cube-root to fourth-root scaling, and the average scaling law suggested by Nevada soil and Pacific coral atoll data, W^{0.3} (used by Glasstone and Dolan), was shown to be wrong in 1987 because the empirical data were too limited (the biggest Nevada cratering test was Sedan, 104 kt) and the W^{0.3} empirical law ignored energy conservation at high yields, where gravity effects kick in and curtail the sizes predicted by hydrodynamic cratering physics.
The W^0.3 scaling law used in Glasstone and Dolan 1977 is false because it violates the conservation of energy: it neglects the energy used by the explosion in ejecting massive amounts of debris from the crater against gravity. The yield-dependent scaling for crater dimensions (radius and depth) transitions from the cube-root of yield at low yields (below 1 kt) to the fourth-root at high yields, because of gravity. At low yields, the fraction of the bomb energy used to physically dump ejecta out of the crater against gravity (to produce the surrounding lip and debris) is trivial compared to the hydrodynamic energy used to physically break up the soil. But at higher yields, the fact that the crater is deep means that a significant amount of bomb energy must be employed to do work excavating earth against gravity.
Consider the energy utilisation in cratering. The total energy used in cratering is the sum of the hydrodynamic energy and the gravitational work energy. The hydrodynamic term is proportional to the cube of the crater radius or depth, as shown by the reliability of cube-root scaling at subkiloton yields: the energy needed to hydrodynamically excavate a unit volume of soil is a constant, so the energy required for hydrodynamic pulverization of crater mass m is E = mX, where X is the number of joules needed for the hydrodynamic excavation of 1 kg of soil.
But where the crater is deep, in bigger explosions, the gravitational work energy E = mgh needed to eject crater mass m the vertical distance h upwards out of the hole to the lip, against gravitational acceleration g (9.8 m/s^2), becomes larger than the hydrodynamic energy needed merely to break up the matter, so the gravity work effect then governs the crater scaling law. The total energy used in crater formation is the sum of two terms, hydrodynamic and gravitational: E = (mX) + (mgh).
The (mX)-term is proportional to the cube of the crater depth (because m is the product of volume and density, and volume is proportional to depth-cubed if the crater radius/depth ratio is constant), while the (mgh)-term is proportional to the fourth power of the crater depth, because m is proportional to the density times the depth cubed (if the depth/radius ratio is constant) and h is always directly proportional to the crater depth (h is roughly half the crater depth), so the product mgh is proportional to the product of depth cubed and depth, i.e., to the fourth power of crater depth. So for bigger craters and bigger bomb yields, a larger fraction of the total cratering energy gets used to overcome gravity, causing the gravity term to predominate and the crater size to scale at most by W^(1/4) at high yields. This makes the crater size scaling law transition from cube-root (W^(1/3)) at low yields to fourth-root (W^(1/4)) at higher yields!
It’s fascinating that, despite the best scientific brains working on nuclear weapons effects for many decades - the Manhattan Project focussed a large amount of effort on the problem, and utilised the top physicists who had developed quantum mechanics and nuclear physics, and people like Bethe were still writing secret papers on fireball effects into the 1960s - such fundamental physical effects were simply ignored for decades. This was due to the restricted number of people working on the problem due to secrecy, and maybe some kind of ‘groupthink’ (psychological peer-pressure): not to upset colleagues by ‘rocking the boat’ with too much freethinking, radical questions, innovative ideas.
The equation E = mgh isn't a speculative theory requiring nuclear tests to confirm it; it's a basic physical fact that can be experimentally proved in any physics laboratory: you can easily measure the energy needed to raise a mass (the amount of electric energy supplied to an electric motor while it winches up a standard 1 kg mass is a simple example of the kind of physical fact involved). In trying to analyse the effects of nuclear weapons, false approximations were sometimes used, which then became embedded as a doctrine or faith about the 'correct' way to approach or analyse a particular problem. People, when questioned about a fundamental belief in such analysis, are then tempted to respond dogmatically by simply referring to what the 'consensus' is, as if accepted dogmatic religious-style authority is somehow a substitute for science, which is of course the unceasing process of asking probing questions, checking factual details for errors, omissions and misunderstandings, and forever searching for a deeper understanding of nature.
For example, in the case of a 10 Mt surface burst on dry soil, the 1957, 1962, and 1964 editions of Glasstone's Effects of Nuclear Weapons predicted a crater radius of 414 metres (the 10 Mt Mike test in 1952 had a radius of over twice that size, but that was due to the water-saturated porous coral of the island and surrounding reef, which is crushed very easily by the shock wave at high overpressures). This was reduced to 295 metres in Glasstone and Dolan, 1977, when the scaling law was changed from the cube-root to the 0.3 power of yield. The 1981 revision of Dolan's DNA-EM-1 brings it down to 145 metres, because of the tiny amount of energy which goes into the bomb case shock for a modern, efficient 10 Mt class thermonuclear warhead (Brode and Bjork discovered this bomb design effect on cratering in 1960; high-yield efficient weapons release over 80% of their yield as X-rays which are inefficient at cratering because they just cause ablation of the soil below the bomb, creating a shock wave and some compression, but far less cratering action than the dense bomb case shock wave produces in soil). Then in 1987, the introduction of gravity effects reduced the crater radius for a 10 Mt surface burst on dry soil to just 92 metres, only 22% of the figure believed up to 1964!
‘It is shown that the primary cause of cratering for such an explosion is not “airslap,” as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. [Hence the immense crater size from the 10 Mt liquid-deuterium Mike test in 1952 with its massive 82 ton steel casing shock is irrelevant to compact modern warheads which have lighter casings and are more efficient and produce smaller case shocks and thus smaller craters.]’ - H. L. Brode and R. L. Bjork, Cratering from a Megaton Surface Burst, RAND Corp., RM-2600, 1960.
As L.G. Margolin states (above), improved understanding of crater data from the 1952-8 nuclear tests at Bikini and Eniwetok Atolls led to a reduction of predicted crater sizes for land bursts. The massive crater, 950 m in radius and 50 m deep under water (53 m deep as measured from the original bomb position), created by the 10.4 Mt Mike shot at Eniwetok in 1952, occurred in the wet coral reef surrounding an island, because fragile water-saturated coral is pulverised to sand by shock wave pressure. Revised editions of the U.S. Department of Defense book The Effects of Nuclear Weapons and the classified manual Capabilities of Nuclear Weapons reduced the crater radius for a surface burst on dry soil:
In the 1957-64 editions, the crater radius was scaled by the well-proved TNT cratering 'cube-root law', W^(1/3) (which is now known to be valid where the work done excavating against gravity is trivial in comparison to the work done in breaking up material). In the 1977 edition, the crater radius was scaled by less than the cube-root law, in fact by the 0.3 power of yield, W^0.3, in an effort to fit the American nuclear test data. Unfortunately, as shown in the following table, the American nuclear test data are too patchy for a proper extrapolation to be made for dry soil surface bursts, because the one high-yield (104-kt Sedan) Nevada explosive-type cratering burst was buried at a depth of 194 m. This changes two sensitive variables at the same time, preventing reliable extrapolation.
*These bombs were at the bottom of the water tank, with 3 m of water above and around to increase the case-shock effect by X-ray absorption in water.
**650 kg device mass. The Cactus crater was in 1979 used to inter (under a concrete dome) some 84,100 m3 of contaminated topsoil and World War II munitions debris on Eniwetok Atoll in the American clean-up and decontamination work. The initial average height of the lip of this crater was 3.35 m.
During World War II, experiments showed that W kt of TNT detonated on dry soil produces a crater with a radius of 30W^(1/3) m. The radius of the spherical charge of W kt of TNT is itself 5.4W^(1/3) m, or 18% of the dry soil crater radius. The crater from a nuclear weapon is almost entirely due to the 'case shock', not the X-ray emission. This was discovered in the experiments with Koa and Seminole in water tanks to increase X-ray coupling to the ground (see table above). Nuclear weapons with yields below 2 kt (high mass-to-yield ratio, and low X-ray energy emission) which are surface burst produce craters similar to those from 23% of the TNT equivalent, while high-yield nuclear weapons (low mass-to-yield ratio, and high X-ray energy emission) which are surface burst produce craters similar to those from 2.9% of the TNT equivalent.
*These sizes apply to low yield-to-mass ratio nuclear warheads, which have a low X-ray energy emission. These produce the greatest craters, because most of the energy is initially in the case-shock of the bomb, rather than in X-rays (see below). These radii should be corrected for X-ray emission and total yield by a multiplying factor which can reasonably be taken to be 1.41(fW)^(1/3) (1 + 1.82W^(1/4))^(-1/3); see below for the derivation, including the gravitational effect at high yields. This factor is 1 for case-shock energy fraction f and total yield W kilotons both equal to 1. For pure fission warheads, f = 1. For a 1-megaton modern thermonuclear warhead, f = 1/8, because of the lower case-shock energy and higher proportion of energy in X-rays.
**These sizes apply to a different mechanism of cratering; namely the crushing of porous coral by the shock wave, so simple ‘cube-root’ scaling applies here.
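The correction factor quoted in the first footnote above can be evaluated directly. A sketch of the formula as stated (f = case-shock energy fraction, W = total yield in kilotons):

```python
def crater_radius_factor(w_kt, f_case_shock):
    """Multiplying factor from the footnote above:
    1.41*(f*W)**(1/3) * (1 + 1.82*W**0.25)**(-1/3),
    equal to ~1 for f = 1 and W = 1 kt."""
    return (1.41 * (f_case_shock * w_kt) ** (1 / 3)
            / (1.0 + 1.82 * w_kt ** 0.25) ** (1 / 3))

print(round(crater_radius_factor(1.0, 1.0), 2))      # 1.0 (pure fission, 1 kt)
print(round(crater_radius_factor(1000.0, 1 / 8), 2)) # 3.15 (1 Mt thermonuclear, f = 1/8)
```

So a modern 1-megaton warhead (f = 1/8) produces a crater radius only about 3.15 times the 1-kt low-X-ray value, far below the factor of 10 that pure cube-root scaling with f = 1 would give.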
About 72% of the energy entering the ground from a TNT explosion is used in cratering, while 28% is used in producing ground shock. The main ground shock from a surface burst nuclear explosion is derived from 7.5% of the total X-ray emission, which is absorbed by the ground within a radius of 3W^(1/3) m. The downward recoil of the ground in response to the explosive ablation of surface sand initiates a ground shock wave within a microsecond. The case shock of a nuclear weapon delivers 50% of its energy downward, which is all absorbed by the ground on account of its high density, and this is the principal crater mechanism. As debris is ejected from the crater in a cone shape, it absorbs some of the thermal radiation from the fireball within, and is melted, later becoming contaminated and being deposited as fallout. When nuclear weapons are detonated underground, the true TNT equivalent for a similar crater is 30% of the nuclear yield, because the X-rays cannot escape into the air, although a lot of energy is then wasted in melting and heating soil underground.
The long delay in nuclear effects people understanding crater scaling laws properly has an interesting history. Although Galileo identified craters on the moon using his telescope in 1609, it was only when a couple of astronauts from Apollo 14 visited an allegedly ‘volcanic lava crater’ (crater Fra Mauro) on the moon that they discovered the ejecta from a shallow explosion crater, without any volcanic lava. The idea of explosive cratering had been falsely discounted because physicists had observed very few craters on the earth and many on the moon. They had falsely assumed that the reason for this was strong volcanism on the moon, when it is really due to impact craters having been mostly eroded by geological processes on earth, and mostly preserved on the moon!
Early theoretical studies of crater formation, even using powerful computer simulations, employed explosion dynamics that ignored gravitation. Almost all of the books on the 'effects of nuclear weapons' in the public domain give nonsense for megaton surface bursts. It was only in 1986 that a full study of the effects of gravity in reducing crater sizes in the megaton range was performed: R. M. Schmidt, K. A. Holsapple, and K. R. Housen, 'Gravity effects in cratering', U.S. Department of Defense, Defense Nuclear Agency, report DNA-TR-86-182. In addition to secrecy issues on the details, the complexity of the unclassified portions of the new scaling procedures in this official treatment covers up the mechanisms, so here is a simple analytical explanation which is clearer:
If the energy used in cratering is E, the cratered mass M, and the explosive energy needed to physically break up a unit mass of the soil under consideration is X, then the old equation E = MX (which implies that crater volume is directly proportional to bomb yield and hence crater depth and diameter scale as the cube-root of yield) is completely false, as it omits gravitational work energy needed to shift soil from the crater to the surrounding ground.
This gravitational work energy is easy to estimate as ½MgD, where M is the mass excavated, g is gravitational acceleration (9.8 m/s^2), D is crater depth, and ½ is a rough approximation of the average proportion of the crater depth through which displaced soil is vertically moved against gravity in forming the crater.
Hence the correct cratering energy is not E = MX but rather E = MX + ½MgD. For yields well below 1 kt, the second term (on the right-hand side) of this expression, ½MgD, is insignificant compared to MX, so the volume excavated scales directly with yield, and since the volume is proportional to the cube of the average linear dimension, this means that the radius and depth both scale with the cube-root of yield at low yields.
But for very large yields, the second term, ½MgD, becomes more important, and this use of energy to overcome gravity in excavation limits the energy available for explosive digging, so the linear dimensions then scale as only the fourth-root (or quarter-power) of yield. Surface burst craters are paraboloid in shape, so they have a volume of: πR^2 D/2 = (π/2)(R/D)^2 D^3, where the ratio R/D is about 1.88 for a surface burst on dry soil. The mass of crater material is this volume multiplied by the density, ρ, of the soil material: M = ρπ(R/D)^2 D^3 /2.
Hence, the total cratering energy is: E = MX + ½MgD = ρ(π/2)R^2 D(X + ½gD).
The density of hard rock, soft rock and hard soil (for example granite, sandstone or basalt) is typically 2.65 kg/litre (2,650 kg per cubic metre), wet soil is around 2.10 kg/litre, water saturated coral reef is 2.02 kg/litre, typical dry soil is 1.70 kg/litre, Nevada desert is 1.60 kg/litre, lunar soil is 1.50 kg/litre (for analysis of the craters on the moon, where gravity is 6 times smaller than at the earth’s surface), and ice is 0.93 kg/litre.
The changeover from cube-root to quarter-root scaling with increasing yield means that old crater size estimates (for example, those in the well-known 1977 book by Glasstone and Dolan, The Effects of Nuclear Weapons, U.S. Department of Defense) are far too big in the megaton range, and need to be multiplied by a correction factor.
The correction factor is easy to find. The purely explosive cratering energy efficiency, f, falls as gravity takes more energy, and is simply f = MX/(MX + ½MgD) = (1 + ½gD/X)^(-1).
Because gravity effects are small in the low and sub-kiloton range, the correct crater radius for small explosions indeed scales hydrodynamically, as R ~ E^(1/3), so the 1-kt crater sizes in Glasstone and Dolan should be scaled by the correct factor R ~ W^(1/3)(1 + ½gD/X)^(-1/3) instead of by the empirical factor of R ~ W^0.3 given by Glasstone and Dolan for Nevada explosion data of 1-100 kt. Glasstone and Dolan overestimates crater sizes by a large factor for megaton yield bursts. (The Americans had been misled by data from coral craters, since coral is porous and is simply crushed to sand by the shock wave, instead of being excavated explosively like other media.)
In megaton surface bursts on wet soft rock, the depth D increases only as W^(1/4), the 'fourth root' or 'one-quarter power' of yield scaling. Obviously for small craters, D scales as the cube-root of yield, but the correction factor (1 + ½gD/X)^(-1/3) is only significant in the megaton range anyway, so a good approximation is to take D as proportional to the fourth-root of yield in the correction factor formula. The value of X for any soil material is a constant which may easily be calculated from the published crater sizes for a 1 kt surface burst, where gravity is not important (X is the energy used in cratering divided by the cratered mass, the former being determined by an energy balance for the explosion effects).
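Putting this together, here is a minimal sketch of the corrected scaling. The 1-kt radius and depth (18.37 m and 9.784 m) are the low-X-ray dry soil values derived elsewhere in this post from near surface burst data; the soil constant X ≈ 26.3 J/kg is my own assumption, chosen so that ½gD1/X ≈ 1.82, matching the coefficient in the correction factor footnote given earlier, and the result is normalised so a 1-kt burst returns the 1-kt radius exactly:

```python
G = 9.8  # m/s^2, gravitational acceleration

# Assumed inputs (see lead-in): 1-kt low-X-ray dry soil crater radius and
# depth in metres, plus a hypothetical soil excavation energy X in J/kg.
R1_M, D1_M, X_J_PER_KG = 18.37, 9.784, 26.3

def corrected_crater_radius(w_kt, r1=R1_M, d1=D1_M, x=X_J_PER_KG):
    """R ~ W**(1/3) * (1 + g*D/(2*X))**(-1/3), with the depth in the
    gravity term taken as D = D1*W**0.25, normalised so that W = 1 kt
    returns r1 exactly."""
    d = d1 * w_kt ** 0.25
    k1 = 1.0 + 0.5 * G * d1 / x   # gravity correction at 1 kt
    kw = 1.0 + 0.5 * G * d / x    # gravity correction at yield W
    return r1 * w_kt ** (1 / 3) * (k1 / kw) ** (1 / 3)

print(round(corrected_crater_radius(1.0), 2))    # 18.37 m at 1 kt, by construction
print(round(corrected_crater_radius(1.0e4), 1))  # 10 Mt: well below cube-root scaling
```

The 10 Mt result falls between the pure cube-root and pure fourth-root extrapolations from 1 kt, as the energy argument requires; it is a sketch of the mechanism, not a calibrated prediction.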
The crater is made by two processes: the shock wave pulverisation of the soil (the energy required to do this is approximately proportional to the mass of soil pulverised) and the upward recoil of pulverised soil in reaction (by Newton’s 3rd law) to the downward push of the explosion (the energy required to do this excavation depends on gravitation, since it takes energy MgD to raise mass M a distance D upward against gravity acceleration g).
Russian near surface burst nuclear test cratering data (update of 13 May 2007):
The crater depth is defined as the final pit depth measured not from the top of the crater lip, but from the undisturbed surrounding ground. Likewise the crater radius is defined not as the radius to the top of the lip, but merely as the radius measured at the level of the undisturbed ground. For the Australian-British 1.5-kt Buffalo-2 nuclear surface burst in dry soil at Maralinga in 1956, the crater lip height was 0.2D where D was the crater depth, the radius of the crater lip crest was 1.25R where R was the crater radius, and the radius of the ground rupture zone was 1.4R (these data are taken from U.K. test report AWRE-T37/57, 1957).
The following table contains crater data for three near surface bursts of low yield. These fission weapons, with yields of 0.5-1.5 kilotons, were all of low X-ray emission, which means they produced twice the crater radii and depth that would occur if they had the usual X-ray emission of large warheads (which is about 80%). The lower the X-ray emission, the greater is the energy retained by the bomb casing. The case shock has high density, so it ploughs itself deeply into the ground and efficiently delivers kinetic energy for crater formation (X-rays merely heat up the surface, and any physical push is created by the recoil from surface ablation, which is feeble for crater production, as is the recoil due to the reflection of the air blast wave).
These data when corrected for burst height to a true surface burst and corrected by the cube-root law to 1-kiloton yield (cube-root scaling is valid below 2-kt), suggest that for such low X-ray weapons, the crater size for a 1-kt surface burst on dry soil is R = 18.37 m, D = 9.784 m.
(Note added 13 May 2007: attention should be given to including Russian nuclear test data for surface bursts - see table already given earlier in this post - in this analysis, to increase accuracy.)
It would be useful to have some exact figures showing how much energy is used to produce the crater in these tests. Careful measurements were made of blast and thermal radiation at surface bursts, and these give approximate figures. The blast wave and thermal radiation energy is reduced significantly in low-yield surface bursts. In the Australian-British nuclear tests at Maralinga in 1956 (Operation Buffalo), the first shot (a 15-kt tower burst which produced an insignificant crater effect) had a measured blast yield of 7.7 kt of TNT equivalent, or 51% of the total yield, but the second shot (a 1.5-kt surface burst which produced a deep crater) had a measured blast yield of 0.46 kt of TNT equivalent, or 31% of the total yield. The difference is smaller for higher yield detonations. Computer simulations of crater formation indicated that in the 0.50-kt 1962 Nevada surface burst, Johnnie Boy, some 30% of the total kinetic energy of the explosion must have been used in crater formation and ground shock, as compared to only 3.75% in megaton surface bursts. For comparison, 67% of the energy of an iron meteor striking dry soil at 20 km/s and normal incidence (90 degrees) becomes ground shock and crater formation.
In the case of the 9 Mt missile warheads stockpiled in America to destroy Moscow’s bunkers in a nuclear war, in the mid 1980s it was suddenly realised that their cratering radius was only a small fraction of what had previously been believed. The political response by President Reagan officially was to cover this up, keeping news of it from leaking to Moscow, and to press on with arms reduction talks. The Soviet Union collapsed before they were aware of the impotence of American power for destroying the Soviet command centres in a nuclear war! (Soviet evaluation of nuclear test effects was even worse than American efforts! The Soviets could not even work out how to make a camera photograph the EMP on an oscilloscope without the dot saturating the film, which the Americans did by a circuit to keep the dot off-screen until just before detonation. Soviet 1962 ‘measurements’ of EMP thus relied on the distance sparks would jump, the rating of the fuses blown by current surges, and electric fires in power stations! As far as cratering goes, all of the Russian surface bursts were of kiloton-range yield, and not a single one had a megaton yield. At least America had some data for megaton shots on coral. The big Russian tests, up to 50 megatons, were air burst and produced no crater.)
Oleg Penkovskiy, the famed spy, in 1965 betrayed the Russian secret underground command centre in the Ural Mountain range to America, but that is built under tundra. With missile delivery times falling and the chance of a sudden war increasing, the Russians also had a World War II shelter under a location near Kuybishev, and there is a later one at Ramenki, but the leaders would not have time to reach such shelters from Moscow. So they then dug a very deep shelter with tunnels linked under the Kremlin in Moscow. When it was completed in 1982, the project manager (former general secretary Chernenko) was awarded the Lenin Prize! The shelter is 200-300 metres underground with the well protected floors at the lowest levels and accommodates up to 10,000 key personnel. A 9-megaton surface burst causes severe underground destruction at 1.5 crater radii; for the ‘wet soft rock’ geological environment of the Moscow basin, this is 1.5 x 120 = 180 metres. You can see the problem! Even the biggest American warheads, 9-megatons, carried by the tremendous Titan missiles, could not seriously threaten Russian leadership in a war, because the Russian shelters were then simply too deep. Nuclear horror tales are just bunk. The duration and penetrating power of the heat flash and fallout radiation are also media-exaggerated.
Severe damage to missile silos occurs at 1.25 crater radii (rupture); severe damage to underground shelters occurs at 1.5 crater radii (collapse).
The effects from nuclear weapons that are 'scary' – in that they cover the widest areas – are all easily mitigated effects, like flying glass (don't watch the fireball from behind a window), heat flash (again, look away, or better, 'duck and cover' under a table, or just lie face down facing away, to avert burns to exposed face and hands as well as glass fragments; dark clothes take time to ignite, and someone lying down can put out any ignition after the flash simply by rolling over), and fallout (intense fission product radiation is due to fast decay, so it doesn't last long; the mixture decays faster than 1/time, and at 2 days it is on average just 1% of the level at 1 hour; most of it is stopped by brick buildings). As the secret photos of fallout-covered trays from the 3.53 megaton 1956 Zuni test at Bikini Atoll show (see Dr Terry Triffet and Philip D. LaRiviere, Characterisation of Fallout, WT-1317, 1961, long classified 'Secret – Restricted Data' but now available), the fallout in significant danger areas is a clearly visible deposit of fused sand, not a mysterious death-ray gas: you get hundreds of sand-like grains per square centimetre in lethal fallout areas where cover is necessary, but it is not so heavy that you'll see the Statue of Liberty half covered by fallout, as in 'Planet of the Apes'. It is true that a thunderstorm after an air burst can produce rainout, but that just goes down the drain, carrying the tiny air burst particles with it, and drains are deep enough to shield the gamma radiation! Triffet and LaRiviere also point out that a dirty bomb with U-238 in its casing produces a lot of Np-239 and related neutron capture products, which predominate over most fission products for a week or two but emit very easily shielded, low-energy gamma rays. Therefore you don't need sophisticated shelters to screen most of the radiation. The sand-like fallout doesn't diffuse like a gas, either. G. G. Stokes found that for a spherical particle of radius r moving at speed v through air of viscosity μ, the drag force is F = 6πμrv, which allows the fallout times to be calculated.
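Setting this drag equal to the particle's weight gives a terminal settling speed v = 2ρgr^2/(9μ). A minimal sketch for a fused-sand grain (the 100 μm diameter, 2,600 kg/m^3 density and 10 km cloud height are my illustrative assumptions, and Stokes' law strictly applies only at low Reynolds numbers, so treat this as a rough guide):

```python
MU_AIR = 1.8e-5  # Pa*s, viscosity of air near sea level (assumed value)

def stokes_settling_speed(radius_m, density_kg_m3, mu=MU_AIR, g=9.8):
    """Terminal speed where the weight (4/3)*pi*r**3*rho*g balances
    the Stokes drag 6*pi*mu*r*v, i.e. v = 2*rho*g*r**2/(9*mu)."""
    return 2.0 * density_kg_m3 * g * radius_m ** 2 / (9.0 * mu)

# Hypothetical 100 micron diameter fused-sand grain, density 2,600 kg/m^3:
v = stokes_settling_speed(50e-6, 2600.0)
print(round(v, 2))                      # ~0.79 m/s settling speed
print(round(10_000.0 / v / 3600.0, 1))  # ~3.5 hours to fall from a 10 km cloud
```

Such descent times of hours, during which the most intense early decay occurs aloft, are why the visible sand-like grains, rather than an invisible gas, carry the local fallout hazard.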
The ‘Force of sound’
The sound wave is longitudinal and has pressure variations. Half a cycle is compression (overpressure) and the other half cycle of a sound wave is underpressure (below ambient pressure). When a spherical sound wave goes outward, it exerts outward pressure which pushes on your eardrum to make the noises you hear. Therefore the sound wave has outward force F = PA, where P is the sound wave pressure and A is the area it acts on. When you read Rayleigh's textbook on 'sound physics' (or whatever dubious title it has), you see the fool fits a wave equation from transverse water waves to longitudinal waves, without noting that he is creating particle-wave duality by using a wave equation to describe the gross behaviour of air molecules (particles). Classical physics thus has even more wrong with it because of mathematical fudges than modern physics, but the point I'm making here is that sound has an outward force and an equal and opposite inward force following this. It is this oscillation which allows the sound wave to propagate instead of just dispersing like air blown out of your mouth.
Note the outward force and equal and opposite inward force. This is Newton's 3rd law. The same happens in explosions, except the outward force is then a short tall spike (due to air piling up against the discontinuity and going supersonic), while the inward force is a longer but lower pressure. A nuclear implosion bomb relies upon Newton's 3rd law for TNT surrounding a plutonium core to compress the plutonium. The same effect in the Higgs field surrounding outward going quarks produces an inward force which gives gravity, including the compression of the earth's radius by (1/3)MG/c^2 = 1.5 mm (the contraction term effect in general relativity).
Why not just fit a wave equation to the group behaviour of particles (molecules in air) and call the result 'sound waves'? Far easier than dealing with the fact that the sound wave has an outward pressure phase followed by an equal under-pressure phase, giving an outward force and an equal-and-opposite inward reaction which allows music to propagate. Nobody hears any music, so why should they worry about the physics? Certainly they can't hear any explosions, where the outward force has an equal and opposite reaction too, which in the case of the big bang gives us gravity.
Thanks for this post! It always amazes me to see how waves interact. You'd intuitively expect two waves colliding to destroy each other, but instead they add together briefly while they superimpose, then emerge from the interaction as if nothing has happened.
Dr Dave S. Walton tried it with logic signal TEM (transverse electromagnetic) waves carried by a power transmission line, like a piece of flex. Logic signals were sent in opposite directions through the same transmission line.
They behaved just like water surface waves. What's interesting is that when they overlapped, there was no electric drift current, because during the overlap there was no gradient of electric field to cause electrons to drift. As a result, the average resistance decreased! (Resistance only occurs when work is being done accelerating electrons against collisions with atoms.)
Another example is the reflection of a weak shock wave when it hits a surface. The reflected pressure is double the incident pressure, because the leading edge of the shock wave collides with itself at the instant it begins to reflect, doubling the pressure like the superposition of two similar waves travelling in opposite directions as they pass through one another. With strong shock waves, you get more than a doubling of pressure, because there is significant dynamic or wind pressure in strong shocks (q = ½ρu^2, where ρ is density and u is the particle velocity in the shock wave) and this gets stopped by a reflecting surface, so the energy is converted into additional reflected overpressure.
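For an ideal gas with γ = 1.4, the normally reflected overpressure is Pr = 2P(7Po + 4P)/(7Po + P), the standard Rankine-Hugoniot result given by Glasstone and Dolan: it reduces to the acoustic doubling 2P for weak shocks and approaches 8P for very strong ones. A quick sketch:

```python
def reflected_overpressure(p, p_ambient=14.7):
    """Normally reflected overpressure for an ideal gas with gamma = 1.4:
    Pr = 2*P*(7*Po + 4*P)/(7*Po + P).  Weak shocks: Pr -> 2*P (acoustic
    doubling); very strong shocks: Pr -> 8*P, the excess coming from
    the dynamic pressure arrested at the wall."""
    return 2.0 * p * (7.0 * p_ambient + 4.0 * p) / (7.0 * p_ambient + p)

print(round(reflected_overpressure(1.0), 2))    # ~2.06 psi from a 1 psi shock
print(round(reflected_overpressure(100.0), 0))  # ~496 psi from a 100 psi shock
```

So a 1 psi shock reflects at essentially double the incident pressure, while a 100 psi shock reflects at nearly five times, precisely because of the stopped wind pressure described above.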
Dr William G. Penney used "kT" in his article on the nuclear explosive yields at Hiroshima and Nagasaki, Proc. Roy. Soc. London, 1970. Penney's paper is cited in Glasstone and Dolan (ENW 1977), although they only use it as the source of the yields of Hiroshima and Nagasaki. Penney had issues with the 1962/4 edition of Glasstone, and these were ignored. The British manual "Nuclear Weapons" (H.M. Stationery Office, 1974) uses "KT", but most sources use "kt". Incidentally, Penney reproduces British nuclear test data and disputes the blast wave height-of-burst curves. Penney found that the 'peaking' effect in the Mach region for air bursts is due to the heating of the air just above the ground by the heat flash, and almost disappears if you measure the blast with sensors on poles 3 m high. Penney also discredits Glasstone's dismissal of the role of blast damage in reducing the blast pressure. Accurate data on the crushing of empty petrol cans at Hiroshima showed that the overpressure decreased due to the damage done to wooden houses. (You can't cause mass destruction without using up a lot of energy, which causes an irreversible loss of blast pressure with distance.) In a megaton detonation over a brick- or concrete-built city, the loss of energy would reduce the pressure ranges dramatically as the blast diverges outwards. All the American data come from tests in unobstructed deserts or Pacific atolls. I discussed this by email with Dr Hal Brode, who did the original RAND Corp. computer calculations of blast waves. His first response was the standard idea that the blast doesn't necessarily lose energy by doing work (causing destruction), since the debris will pick up some of the energy and carry it outward as flying bricks, panels and glass. However, it is clear that the blast loses energy through the work done in breaking walls, which is irreversibly lost in warming up the rubble.
If each house destroyed takes 1 % of the blast energy, then the energy after destroying 200 houses on a radial line outward from the explosion is down to just 100(0.99^200) = 13 % of what it would be over desert. This is valid for wood-frame houses. Brick and concrete buildings absorb far more energy per building destroyed, so in a modern city the blast pressure would fall very rapidly indeed. This is non-scalable, so it is most pronounced at high yields with large destruction radii computed for open terrain. Brode did concede, when presented with Penney's data, that this effect is not taken into account in American blast calculations at present. See http://glasstone.blogspot.com for further data. - Nigel Cook (edit by User:217.137.87.10)
The blast energy which diffracts back in is the incident blast energy minus the energy lost in causing destruction. The blast wave is always diverging, which is one of the reasons for the fall in overpressure with distance. Any sideways (non-radial) flow of energy to fill in areas where houses have been destroyed, reduces the energy somewhere else. You can't get something for nothing. If you have read the declassified book which is 1317 pages long, "Capabilities of Nuclear Weapons" by Philip J. Dolan of SRI, report DNA-EM-1 (Defence Nuclear Agency's Effects Manual number 1), you will see that this applies to forests. The blast diffracts around the tree trunks and fills in again afterwards. This was observed in forest stands at various tests, where the blast overpressure was measured on each side and found to be similar.
The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to the damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less than it would have been from an explosion over an open site.’
A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22 kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1 % of the blast energy, then after an average of 200 houses in any radial line from ground zero outwards are destroyed, 87 % of the blast energy will have been lost, in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air burst were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8:15 am. The heat flash can’t set wood alight directly, as proved in Nevada tests: it just scorches wood unless it is painted white. You need to have intermediaries like paper litter and trash in a line-of-sight from the fireball before you can get direct ignition, as proved by the clarity of ‘shadowing’ remaining afterwards (such as the protection from scorching of tarmac and dark paint by the bodies of people who were flash burned). In general, each building will absorb a roughly constant fraction of the blast energy incident upon it (ranging from about 1 % for wood frame houses to about 5 % for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures, while at low pressures the building just vibrates slightly. So the percentage of the blast energy incident on the building which is absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure.
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
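The exponential depletion argument above can be written out as a one-line model; the 1% and 5% loss fractions per building are the estimates given in the text, and the building counts are illustrative:

```python
# Fraction of blast energy surviving after destroying n buildings in a
# radial line, if each destroyed building absorbs a fixed fraction of the
# incident energy (text's estimates: ~1% wood-frame, ~5% brick/masonry).
# This is on top of the normal fall-off due to divergence over open terrain.
def surviving_energy_fraction(n_buildings, loss_per_building=0.01):
    return (1.0 - loss_per_building) ** n_buildings

print(surviving_energy_fraction(200))        # 200 wooden houses: ~0.13, i.e. 87% lost
print(surviving_energy_fraction(100, 0.05))  # 100 masonry buildings: ~0.006
```

The masonry case shows why, on this argument, the effect would be far more severe in a modern brick or concrete city than in the wooden housing of 1945 Japan.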
-Nigel Cook 26 Dec 05
The easiest way to deal with it is to consider the energy used up by the blast in causing damage. The work energy used in pushing a wall a distance x with force F is E = xF. Blast waves do diffract, but this doesn't violate conservation of energy. The problem with Glasstone and Dolan 1957-77 is that the book tries to dismiss the differences between a concrete city and a desert, without evidence. It is a cut-down version of DNA-EM-1, which does contain sources. When you recognise that it was only in 1986 that they realised that gravity limits crater sizes [1] in the megaton range to 1/4 power scaling (instead of 0.3 power scaling), you get an idea of the bureaucracy of the U.S. Government nuclear effects calculation business. The secrecy prevents a wide range of critical assessment, so fundamental new ideas are ignored, and errors can persist for decades. 172.189.174.108 14:49, 13 January 2006 (UTC)
"The energy loss per square metre of diverging blast front is small for each building, 1% loss for destroying a wood frame house. So the blast reduction is only important for cities, not for isolated buildings on a desert. The American manuals neglect the fact that energy used is lost from the blast." - http://glasstone.blogspot.com - 172.201.72.197 13:34, 30 January 2006 (UTC)
Simpler discussion of the theoretical basis for the E^0.25 scaling law for crater dimensions at large yields:
The standard unclassified work on the effects of nuclear weapons is Glasstone and Dolan, U.S. Dept. of Defence, 1977. That book states that crater radii for nuclear tests of bombs burst on ground level in the same type of soil, say Nevada sand, are proportional to E^0.3 where E is the energy release in the explosion.
The 0.3 is an empirical factor, not based on theory. Unfortunately, it's wrong, as was discovered and published in a semi-secret paper in 1987 by the U.S. Department of Defense (a correction which was never incorporated into the Glasstone and Dolan book). It turns out that all the data used for the E^0.3 scaling law comes from Nevada tests of 1-100 kilotons, and has a fair amount of scatter.
Physical theory shows that for big yields, enormous amounts of soil are lofted from inside the crater up to the rim and the ejecta on the surrounding terrain, and the energy required to lift the material is E = mgh, where m is mass, g is the acceleration due to gravity and h is the average height the material is raised (about half the depth of the crater). The crater mass m equals the soil density times the crater volume, which is proportional to the cube of the crater radius in surface bursts. Since the depth-to-radius ratio is approximately constant, the lift height h is proportional to the radius, so the energy used in cratering is E = mgh = (aR^3)g(bR) = abgR^4, where a and b are constants. For big craters (where work done against gravity is the dominant use of energy in cratering), E is therefore proportional to R^4, so the crater radius R is proportional to E^(1/4) = E^0.25.
So it turns out that theory shows that at large yields, crater sizes are proportional to E^0.25, not E^0.3.
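A quick numerical comparison of the two scaling laws shows how the discrepancy grows with yield. The 1 kt reference radius below is an arbitrary illustrative number, not a figure from the text; only the ratio between the two laws matters here:

```python
# Crater radius under the old empirical E^0.3 law versus the gravity-limited
# E^0.25 law, both normalised to the same (illustrative) radius at 1 kt.
def crater_radius(yield_kt, r_1kt=20.0, exponent=0.25):
    return r_1kt * yield_kt ** exponent

for y_kt in (1, 100, 1000, 10000):
    r_old = crater_radius(y_kt, exponent=0.3)
    r_new = crater_radius(y_kt, exponent=0.25)
    print(f"{y_kt} kt: E^0.3 -> {r_old:.0f} m, E^0.25 -> {r_new:.0f} m "
          f"(ratio {r_old / r_new:.2f})")
```

By 10 Mt the empirical law overestimates the gravity-limited radius by roughly 60%, which is why the choice of exponent matters so much in the megaton range.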
The theory is predictive, because if you know the fraction of bomb energy absorbed in the ground, you can predict the crater size accurately from the physical theory: you know how much energy is used to eject mass from the ground, and that, together with the density of sand, the crater shape and the acceleration due to gravity, enables you to predict theoretically the crater size. (The fraction of energy used in cratering is deduced from the fact that in a surface burst the effective blast energy yield of the bomb is found to be 1.6 times that of a free air burst of the same total energy release, rather than twice that of a free air burst as you'd expect if the ground were a perfect reflector, with the pressures from the downward shock hemisphere being reflected up and merging to form a single powerful blast hemisphere in the air; the lost energy is that which digs the hole in the ground and causes ground shock. Ideally, you should also include an analysis of how much thermal energy is converted into cratering, by subtracting the thermal yield of a surface burst, typically 15-20%, from the thermal yield of a free air burst, 35-40%, and allowing for the proportion of the thermal energy used to melt soil into spherical fallout particles of fused silica or whatever. Dr Carl F. Miller calculated in his 1963 Stanford Research Institute report, “Fallout and Radiological Countermeasures”, volume 1, that the portion of bomb energy used to fuse sand into glassy fallout spheres in a Nevada surface burst ranges from 7.5% for a 1 kt bomb to 9.2% for a 100 Mt bomb.) You can then check the theoretical predictions against the 1-100 kt Nevada craters from 1950s nuclear tests.
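The energy-accounting step in the parenthesis above can be made explicit. Taking the 1.6x and 2.0x reflection factors as the text gives them, one simple reading of the implied energy loss to the ground is:

```python
# If a perfectly reflecting surface would make a surface burst behave like
# a free air burst of 2.0x the yield, but the measured effective factor is
# only 1.6x, then the missing 0.4 of a free-air-burst equivalent went into
# digging the crater and driving ground shock. (One simple reading of the
# text's argument; the factors are the text's, the interpretation is mine.)
perfect_reflection_factor = 2.0
measured_factor = 1.6
fraction_into_ground = (perfect_reflection_factor - measured_factor) / perfect_reflection_factor
print(fraction_into_ground)  # ~0.2, i.e. roughly 20% coupled downward
```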
Just a small warning: some of the material and formulae in this post may contain errors, since it was taken from a draft journal manuscript and I don't know whether the units were consistently converted from pressure in psi to kPa and from feet to metres, calories/kt to J/kt or whatever.
Readers should check formulae for typing errors in any case, for instance by comparing to the blast pressure curves from nuclear weapon tests.
I will produce a revised blog post, or possibly a page uploaded to the domain http://quantumfieldtheory.org, to quantitatively analyse all nuclear effects data. (This blogger system is terrible to use for equations since you need to type the mark-ups for superscript and Greek symbols manually using html.)
In the meantime, two updates of vital historical importance:
(1) Quotation from:
Harold L. Brode and R. L. Bjork, "Cratering from a megaton surface burst", RAND Corporation, Santa Monica, California, report RM-2600, 1960:
"Calculations on the cratering and ground motion in a rock medium due to a two-megaton surface burst. The theoretical approach assumes a two-dimensional hydrodynamic model, and it is used to determine the motions involved in the cratering from a large-yield surface burst. The technique is found to work well and to check with experimental observations. It is shown that the primary cause of cratering for such an explosion is not "airslap," as previously suggested, but rather the direct action of the energetic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. The cratering action and ground shock from large-yield explosions is of primary importance to problems of hardening military installations as well as to the peaceful use of nuclear explosions."
(2) Harold L. Brode's excellent 53-page paper, "Fireball Phenomenology" (RAND Corporation, paper P-3026, 1964) is now available to download freely from RAND Corporation as a 1.2 MB PDF document:
Some of the charts from this report were included in Dr Brode's article, "Review of Nuclear Weapons Effects", published in the 1968 Annual Review of Nuclear Science, volume 18, pages 153-202.
However, this report includes more detail specifically on fireball scaling laws derived from detailed numerical simulations of fireballs at various altitudes and for yields of 1.7 kt to 4 Mt. It also provides extra charts and illustrations.
More detailed data on blast wave pressure decay rates and related details for free air bursts are available in the report
How did you get the 1-5% energy absorption figure for each house being destroyed? Did you use data provided by Penney, or were the calculations made by you?
Your blog is pure quality, thank you for creating it.
Thanks, however this blog has many limitations and has been put together too quickly in odd spare moments. I'm going to try to build something much better when time permits, systematically going through all the effects of nuclear weapons, reviewing the details and compiling the best information. I've got a large amount of information beyond what is on this blog (which is mainly concerned with the more "controversial" - actually factually-proved-but-politically-inexpedient - aspects of the many problems).
The 1-5% figure is the range I computed from detailed analysis of the effects on houses, which is substantiated by Penney's research.
For typical Japanese wood-frame houses, which were the predominant building type in Hiroshima and Nagasaki prior to the nuclear attacks, the fall in overpressure is about 1% per house on a radial line. Since the distribution of the houses is known from aerial photographs taken by the 509th prior to the attacks, the data in Penney's report, which gives the accurately measured blast overpressure at various distances from the distortion of overpressure-sensitive targets like petrol cans, blueprint containers, etc., can be compared to the peak overpressure for ideal blast waves over unobstructed desert or ocean terrain, from nuclear tests.
The percentage of the blast energy absorbed per house encountered on any radial line from the bomb is also computable using the structural displacement due to the blast wave. Glasstone and Dolan provide a simple way of analysing the net pressure acting on a building as the blast wave diffracts around it.
Basically, the overpressure only produces a net force on the building as a whole during the time taken for the shock front to travel the length of the building. Since the shock front is moving at supersonic velocity, this "diffraction loading" force acts for typically 0.1 second for a building 75 feet long. After that time, the overpressure equalises on all sides, and the building is simply crushed rather than pushed over.
Another effect is the wind drag loading, which continues for the entire duration of the positive phase of the dynamic pressure. This is of course very important for long-duration blast waves, or when the air is filled with hot dust (giving a sandstorm effect) as occurs if there is a precursored blast wave.
By calculating the overall force loading and the response of a building to that loading, the energy absorbed by the building from the blast is easily computed.
The basic law is that the work energy E done by a force F in causing a motion along distance X in the direction of the force (i.e. the radial direction) is:
E = FX
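As a sketch of the order of magnitude involved, this work-energy relation can be applied to a single wall face. The wall area, overpressure and displacement below are round illustrative numbers of my own choosing, not measured values from the text:

```python
# Work done by the blast in pushing a wall: E = F * X, with the net force
# taken as the overpressure times the exposed face area during the
# diffraction-loading phase. All inputs here are illustrative round numbers.
def work_done(force_newtons, displacement_m):
    return force_newtons * displacement_m  # joules

face_area_m2 = 100.0       # e.g. a 10 m x 10 m wall face
overpressure_pa = 35.0e3   # ~5 psi peak overpressure
force = overpressure_pa * face_area_m2
print(work_done(force, 0.5))  # wall pushed 0.5 m: ~1.75 MJ absorbed
```

Megajoule-scale absorption per wall, repeated over every structure the blast front sweeps across, is the basis of the cumulative depletion argument in this post.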
Dr Harold Brode (formerly of RAND Corp., R&D associates, etc.) made an argument to me by email that the energy which is absorbed from the blast wave in the act of causing damage is not really lost because it just gets converted into the directed kinetic energy of debris from the building, and the debris proceeds to move downrange.
This argument of his is flawed in a major way, because the velocity of the debris is much less than that of the shock wave, and in any case the debris from a destroyed building gets decelerated as it bounces along the ground.
In addition, buildings are going to be shaken and thus absorb energy from the shock front even at pressures far lower than those which will destroy a building.
But one advantage of Dr Brode's comment is that you can look at it as a simple way to calculate the energy depletion: the kinetic energy which is gained by the debris of a house is the minimum amount of blast energy which is lost through the work done in destroying the house.
Obviously, when a house gets destroyed not all the energy lost goes into the debris. A lot is used to do mechanical work in bending and snapping beams, joints, bricks, cement, etc., which ends up getting degraded into thermal energy without anything gaining a significant outward velocity. But there are quite a lot of studies of how fast debris moves on average for given pressures of blast wave.
One very simple example is the study of human dummies exposed to a blast wave. When the dummies are accelerated and thrown downrange by the blast wave, they deplete some energy from the blast wave, which is turned into the kinetic energy of the dummy:
‘We were fortunate enough at a 5 psi station in one of the 1957 shots in Nevada to photograph the time-displacement history of a 160-pound [standing] dummy, and we were able from analysis of the movies to determine the maximal velocity reached ... about 21 feet per second. This velocity developed in 0.5 second. The total displacement of the dummy was near 22 feet ... It was this piece of empirical information that helped greatly in getting an analytical “handle” on the “treatment” of man as missile.’
– Dr Clayton S. White, who worked on nuclear weapon blast effects at Nevada test series’ Upshot-Knothole (1953), Teapot (1955) and Plumbbob (1957), Testimony to the U.S. Congressional Hearings, 22-26 June 1959, Biological and Environmental Effects of Nuclear War, U.S. Government Printing Office, 1959, pp. 364-5.
In this example, a 72.5 kg dummy exposed to a blast wave with a peak overpressure of 5 psi was accelerated to a peak velocity of 6.4 m/s. The energy lost from the blast wave by this one human being was:
E = (1/2)mv^2 = 1500 Joules
lost from the blast wave.
Notice that the person (representative of a large missile) doesn't fly downwind at supersonic velocity, but thuds to the ground after a displacement of 6.7 metres. The kinetic energy then gets converted into mechanical energy in damaging the dummy, instead of getting converted back into blast energy. Similarly when the roof or wall of a building gets blasted off, it thuds to the ground some distance downrange, and the impact causes it to break up. The energy isn't magically returned to the blast wave.
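The ~1,500 J figure quoted above can be reproduced directly from Dr White's imperial units; the unit conversions are standard:

```python
# Kinetic energy removed from the blast wave by the 160 lb standing dummy
# accelerated to 21 ft/s in the 1957 Nevada test described above.
LB_TO_KG = 0.4536
FT_TO_M = 0.3048

m = 160 * LB_TO_KG  # ~72.6 kg
v = 21 * FT_TO_M    # ~6.4 m/s
E = 0.5 * m * v ** 2
print(round(E))     # ~1487 J, consistent with the ~1500 J quoted
```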
When a lot of big buildings get smashed up by the blast, substantial amounts of energy are lost.
The calculations I did gave a range of 1% loss per wood-frame house along a radial line from the bomb, to 5% loss per brick or masonry building. The 1% wood-frame building figure is empirically justified by the data from Hiroshima and Nagasaki.
The result is that blast damage ranges in cities are far smaller than predicted from cube-root scaling based on unobstructed desert and ocean pressure data, particularly for higher yield weapons where predicted damage distances are great (covering large residential areas).
I have a detailed study on this problem, with a breakdown of figures for different types of housing and also an analysis of how the energy loss varies as a function of incident overpressure (this varies for different types of buildings, but it's not a bad approximation to treat the percentage loss as a constant regardless of incident overpressure).
The person at fault here is Samuel Glasstone himself, it seems. He edited out several vital bits of the September 1950 "Effects of Atomic Weapons" (of which he was executive editor, on an editorial board chaired by Joseph O. Hirschfelder and including David B. Parker, Arnold Kramish and Ralph Carlisle Smith), which stated on page 56 (in a section based on work done by John von Neumann and Frederick Reines of Los Alamos):
[Paragraph 3.20] "... As to the detailed description of the target, not only are the structures of odd shape, but they have the additional complicating property of not being rigid. This means that they do not merely deflect the shock wave, but they also absorb energy from it at each reflection.
[Paragraph 3.21] "The removal of energy from the blast in this manner decreases the shock pressure at any given distance from the point of detonation to a value somewhat below that which it would have in the absence of dissipative objects, such as buildings. The presence of such dissipation or diffraction makes it necessary to consider somewhat higher values of the pressure than would be required to produce a desired effect if there were only one structure set by itself on a rigid plane."
Glasstone apparently edited out that section from further versions of the book (such as the 1957 renamed "Effects of Nuclear Weapons") because it contradicted the oversimplified statement on page 137 of the 1950 "Effects of Atomic Weapons", which vaguely claimed that:
"The general experience in Japan provides support for the view ... that the effect of one building in shielding another from blast damage due to an atomic bomb would be small."
Yes, it's about 1% for Japan, but that's missing the whole point!
After the blast covers a radial line through 100 buildings, the cumulative 1% losses amount to a very big loss: (1 - 0.01)^100 = 0.366. Hence the peak overpressure is down by a factor of 2.7 after the blast wave has knocked down 100 wooden houses in a straight line.
By just comparing one house with its neighbour, of course you don't see any difference because the difference is only 1%.
Glasstone probably oversimplified it in later editions because he simply didn't think it through and realise that summing a lot of small percentage energy absorptions is cumulative, and adds up to a substantial reduction in overpressure at great distances in a built-up area.
By focussing on the tiny difference between one building and the next, nothing was observed because the 1% depletion was statistically undetectable in the somewhat chaotic damage effects.
It wasn't until Penney's analysis in 1970, two decades later, that evidence emerged that cumulative depletion of blast energy along a radial line from ground zero made substantial reductions in overpressure and damage at great distances, compared to those predicted from 1950s test data based on unobstructed terrain in deserts and over oceans.
After re-reading this post on 31 May 2008, I want to emphasise that the net outward force effect from air blast is the DYNAMIC PRESSURE of the blast wave (which is a vector because it is directional - blowing radially with zero non-radial pressure) multiplied by the spherical surface area of the blast wave.
The normal overpressure is better called the "non-directional overpressure" or non-dynamic pressure. It is a pressure which acts in all directions (basically like a change in air pressure).
What we are concerned with when calculating the net outward force of a blast wave is the wind or dynamic pressure, which blows in the radial direction.
Consider two 35 Mt bursts (the planned warhead for Titan II) on Moscow: one air burst (at the height maximizing the 15 psi overpressure range) and one ground burst. How much damage would be done by blast and fire?
Surely the Titan II warhead was the roughly 9 Mt bomb tested as 8.9 Mt Hardtack-Oak in 1958?
I don't see how you could have put an extremely heavy 35 Mt warhead on a Titan II missile without exceeding its payload capacity. The missile would have had to be considerably larger to carry a warhead weighing 20 tons or more, and it was already the size of a small space rocket!
As for the effects of blast and heat: in the open, the 50% lethal range at Hiroshima was 1.3 miles, compared to 0.12 mile on the ground floor of modern concrete buildings.
Scaling up this data to 35 megatons by the cube-root law (for diffraction damage and blast induced fires) gives (35,000/15)^{1/3} = a 13-fold increase: to 1.6 miles for 50% mortality in concrete buildings, and 21 miles for people outdoors or in flimsy inflammable Hiroshima wooden houses full of bamboo furnishings, paper screens and easily-blast-overturned charcoal braziers which were cooking breakfast at 8:15am in Hiroshima.
The 21 mile range would probably be reduced substantially by the cumulative energy loss of the blast in destroying successive wooden houses, but the 1.6 miles figure for people in concrete buildings is more relevant for a 35 Mt air burst over modern city buildings. The 50% lethal range for a ground surface burst would be less than 1.6 miles.
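The cube-root scaling used above can be checked in a couple of lines; the 15 kt Hiroshima base yield follows the text's (35,000/15) ratio, and the function name is mine:

```python
# Cube-root scaling of a damage range from the ~15 kt Hiroshima burst to
# a 35 Mt burst, as used in the text for the 50% lethal ranges.
def scaled_range(base_range_miles, base_yield_kt=15.0, new_yield_kt=35000.0):
    return base_range_miles * (new_yield_kt / base_yield_kt) ** (1.0 / 3.0)

print(round(scaled_range(0.12), 2))  # concrete buildings: ~1.59 miles
```

The yield ratio gives a scaling factor of about 13.3, confirming the "13-fold increase" quoted above.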
This system is terrible. My comment far exceeds the volume limit, so I divided my comment into several parts.
Part1.
Thank you. But the 9-megaton yield for the Mk-53 (not B-53 or W-53) was based on Hansen's book; he assumed that the Oak device was tested at full yield, but I think it was actually tested at half yield. Space rockets were actually considered as ICBMs. Hansen gives false yields for the Mk-21 (4 Mt, but this warhead must have had 14-15 Mt, because the Mk-36 had 19 megatons; the Mk-21 was tested in a clean configuration at 4.5 Mt, but that was only 1/3 of full yield, and the clean version of the Mk-36 had a yield of 6 Mt - this version was actually built (converted) and stockpiled in very small numbers, but never deployed). Given that, 4.5/6 * 19 = 14.25 Mt. The Mk-36 was an improved version of the Mk-21, built to a military requirement for 20 Mt, for cratering runways with a 50% probability of producing 50% damage.
Sources for that data :
Document 2: "Report of NSC Ad Hoc Working Group on the Technical Feasibility of a Cessation of Nuclear Testing," 27 March 1958, Hans Bethe, chairman (the second declassification). It can be found at the National Security Archive, George Washington University Library, in the Nuclear Vault archive, section "The Making of the Limited Test Ban Treaty": http://www.gwu.edu/~nsarchive/NSAEBB/NSAEBB94/tb02.pdf. Also: Letter from Captain John H. Morse, Special Assistant to the Chairman, Atomic Energy Commission, to Lewis Strauss, Chairman, Atomic Energy Commission, 14 February 1957, Secret. At the same archive, but in the section "It Is Certain There Will be Many Firestorms" (1),
New Evidence on the Origins of Overkill
National Security Archive Electronic Briefing Book No. 108.
Part 2. Hansen also gives false data about the Mk-41. The Mk-41 was not related to the Poplar device; the Mk-41 was a weaponized version of Bassoon Prime, tested in Redwing Tewa (its potential yield was 25 megatons, 85% fission - UCRL-4725). It had not a simple tamper around the tertiary stage but a multi-layer tamper, to maximize neutron capture and the yield-to-weight ratio. The Mk-41 was only a Class B weapon. Given a weight of 10,500 lbs, the yield-to-weight ratio is 5.3 kt/kg. Some background info: "By early 1956 it was possible to fabricate TN weapons smaller than anything conceived two years earlier. AEC laboratories anticipated they could soon achieve a marked decrease in weight and marked increase in yield in four classes of TN weapons. For example, AEC predicted that a new class A weapon would be built that would weigh not 50,000 pounds, as had its predecessor, but 25,000 pounds, and its yield would be 60 Mt rather than the earlier 20 Mt. For those who had been startled by the destructive power of the 20 kt bombs in 1945 and 1946, it must have been horrifying even to contemplate the possibility of a 60 Mt weapon. Yet in early 1957 AEC laboratories indicated that such a bomb might be devised in the not distant future. And in March 1958 the USAF Chief of Staff asked for a study of the feasibility of employing weapons with yields of 100 to 1,000 Mt. The Air Staff concluded that it might be feasible but not desirable to use a 1,000-megaton weapon. Since lethal radioactivity might not be contained within the confines of an enemy state, and since it might be impractical even to test such a weapon, the Air Force Council decided in April 1959 to postpone establishing a position on the issue." Source: "The Air Force and Strategic Deterrence 1951-1960," USAF Historical Division Liaison Office, by George F. Lemmer, 1967. Formerly Restricted Data, declassified. Try finding it at http://alternatewars.com/WWIII/WWW3.htm. There are also some very nice documents there.
The 60-megaton weapon was the highest yield weapon that could be carried by aircraft; for example, B-70 stores included 1 class A (25,000 pounds), 2 class B (total 20,000 pounds), or 6-8 class D. The 100-1,000 Mt weapons were considered as warheads for very large ICBMs. Initially the Titan 3 family was considered as ICBMs for 100 Mt warheads (for example the Titan 3M with gelled propellant). Very large boosters such as the Saturn V with storable fuel components (and the USAF had plans for a solid Saturn V with the Aerojet AJ-260), Nova and SLS were considered as ICBMs.
I bought and read Hansen's "U. S. Nuclear Weapons" 1988, and it is full of errors. He mixes up facts and make-believe.
Some of the errors which annoyed me the most were in the data he gives from a preliminary document for the percentage of early fallout at the Redwing tests Zuni, Tewa, Flathead and Navajo (although he very usefully gave the correct percentage fission yields for those tests: 15, 87, 73 and 5% respectively), where he states that the water surface bursts deposited about 30% of their fallout locally while for the land surface bursts it was 48-50%. These percentages of local fallout were debunked in the testimony by Dr Kellogg of the RAND Corp in the June 1957 congressional hearings "The Nature of Radioactive Fallout and Its Effects on Man": they were calculated using an incorrect conversion factor between deposited activity and dose rate. When corrected, the percentage in local fallout is much higher, and more similar for both types of burst.
Other errors Hansen made concern the Teller-Ulam mechanism: he assumes that X-rays heat up plastic foam filling the radiation channel, which then turns to plasma and compresses the fusion stage capsule.
Actually, as Glasstone and Dolan's "Effects of Nuclear Weapons" stated since the 1962 edition, the X-rays coming off the primary stage have a very short mean free path and will be blocked. Filling the duct between outer casing and fusion capsule with plastic foam would prevent the H-bomb from working: it would stop the X-rays and turn that energy into a fireball which would diverge outward instead of being focussed inward upon the fusion fuel capsule. This would fail to cause efficient compression because it would turn the bomb into a "layer cake" that just pushes the fusion fuel away from the fission primary stage, instead of efficiently compressing it. Instead of plastic foam filling the X-ray duct, there is empty space to allow the X-rays to be channeled effectively and ablate the fusion capsule surface, so that by recoil it gets compressed.
After interviews with the Ivy-Mike bomb designers, Richard Rhodes corrected the situation on page 486 of "Dark Sun" (Simon and Schuster, N. Y., 1996):
"The flux of soft X-rays from the primary would flow down the inside walls of the casing several microseconds ahead of the material shock wave from the primary. ... the steel [OUTER] casing would need to be lined with some material that would absorb the [soft X-ray] radiation and ionize to a hot plasma which could radiate X-rays [in a different direction, like a mirror] to implode the secondary."
So what the plastic foam does is act as a mirroring surface to reflect back X-rays going toward the outer casing, instead of losing that energy by having it ablate the outer casing. What you want to do is reflect those X-rays back on to the fusion fuel capsule in the middle of the radiation channel, so they ablate that, not the inside of the outer bomb casing! Rhodes on page 501 of "Dark Sun", quoting Mike designer Harold Agnew:
"I remember seeing the guys hammer the big, thick polyethene plastic pieces inside the casing ... They hammered the plastic into the lead with copper nails."
The plastic foam is just one inch thick and is purely a "radiation mirror" for the X-rays, reflecting as much X-ray energy back on to the fuel capsule as possible. The plastic foam doesn't fill the entire casing; it's just a relatively thin (1" thick) layer fixed to the inside of the outer case. Rhodes however was still confused and reverts to Hansen's error on page 492, where he says that the plastic foam "would expand rapidly and deliver the necessary shock [to the fusion fuel capsule]". This is untrue; the physical expansion of plastic foam and its "shocking up" into a shock wave takes far longer and exerts far less pressure than the delivery of X-ray energy.
Plastic foam is vital to make the inside of the outer casing into a "radiation mirror" for X-rays. Instead of ablating a metal surface and wasting the energy by transforming it into mechanical kinetic energy of ablating metal vapor and recoil shock in the outer case, because of its low density (compared to a metal) the plastic foam simply heats up and re-radiates the energy it has absorbed as X-rays. This turns it into an excellent mirror for X-rays, since the incident X-ray energy is mostly re-radiated instead of being turned into mechanical shock wave.
To understand this mechanism in a slightly different context, see Glasstone and Dolan, "The Effects of Nuclear Weapons", 3rd ed., 1977:
"Two factors affect the thermal energy radiated ... First ... a shock wave does not form so readily in the less dense air [or any less dense medium!]"
Plastic foam is able to mirror X-rays because it is able to re-radiate X-ray energy efficiently: its low density slows down the rate of shock wave formation, eliminating that mechanism for energy loss, so plastic foam merely heats up and re-radiates the energy as X-rays.
(The plastic foam "mirroring" of X-ray radiation is vital to the Teller-Ulam design as evidenced by the declassified title of their 9 March 1951 joint Los Alamos LAMS-1225 paper: "On Heterocatalytic Detonations. I. Hydrodynamic Lenses and Radiation Mirrors".)
(The "radiation mirrors" concept is the Teller contribution: this is the key to the whole breakthrough; Ulam's hydrodynamic lenses never worked for the shock wave from the fission primary which is too dense and slow to focus. It is absurd that the one key breakthrough, Teller's radiation mirroring, is completely misunderstood by Rhodes and others, because they don't understand that the difference in density between plastic foam and metal reduces shock wave formation and thus makes plastic into a relatively good radiation mirror.)
Hansen also gives false descriptions of the Hiroshima and Nagasaki devices: the projectile in the gun-type Hiroshima device was a hollow cylinder of U235, not the other way around.
Richard Rhodes also seems to be totally ignorant of nuclear weapons effects when he claims on page 509 that after the Mike shot: "Radioactive mud fell out, followed by heavy rain."
This contradicts the thorough fallout collection data for Eniwetok lagoon in the weapon test report by W. R. Heidt, Jr., E. A. Schuert, et al., WT-615, "Nature, Intensity and Distribution of Fallout from MIKE Shot", Project 5.4a, USNRDL, 1953. The fallout from Mike wasn't mud or heavy rain but fallout particles formed from coral grains.
On the same page, Rhodes falsely claims that the entire crater volume of 80,000,000 tons became global fallout, when in fact only about 1% was fallout and the explosion didn't have enough energy to lift that mass: Dr Alvin C. Graves testified to the 1957 U.S. Congressional Hearings on "The Nature of Radioactive Fallout and Its Effects on Man" part 1, page 71, that approximately "a megaton of energy will lift up a tenth of a megaton of dirt." Hence 10.4 Mt Mike lifted up just ONE million tons of fallout, not 80.
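Graves' rule of thumb gives the figure at the end of the paragraph directly (a sketch of the arithmetic only; the 0.1 megaton of dirt per megaton of yield is from the testimony quoted above):

```python
# Dr Graves' rule of thumb from the 1957 hearings: a megaton of energy
# lifts about a tenth of a megaton (100,000 tons) of dirt.
TONS_PER_MT = 100_000  # tons of dirt lifted per megaton of yield

def dirt_lifted_tons(yield_mt: float) -> float:
    return yield_mt * TONS_PER_MT

print(f"{dirt_lifted_tons(10.4):,.0f} tons")  # ~1,040,000: one million tons, not 80 million
```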
Rhodes on page 542 of "Dark Sun" reveals his complete ignorance of chemistry by claiming that the fallout was "calcium precipitated from vaporized coral". Duh! Did Rhodes ever go to elementary chemistry class and see what happens to a piece of calcium exposed to the air for a few seconds? It oxidizes into calcium oxide with the release of energy!
Even if he didn't know that, Rhodes should have studied the facts on the fallout collected from Mike in weapon test report WT-615 or the congressional testimony by Triffet in 1959: Mike never reduced a million tons of coral to calcium metal in the first place. Long before the time the fireball had expanded enough to engulf that much coral, its temperature was just enough to reduce some of the coral to calcium oxide, CaO, which was then slaked by atmospheric moisture during the many minutes or hours of the long fallout to give calcium hydroxide (slaked lime). This is why the fallout was an irritant, and led to confusion in AEC Chairman Lewis Strauss's statement after the Marshallese and Japanese were contaminated by Bravo in March 1954.
The AEC pointed out that skin irritation during fallout was a chemical effect of the lime in the fallout irritating skin and eyes, and stated on 11 March that the Marshallese had no beta radiation burns. The first beta burns appeared on 14 March, two weeks after exposure, as is usual for beta ray burns to skin. It made the 11 March statement look like a false statement or a cover-up.
Part 3. Amazing, insane, but the 1,000-megaton warhead was not the largest ever considered. In the excellent book "Project Orion: The True Story of the Atomic Spaceship" by George Dyson (Freeman Dyson's son), there are references to two weapons (I might be confused; they might be the same weapon).
1) Small: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent. Its yield must be approximately 9 gigatons.
"A May 1959 Air Force briefing revealed some possible military uses of the Orion vehicle, including reconnaissance and early warning, electronic countermeasures, anti-ICBM, ICBM, orbital or deep space weapons. Finally, there was the Horrible Weapon: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent." These proposals were for the 4,000-ton vehicle.
2) The 20,000-ton vehicle, from this book and atomicrockets.com. One mission considered was the ability to deliver a warhead so large that it would devastate a country one-third the size of the United States.
Given that the territory of the US is roughly 2,000 × 4,000 km, and that the maximum radius for devastation of such an area by thermal radiation would be roughly 1,000 km, the yield must be 50-60 GT, using the formula 0.68 × Y^0.4. So that weapon would be more powerful than all nuclear weapons ever built combined. This bomb was named the DOOMSDAY BOMB and the project DOOMSDAY ORION. Compare the Project Orion battleship: "This one will take the form of a space battle. In 1962 President Kennedy was shown a model of the spaceship as a last-ditch effort to keep the project alive. This abominable concept was for a ship capable of wiping out every Russian city with a population over 200,000 from orbit. Sadly the model has now been lost. Descriptions of it say it was equipped with 5 inch guns for defense, Casaba-Howitzer bombs (a directed-energy nuke), and 500 Minuteman-style 25 megaton bombs. Kennedy, like the scientists involved and any sane person, hated the idea. One year later the Limited Test Ban Treaty was signed and the project was canceled". Given that each 25-megaton Mk-41 weighed 10,500 lbs (4,750 kg), those missiles must have weighed at least 20 tonnes each. I am sure that the DOOMSDAY BOMB and the continent-buster were different weapons: a 9 gigaton blast could not devastate a country one-third the size of the US; a blast of at least 50-60 GT would be needed.
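The comment's estimate can be reproduced numerically (a sketch only; the scaling R = 0.68 × Y^0.4, with Y in kilotons and R in kilometres, is the formula quoted in the comment, and the 1,000 km target radius is the commenter's own assumption):

```python
# Thermal devastation radius scaling quoted in the comment above:
# R (km) = 0.68 * Y**0.4, with Y the yield in kilotons.
def thermal_radius_km(yield_kt: float) -> float:
    return 0.68 * yield_kt ** 0.4

for gigatons in (9, 50, 60):
    y_kt = gigatons * 1e6  # 1 GT = 1,000,000 kt
    print(f"{gigatons} GT -> {thermal_radius_km(y_kt):.0f} km")
```

On these numbers, 9 GT gives a radius of only about 410 km, while 50-60 GT gives roughly 820-880 km, which is the basis for the comment's conclusion that the 9 gigaton continent-buster and the 50-60 GT doomsday bomb must be different weapons.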
All these projects failed to materialise due to political, not technical, factors (especially the disastrous actions of McNamara). Some more information: the Mk-41 was considered as a missile warhead at least three times: 1) as an alternate warhead for NAVAHO; 2) as a warhead aboard the Orion battleship; 3) as a single warhead for the large Pluto (the USAF ramjet design; other proposed armaments were 2 × 32-inch warheads of 10 Mt each, 5 × 21-inch warheads of 5 Mt each, or 16 × 15-inch warheads of 1.5 Mt each). Source: Proceedings of the Nuclear Propulsion Conference, August 15-17, 1962, Naval Postgraduate School, Monterey, California; AEC Division of Technical Information. Try to find this report on the internet. PLUTO weighed only 45,000 lbs and was designed for global strike. Another report covers the small Pluto design: UCRL-ID-125506. Its total weapon load was 3,000-4,000 lbs, with optional configurations ranging from a single 32-inch diameter warhead, to a pair of 21-inch diameter warheads, to as many as six 15-inch diameter ejectable weapons. In all cases the total yield must have been on the order of 10 megatons.
Part 4. So I think the Mk-53 yield is underestimated. There are two possibilities: 1) the total yield was suppressed, but the total fission fraction is the same as for 18 Mt (56%); 2) the fission fraction was understated, from 70-85% down to 55%. The total yield must be in the 12-18 Mt range for two reasons. 1) The Mk-53 was a Class B weapon and must have the same Y/W as the Class A (60 megatons) and Class B (25 megatons) designs; the Mk-36 was of the previous generation, and its Y/W ratio cannot be applied to the Mk-53 on the basis of those documents. 2) CIA estimates for the warheads on the R-36 (SS-9) were based on the Mk-53 and Mk-41: the light warhead was based on the Mk-53 (RV 8,000 lbs, warhead 6,500 lbs, yield 12-18 Mt), and the heavy warhead was based on the Mk-41 (RV weight 13,000 lbs, yield up to 25 megatons).
So I am sure that the Mk-53 had a yield of 12-18 megatons. Given that 6,400 lbs is 2,896 kg: 2,896 × 5.3 = 15.3 Mt. The Mk-53 was intended to have the yield of the Mk-21 but at a much smaller weight.
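The same yield-to-weight arithmetic reproduces the comment's estimate (a sketch; the 5.3 kt/kg ratio and the 6,400 lb warhead weight are the comment's own assumptions):

```python
# Mk-53 yield estimate using the comment's assumed Mk-41 ratio of 5.3 kt/kg.
LB_TO_KG = 0.45359237  # kilograms per pound

weight_kg = 6_400 * LB_TO_KG       # ~2,903 kg (the comment rounds to 2,896)
yield_mt = weight_kg * 5.3 / 1000  # kt/kg * kg = kt; divide by 1,000 for Mt
print(f"~{yield_mt:.1f} Mt")       # ~15.4 Mt, inside the claimed 12-18 Mt range
```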
For the 35 Mt warhead, this data is on the DOE site, Office of Declassification: "Drawing Back the Curtain of Secrecy: Restricted Data Declassification Policy, 1946 to the Present", RDD-1, June 1, 1994, U.S. Department of Energy, Office of Declassification.
Section D. Thermonuclear weapons.
9. The fact that the yield-to-weight ratios of the new class of weapons would be more than twice that which can now be achieved in the design of very high yield weapons using previously developed concepts. (63-1). 10. "The United States, without further testing, can develop a warhead of 50-60 Mt for B-52 delivery." (63-3). 11. "... some improvement in high yield weapons design could be achieved and that new warheads -- for example a 35 Mt warhead for our Titan II -- based on these improvements could be stockpiled with confidence." (63-3). Another source for the 35 Mt warhead is McNamara himself: Time, "Atomic Arsenal", 23 August 1963.
McNamara, while admitting that the treaty, by barring atmospheric testing, would prevent the U.S. from developing a 100-megaton bomb, told the Senators that without any testing the U.S. "can develop a warhead with a yield of 50 to 60 megatons for a B-52 delivery," and with underground tests could develop "a 35-megaton warhead for Titan II." I think that you, Nigel, know about that weapon. The initial plan was for deploying 275 Titan IIs, 2,915 Minutemen (including an advanced model with a 5 Mt warhead) and 1,319 Skybolts. So I think that the 35 Mt warhead was the successor of the Mk-53, and the 55 Mt warhead the successor of the Mk-41. These weapons have a Y/W around 11 kt/kg. There was also consideration of an Advanced Titan II with payload increased to 6 tonnes to carry the 55 Mt warhead. See Desmond Ball, "Politics and Force Levels", 1980, and references therein: on the Advanced Titan, the Hickey Study, referenced in that book; on the 275 Titan IIs and 2,915 Minutemen, "Package Plans for Strategic Retaliatory Forces", 3 July 1961. Imagine if McNamara had never been Secretary of Defense and those plans had been executed. The USAF also studied a thrust-increased Titan II, capable of carrying a 100 Mt (40,000 lbs) warhead.
I would be very happy to have your comments on the DOOMSDAY BOMB, because my calculation is based on Glasstone's book. I think that the 35 Mt warhead used U-235 around the fusion fuel.
Insane is a good word to describe the 1650-ton, 9 gigaton doomsday machine put in orbit over an enemy using Project Orion's nuclear explosion powered spaceship.
Once you get to gigaton yields, the fire hazard can become serious. Wood won't ignite directly for yields below 100 Mt because the flash is so brief that it just ablates the outer 0.1 mm of surface into a shielding cloud of smoke, which prevents fire, as seen in nuclear tests. So you need litter like dry leaves or newspaper to ignite as tinder; that then has to ignite some kind of kindling like cardboard or twigs; and then the kindling, if below wood, will start a fire. Normally this chain is broken and you don't get fires, except as at Hiroshima from blast-overturned charcoal cooking braziers amid paper screens and bamboo furnishings in wooden houses, or with WWII air raid "blackout curtains" (dark coloured curtains which absorb a lot more thermal energy than modern light coloured curtains).
But for gigaton bombs (thousands of megatons), the rate of thermal energy release is too low over large areas to ablate wood; instead the wood is slowly heated and may reach ignition temperature without the need for tinder and kindling in convenient proximity.
In this case, you do get widespread fire hazards, which is what probably caused a climate catastrophe which killed the large cold-blooded dinosaurs (but not smaller cold blooded relatives like tortoises, etc.).
I think that really is an insane kind of weapon, which is why Herman Kahn used such devices as examples of "doomsday" devices in On Thermonuclear War, where the theory of deterrence is applied too far (if something then goes wrong you are then in a real pickle, and make no mistake).
In my latest and possibly last blog post, I've quoted Dyson's book Disturbing the Universe, where he was designing extra-clean (low fission yield percentage) bombs for Project Orion and ended up helping Samuel Cohen's neutron bomb project. Dyson is extremely simplistic in everything, although he achieves a mature viewpoint to some extent by applying his simplistic analysis from more than one direction: he applies simplistic reasoning to both opposing viewpoints, and by combining the results often manages to get a reasoned evaluation of the clearest arguments on each side.
The best example is the contrast between Dyson's account of quantum mechanics in his Scientific American article of 1958, "Innovation in Physics", with the account of his arguments with Richard P. Feynman over Feynman's "path integrals" approach in his 1979 book "Disturbing the Universe". Dyson in the 1958 article says quantum mechanics is purely mathematical with nothing pictorial to understand; Dyson in the 1979 book says that in 1948 after Pocono he and Feynman argued about this with Feynman hitting back and saying Einstein's grand unified theory failed because it was just equations with no mechanistic pictorial physics to it (Feynman's famous "diagrams" of quantum field interactions between fundamental particles).
Project Orion was of course cancelled by President Kennedy after Dyson submitted crazy blueprints to Kennedy for a "Star Wars" battle cruiser spaceship. Kennedy was appalled, cancelled funding, and made sure Project Orion was buried by signing the Atmospheric Nuclear Test Ban Treaty to prevent it ever being tested with nuclear explosives. So NASA went the other way with the plaque on the Moon reading: "We came in Peace."
Maybe if Dyson and his comrades had a bit more insight into marketing ideas, they wouldn't have tried to sell Orion to Kennedy as a space-based warship, but just as a very cheap way to get to Mars, burning up some of the nuclear weapons stockpiles on the way.
Gigaton weapons effects (1000 megatons or more) differ from nuclear weapons below 100 Mt mainly in the thermal and fireball phenomena. At the upper limit of gigaton "doomsday" weapons yields, you get into the kind of global "nuclear winter" phenomena from the K-T impact 65 million years ago which ended the reign of large cold-blooded dinosaurs and gave warm-blooded mammals a chance.
The thermal radiation emission occurs so slowly from yields above a gigaton that it doesn't ablate the surface of wood into a fire-preventing "smoke screen" over large areas like a brief thermal pulse from below 100 Mt. Instead, above a gigaton, the thermal pulse is like a long pulse of extra-intense sunlight which can gradually warm up wood to depth (not just surface heating which causes ablation), and cause wood to ignite directly.
The fireball is also bigger than the 7 km scale height of the atmosphere so that you get massive differences in air density between the top and bottom, causing rapid "ballistic" fireball rise rather than the normal buoyant rise that you get from nuclear tests below 100 Mt at low altitudes.
The many gigatons of the K-T event did cause climatic effects that killed the large cold-blooded dinosaurs and many ocean species which were temperature-sensitive, but it didn't kill cold blooded smaller reptiles or mammals or many species which survived and evolved happily afterwards...
So I think even K-T impact events are exaggerated in their effects. There is evidence that all the large mammals today have evolved from smaller ones left after the K-T impact event. E.g., there were no large mammals 65 million years ago; all the surviving mammals were very small and have since evolved into larger sized mammals. However, the simple, very low technology techniques even mouse sized mammals used to survive the K-T impact could be employed by intelligent people to survive a gigaton explosion.
Like a half filled glass of water, the doom-mongers would view the K-T impact event as an example of the threat of extinction, as if the extinction of crazy big dinosaurs was a bad thing that extrapolates to human extinction threats. Others would see it differently and consider the survival of mammals under such circumstances as evidence of the difficulty in exterminating life and thus the survival possibilities for humans even in the worst events that have ever occurred in the history of the planet.
Because the earth rotates, any global smoke cloud that blocks sunlight and causes "nuclear winter" will be unevenly heated from this factor alone: the sunset and sunrise effects will cause expansion of air and thus winds to unevenly disperse smoke, allowing natural convection to occur, so that rain can be generated.
The burning of vegetation is accompanied by the emission not just of soot and CO_2 but also of water since the mass of most vegetation contains a lot of water, so you will get self-induced rainout when the soot and water vapour rise to high altitudes when the soot absorbs vapour and forms large "black rain" droplets which settle out under gravity, like the black rain at Hiroshima (this is something the doom-mongers are quiet about in the climatic effects context, although happy to hype in false radioactive hazard context; ignoring the low specific radioactivity content of the rain since the radioactive mushroom cloud was blown miles away from the target area half an hour or more before the firestorm even started).
Wind and rainfall will thus disperse and precipitate most of the soot within a week or two. That's a long enough "winter" spell to kill large cold-blooded dinosaurs, which couldn't take shelter because of their size and couldn't metabolize food at low temperatures, so they would simply grind to a halt and die; but it's not long enough to kill the many species which can respond better to low temperatures.
But what yield, approximately, is needed to devastate (ignite) 1/3 of the territory of the US? I think that 9 gigatons is not enough. The doomsday bomb and the continent-buster must be different weapons. There were various Orions; the largest was 8,000,000 tons in weight, with both military and civilian applications.
Apart from Orion, McNamara killed various defensive and offensive systems: Pluto, Dyna-Soar, B-70, WS-125A, Skybolt, AICBM, Advanced Minuteman, restricted deployment of strategic forces, F-108, F-12, Mk-16 (MIRV for Titan II), Sentinel, BAMBI, etc.
"But what a yield aprox. needed to devastate (ignite) 1/3 territory of US?"
Whether the bomb is a space burst or a low altitude burst, at that yield the fireball exceeds the scale height of the atmosphere (7 km) by a large factor so the fireball undergoes ballistic rise as described by Dolan in ENW and CNW. It goes up very quickly, and it radiates for a long time, so it basically radiates from extremely high altitude. It could certainly expose very large areas, although if there were heavy cloud cover between the fireball and the ground, that would mitigate the thermal effects. Air blast would have a very long duration at such yields, but even so it wouldn't be that impressive for a high altitude or space burst owing to the low density of the air: thermal radiation would be the primary effect carrying most of the energy.
If you are asking for the yield needed to "devastate" such an area, you need to define the type of burst (e.g. altitude of burst) and what you mean by "devastate", e.g. what the target is (wood frame Japanese houses with blackout curtains in the windows etc., the flammable 1953 "Encore" nuclear test house full of newspapers with a big window facing ground zero with an unobstructed line-of-sight view, or modern steel and concrete city buildings?).
Many media people and politicians would say that a 1 kt nuclear explosion anywhere in America would "devastate" just about the whole country financially and by fallout contamination, citing the number of hospital beds in the USA compared to the maximum possible numbers of burns casualties, the expense of 100% effective decontamination of large fallout areas, and so on.
Very long thermal pulses result from gigaton yields at high altitude. This means
(1) It becomes possible for the heat pulse to actually cause solid wood to heat up into its depth so it can ignite eventually where the yield is high enough (instead of having merely the outer tenth of a millimetre "blown off" as smoke, without fire).
(2) The long duration of thermal energy delivery (minutes) gives people more time to take cover. Not "ducking and covering" becomes a non-option. Everyone has time to evade a large fraction of the thermal pulse if they have some non-flammable shelter available. Over the widest area (out to the horizon as seen from the edge of the high altitude X-ray "pancake" fireball), the thermal pulse is just like the sun but more intense. So people will be able to avoid injury by taking protective measures that they would take against sunburn, such as going indoors or behind anything that gives some shade.
The biggest nuclear weapon yield I have seen thermal ignition predictions published for is 1,000 Mt (1 gigaton), in volume 1 of Robert U. Ayres's Hudson Institute report HI-519-RR, Environmental Effects of Nuclear Weapons, Fig. 2.1, page 2-3. (The reason why Ayres considered yields up to 1,000 Mt in this report was probably that the director of HI was Herman Kahn, who was interested in "doomsday devices".)
Ayres finds that 1,000 Mt detonated at 36.6 km altitude might produce 7 cal/cm^2 thermal flux at up to 265 km away on a clear day. However, on the previous page Ayres shows that the energy needed for ignition of newspaper increases with yield, from 7 cal/cm^2 at 1 Mt to 11 at 10 Mt and 25 at 100 Mt. Although ignition of wood is possible for gigaton yields due to the long duration of the heat pulse, you still need a lot of energy to achieve ignition temperatures, which limits the distance.
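Ayres's three tabulated thresholds for newspaper ignition (7, 11 and 25 cal/cm^2 at 1, 10 and 100 Mt) rise roughly as a power of yield; the fit below is my own illustration of that trend, not anything taken from Ayres's report:

```python
import math

# Ayres's newspaper ignition thresholds, as quoted above: yield (Mt) -> cal/cm^2.
data = {1: 7, 10: 11, 100: 25}

# Least-squares power-law fit Q = a * Y**b in log-log space (illustrative only).
xs = [math.log10(y) for y in data]
ys = [math.log10(q) for q in data.values()]
n = len(xs)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
a = 10 ** ((sum(ys) - b * sum(xs)) / n)
print(f"Q ~ {a:.1f} * Y^{b:.2f} cal/cm^2 (Y in Mt)")  # roughly Q ~ 6.6 * Y^0.28
```

The rising threshold is why, even though gigaton-range heat pulses are long enough to ignite wood directly, the ignition distance does not grow as fast as simple inverse-square scaling of the thermal flux would suggest.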
A document that is available states that at that time (1957) another Class C weapon was under study: 18 megatons in a 7,000-pound weight. AFSWC, Technical Report on Nuclear Weapon Development, 1957.
"A May 1959 Air Force briefing revealed some "possible military uses of the Orion Vehicle," including reconnaissance and early-warning, electronic countermeasures ("possibility to get a terrific number of jammers over a given area"), anti-ICBM ("possibility of putting many early intercept missiles in orbit awaiting use"), and "ICBM, orbital, or deep space weapons -- orders of magnitude increase in warhead weights -- clustered warheads -- launch platforms, etc." Finally, there was "the Horrible weapon -- 1,650-ton continent buster hanging over the enemy's head as a deterrent.""
The USAF Orion was a special model:
4,000 short tons gross weight.
250 feet in length and 85 feet in diameter.
An ORION ICBM means an ICBM with 2,000 short tons of throw-weight; it would carry a bunch of devices of around 1,000 Mt.
Continent-buster = doomsday bomb.
Weight 1,650 short tons. Yield would be >20,000 megatons.
It would be exploded over the USSR at 400 km altitude, literally turning the USSR into Hiroshima.
How to achieve peace through tested, proved and practical declassified countermeasures against the effects of nuclear weapons, chemical weapons and conventional weapons. Credible deterrence through simple, effective protection against invasions and collateral damage. Discussions of the facts as opposed to inaccurate, misleading lies of the "disarm or be annihilated" political dogma variety. Hiroshima and Nagasaki anti-nuclear propaganda debunked by the hard facts. Walls not wars. Walls bring people together by stopping attacks by "divide and rule" style divisive terrorists, contrary to simplistic Vatican propaganda.
Historically, it has been proved that having weapons is not enough to guarantee a reasonable measure of safety from terrorism and rogue states; countermeasures are also needed, both to make any deterrent credible and to negate or at least mitigate the effects of a terrorist attack. Some people who wear seatbelts die in car crashes; some people who are taken to hospital in ambulances, even in peace-time, die. Sometimes, lifebelts and lifeboats cannot save lives at sea. This lack of a 100% success rate in saving lives doesn't disprove the value of everyday precautions or of hospitals and medicine. Hospitals don't lull motorists into a false sense of security, causing them to drive faster and cause more accidents. Like-minded ‘arguments’ against ABM and civil defense are similarly vacuous.
‘As long as the threat from Iran persists, we will go forward with a missile system that is cost-effective and proven. If the Iranian threat is eliminated, we will have a stronger basis for security, and the driving force for missile-defense construction in Europe will be removed.’
‘The [ABM] treaty was in 1972 ... The theory ... supporting the ABM treaty [which prohibits ABM, thus making nations vulnerable to terrorism] ... that it will prevent an arms race ... is perfect nonsense because we have had an arms race all the time we have had the ABM treaty, and we have seen the greatest increase in proliferation of nuclear weapons that we have ever had. ... So the ABM treaty preventing an arms race is total nonsense. ...
‘The Patriot was not a failure in the Gulf War - the Patriot was one of the things which defeated the Scud and in effect helped us win the Gulf War. One or two of the shots went astray but that is true of every weapon system that has ever been invented. ...
‘President Bush said that we were going ahead with the defensive system but we would make sure that nobody felt we had offensive intentions because we would accompany it by a unilateral reduction of our nuclear arsenal. It seems to me to be a rather clear statement that proceeding with the missile defence system would mean fewer arms of this kind.
‘You have had your arms race all the time that the ABM treaty was in effect and now you have an enormous accumulation and increase of nuclear weapons and that was your arms race promoted by the ABM treaty. Now if you abolish the ABM treaty you are not going to get another arms race - you have got the arms already there - and if you accompany the missile defence construction with the unilateral reduction of our own nuclear arsenal then it seems to me you are finally getting some kind of inducement to reduce these weapons.’
Before the ABM system is in place, and afterwards if ABM fails to be 100% effective in an attack, or is bypassed by terrorists using a bomb in a suitcase or in a ship, civil defense is required and can be effective at saving lives:
‘Paradoxically, the more damaging the effect, that is the farther out its lethality stretches, the more can be done about it, because in the last fall of its power it covers vast areas, where small mitigations will save very large numbers of people.’
‘The purpose of a book is to save people [the] time and effort of digging things out for themselves. ... we have tried to leave the reader with something tangible – what a certain number of calories, roentgens, etc., means in terms of an effect on the human being. ... we must think of the people we are writing for.’
“FY 1997 Plans: ... Provide text to update Glasstone's book, The Effects of Nuclear Weapons, the standard reference for nuclear weapons effects. ... Update the unclassified textbook entitled, The Effects of Nuclear Weapons. ... Continue revision of Glasstone's book, The Effects of Nuclear Weapons, the standard reference for nuclear weapons effects. ... FY1999 Plans ... Disseminate updated The Effects of Nuclear Weapons.”
‘During World War II many large cities in England, Germany, and Japan were subjected to terrific attacks by high-explosive and incendiary bombs. Yet, when proper steps had been taken for the protection of the civilian population and for the restoration of services after the bombing, there was little, if any, evidence of panic. It is the purpose of this book to state the facts concerning the atomic bomb, and to make an objective, scientific analysis of these facts. It is hoped that as a result, although it may not be feasible completely to allay fear, it will at least be possible to avoid panic.’
‘The consequences of a multiweapon nuclear attack would certainly be grave ... Nevertheless, recovery should be possible if plans exist and are carried out to restore social order and to mitigate the economic disruption.’
‘Suppose the bomb dropped on Hiroshima had been 1,000 times as powerful ... It could not have killed 1,000 times as many people, but at most the entire population of Hiroshima ... [regarding the hype about various nuclear "overkill" exaggerations] there is enough water in the oceans to drown everyone ten times.’
In 1996, half a century after the nuclear detonations, cancer data for 86,572 Hiroshima and Nagasaki survivors was published by D. A. Pierce et al. of the Radiation Effects Research Foundation, RERF (Radiation Research vol. 146, pp. 1-27; Science vol. 272, pp. 632-3). Of these survivors, 60% had received bomb doses of over 5 mSv (500 millirem in old units). They suffered 4,741 cancers, of which only 420 were due to radiation: 85 leukemias and 335 solid cancers.
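As a quick sanity check on these figures (a sketch using only the numbers quoted above), the radiation-attributable fraction of cancers in the survivor cohort was small:

```python
# Sanity check on the RERF survivor figures (Pierce et al., 1996) quoted above.
survivors = 86572
total_cancers = 4741
leukemias_from_radiation = 85
solid_cancers_from_radiation = 335
radiation_cancers = leukemias_from_radiation + solid_cancers_from_radiation
assert radiation_cancers == 420  # matches the quoted total

# Fraction of all cancers in the cohort attributable to bomb radiation:
attributable_fraction = radiation_cancers / total_cancers
print(f"{attributable_fraction:.1%} of cancers attributable to radiation")  # about 8.9%

# Radiation-induced cancers as a fraction of the whole cohort:
print(f"{radiation_cancers / survivors:.2%} of survivors")  # about 0.49%
```

So under 9% of the cancers in the irradiated cohort, and under half of one percent of the survivors, reflect radiation-induced cancer.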
‘Today we have a population of 2,383 [radium dial painter] cases for whom we have reliable body content measurements. . . . All 64 bone sarcoma [cancer] cases occurred in the 264 cases with more than 10 Gy [1,000 rads], while no sarcomas appeared in the 2,119 radium cases with less than 10 Gy.’
‘... it is important to note that, given the effects of a few seconds of irradiation at Hiroshima and Nagasaki in 1945, a threshold near 200 mSv may be expected for leukemia and some solid tumors. [Sources: UNSCEAR, Sources and Effects of Ionizing Radiation, New York, 1994; W. F. Heidenreich, et al., Radiat. Environ. Biophys., vol. 36 (1999), p. 205; and B. L. Cohen, Radiat. Res., vol. 149 (1998), p. 525.] For a protracted lifetime natural exposure, a threshold may be set at a level of several thousand millisieverts for malignancies, of 10 grays for radium-226 in bones, and probably about 1.5-2.0 Gy for lung cancer after x-ray and gamma irradiation. [Sources: G. Jaikrishan, et al., Radiation Research, vol. 152 (1999), p. S149 (for natural exposure); R. D. Evans, Health Physics, vol. 27 (1974), p. 497 (for radium-226); H. H. Rossi and M. Zaider, Radiat. Environ. Biophys., vol. 36 (1997), p. 85 (for radiogenic lung cancer).] The hormetic effects, such as a decreased cancer incidence at low doses and increased longevity, may be used as a guide for estimating practical thresholds and for setting standards. ...
‘Though about a hundred of the million daily spontaneous DNA damages per cell remain unrepaired or misrepaired, apoptosis, differentiation, necrosis, cell cycle regulation, intercellular interactions, and the immune system remove about 99% of the altered cells. [Source: R. D. Stewart, Radiation Research, vol. 152 (1999), p. 101.] ...
‘[Due to the Chernobyl nuclear accident in 1986] as of 1998 (according to UNSCEAR), a total of 1,791 thyroid cancers in children had been registered. About 93% of the youngsters have a prospect of full recovery. [Source: C. R. Moir and R. L. Telander, Seminars in Pediatric Surgery, vol. 3 (1994), p. 182.] ... The highest average thyroid doses in children (177 mGy) were accumulated in the Gomel region of Belarus. The highest incidence of thyroid cancer (17.9 cases per 100,000 children) occurred there in 1995, which means that the rate had increased by a factor of about 25 since 1987.
‘This rate increase was probably a result of improved screening [not radiation!]. Even then, the incidence rate for occult thyroid cancers was still a thousand times lower than it was for occult thyroid cancers in nonexposed populations (in the US, for example, the rate is 13,000 per 100,000 persons, and in Finland it is 35,600 per 100,000 persons). Thus, given the prospect of improved diagnostics, there is an enormous potential for detecting yet more [fictitious] "excess" thyroid cancers. In a study in the US that was performed during the period of active screening in 1974-79, it was determined that the incidence rate of malignant and other thyroid nodules was greater by 21-fold than it had been in the pre-1974 period. [Source: Z. Jaworowski, 21st Century Science and Technology, vol. 11 (1998), issue 1, p. 14.]’
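The ratios in the quotation above can be checked arithmetically (a sketch using only the numbers quoted):

```python
# Check the Chernobyl thyroid cancer ratios quoted above (Jaworowski's figures).
gomel_incidence = 17.9      # cases per 100,000 children, Gomel region, 1995
factor_increase = 25        # quoted increase factor since 1987
implied_1987_rate = gomel_incidence / factor_increase
print(f"implied 1987 rate: {implied_1987_rate:.2f} per 100,000")  # ~0.72 per 100,000

# Occult (symptomless, found only by screening) thyroid cancer rates quoted:
occult_us = 13000           # per 100,000 persons (US)
occult_finland = 35600      # per 100,000 persons (Finland)
print(f"US occult rate / Gomel incidence: {occult_us / gomel_incidence:.0f}")       # ~726
print(f"Finnish occult rate / Gomel incidence: {occult_finland / gomel_incidence:.0f}")  # ~1989
# i.e. the Gomel incidence was indeed 'about a thousand times lower' than
# the occult rates found by intensive screening of nonexposed populations.
```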
‘Professor Edward Lewis used data from four independent populations exposed to radiation to demonstrate that the incidence of leukemia was linearly related to the accumulated dose of radiation. ... Outspoken scientists, including Linus Pauling, used Lewis’s risk estimate to inform the public about the danger of nuclear fallout by estimating the number of leukemia deaths that would be caused by the test detonations. In May of 1957 Lewis’s analysis of the radiation-induced human leukemia data was published as a lead article in Science magazine. In June he presented it before the Joint Committee on Atomic Energy of the US Congress.’ – Abstract of thesis by Jennifer Caron, Edward Lewis and Radioactive Fallout: the Impact of Caltech Biologists Over Nuclear Weapons Testing in the 1950s and 60s, Caltech, January 2003.
Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), discrediting the pseudo-science from geneticist Edward Lewis on pages 61 and 78-79:
‘... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls. …
‘What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 × 10⁻⁶ like the others in the table. But the value of 2 × 10⁻⁶ was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year. ...
‘All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis’s work as others have done, “It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation” [Leading article in Science, vol. 125, p. 963, 1957]. This is just wishful journalese.
‘The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different.’
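Loutit's description of Lewis's Brooklyn background-radiation estimate can be reproduced arithmetically (a sketch using only the figures in the quotation above):

```python
# Reproduce Lewis's Brooklyn background-radiation leukemia estimate,
# as described (and criticised) by Loutit above.
risk = 2e-6           # the merely 'adopted' risk per person per rad per year of life
mean_age = 33.7       # average age of Brooklyn residents, years
dose_low, dose_high = 0.1, 0.2   # natural background dose, rad per year

low = risk * mean_age * dose_low * 1e6    # cases per million per year
high = risk * mean_age * dose_high * 1e6
print(f"deduced: {low:.1f} to {high:.1f} cases per million per year")  # ~7 to ~13

observed = 65  # observed Brooklyn leukemia rate, cases per million per year
print(f"i.e. {low / observed:.0%} to {high / observed:.0%} of the observed rate")  # ~10-20%
```

The point of Loutit's criticism stands out in the code: the 2 × 10⁻⁶ coefficient is an input assumption, not an output of the Brooklyn data, so the deduced 7-13 cases per million per year is circular.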
‘From the earlier studies of radiation-induced mutations, made with fruitflies [by Nobel Laureate Hermann J. Muller and other geneticists who worked on plants, who falsely hyped their insect and plant data as valid for mammals like humans during the June 1957 U.S. Congressional Hearings on fallout effects], it appeared that the number (or frequency) of mutations in a given population ... is proportional to the total dose ... More recent experiments with mice, however, have shown that these conclusions need to be revised, at least for mammals. [Mammals are biologically closer to humans, in respect to DNA repair mechanisms, than short-lived insects whose life cycles are too short to have forced the evolutionary development of advanced DNA repair mechanisms, unlike mammals that need to survive for decades before reproducing.] When exposed to X-rays or gamma rays, the mutation frequency in these animals has been found to be dependent on the exposure (or dose) rate ...
‘At an exposure rate of 0.009 roentgen per minute [0.54 R/hour], the total mutation frequency in female mice is indistinguishable from the spontaneous frequency. [Emphasis added.] There thus seems to be an exposure-rate threshold below which radiation-induced mutations are absent ... with adult female mice ... a delay of at least seven weeks between exposure to a substantial dose of radiation, either neutrons or gamma rays, and conception causes the mutation frequency in the offspring to drop almost to zero. ... recovery in the female members of the population would bring about a substantial reduction in the 'load' of mutations in subsequent generations.’
George Bernard Shaw cynically explains groupthink brainwashing bias:
‘We cannot help it because we are so constituted that we always believe finally what we wish to believe. The moment we want to believe something, we suddenly see all the arguments for it and become blind to the arguments against it. The moment we want to disbelieve anything we have previously believed, we suddenly discover not only that there is a mass of evidence against, but that this evidence was staring us in the face all the time.’
From the essay titled ‘What is Science?’ by Professor Richard P. Feynman, presented at the fifteenth annual meeting of the National Science Teachers Association, 1966 in New York City, and published in The Physics Teacher, vol. 7, issue 6, 1968, pp. 313-20:
‘... great religions are dissipated by following form without remembering the direct content of the teaching of the great leaders. In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers.
‘We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on, but these do not thereby become established science, established knowledge. They are merely an imitative form of science analogous to the South Sea Islanders’ airfields - radio towers, etc., made out of wood. The islanders expect a great airplane to arrive. They even build wooden airplanes of the same shape as they see in the foreigners' airfields around them, but strangely enough, their wood planes do not fly. The result of this pseudoscientific imitation is to produce experts, which many of you are. ... you teachers, who are really teaching children at the bottom of the heap, can maybe doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.’
Richard P. Feynman, ‘This Unscientific Age’, in The Meaning of It All, Penguin Books, London, 1998, pages 106-9:
‘Now, I say if a man is absolutely honest and wants to protect the populace from the effects of radioactivity, which is what our scientific friends often say they are trying to do, then he should work on the biggest number, not on the smallest number, and he should try to point out that the [natural cosmic] radioactivity which is absorbed by living in the city of Denver is so much more serious [than the smaller doses from nuclear explosions] ... that all the people of Denver ought to move to lower altitudes.'
Feynman is not making a point about low level radiation effects, but about the politics of ignoring the massive natural background radiation dose while provoking hysteria over the much smaller measured doses from fallout pollution. Why is the anti-nuclear lobby so concerned with banning nuclear energy - which is impossible even in principle, since most of our nuclear radiation comes from the sun and from the supernova debris that contaminated the Earth when the solar system formed circa 4,540 million years ago - when it could achieve much bigger dose reductions for the population by concentrating on the bigger radiation source, natural background radiation? Natural background radiation can be shielded by air: for example, by moving the populations of high-altitude cities to lower altitudes, where there is more air between the people and outer space, or by banning high-altitude jet aircraft. The anti-nuclear lobby, as Feynman noted back in the 1960s, did not crusade to reduce the bigger dose from background radiation; instead it chose to campaign against the much smaller doses from fallout pollution. Feynman's argument is still falsely interpreted today as a political statement, when it is actually exposing pseudo-science and countering political propaganda, and it is still ignored by the media. A similar point was made by Senator Hickenlooper on page 1060 of the May-June 1957 U.S. Congressional Hearings before the Special Subcommittee on Radiation of the Joint Committee on Atomic Energy, The Nature of Radioactive Fallout and Its Effects on Man:
‘I presume all of us would earnestly hope that we never had to test atomic weapons ... but by the same token I presume that we want to save thousands of lives in this country every year and we could just abolish the manufacture of [road accident causing] automobiles ...’
Dihydrogen monoxide is a potentially very dangerous chemical containing hydrogen and oxygen which has caused numerous severe burns by scalding and deaths by drowning, contributes to the greenhouse effect, accelerates corrosion and rusting of many metals, and contributes to the erosion of our natural landscape: 'Dihydrogen monoxide (DHMO) is colorless, odorless, tasteless, and kills uncounted thousands of people every year. Most of these deaths are caused by accidental inhalation of DHMO, but the dangers of dihydrogen monoxide do not end there. Prolonged exposure to its solid form causes severe tissue damage. Symptoms of DHMO ingestion can include excessive sweating and urination, and possibly a bloated feeling, nausea, vomiting and body electrolyte imbalance. For those who have become dependent, DHMO withdrawal means certain death.'
Protein P53, discovered only in 1979, is encoded by gene TP53, which occurs on human chromosome 17. P53 also occurs in other mammals, including mice, rats and dogs. P53 is one of the proteins which continually repair breaks in DNA, which breaks easily at body temperature: the DNA in each cell of the human body suffers at least two single strand breaks every second, and one double strand break (i.e. a break of the complete double helix) at least once every 2 hours. (Some 5% of radiation-induced DNA breaks are double strand breaks, whereas only 0.007% of spontaneous DNA breaks at body temperature are double strand breaks.) Cancer occurs when several breaks in DNA happen by chance to occur at nearly the same time, giving several loose strand ends at once, which repair proteins like P53 then rejoin incorrectly, causing a mutation which can be proliferated somatically. This cannot occur when only one break is present, because only two loose ends are produced, and P53 will reattach them correctly. But if low-LET ionising radiation levels are increased to a certain extent, causing more single strand breaks, P53 works faster and is able to deal with the faster rate of breaks as they occur, so that multiple broken strand ends do not arise. This prevents DNA strands being repaired incorrectly, and so prevents cancer - a result of mutation caused by faults in DNA - from arising. Too much radiation, of course, overloads the P53 repair mechanism: it can no longer repair breaks as they occur, multiple breaks accumulate, loose ends of DNA are wrongly reconnected, and the cancer risk rises. The situation is analogous to sparks falling near gasoline:
1. DNA-damaging free radicals are equivalent to a source of sparks which is always present naturally.
2. Cancer is equivalent to the fire you get if the sparks are allowed to ignite the gasoline, i.e. if the free radicals are allowed to damage DNA without the damage being repaired.
3. Protein P53 is equivalent to a fire suppression system which is constantly damping out the sparks, or repairing the damaged DNA so that cancer doesn’t occur.
In this way of thinking, the ‘cause’ of cancer will be down to a failure of a DNA repairing enzyme like protein P53 to repair the damage.
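The break-rate figures quoted earlier (two single strand breaks per second, one double strand break every 2 hours) are consistent with the stated 0.007% spontaneous double strand fraction, and the 'several breaks at nearly the same time' mechanism can be sketched with Poisson statistics. This is a toy illustration, not a biological simulation; the repair-window length is an assumed, purely illustrative parameter:

```python
import math

# 1. Consistency check on the spontaneous break rates quoted above.
ssb_per_second = 2.0                 # single strand breaks per cell per second
dsb_per_second = 1.0 / (2 * 3600)   # one double strand break per 2 hours
dsb_fraction = dsb_per_second / ssb_per_second
print(f"spontaneous double strand fraction: {dsb_fraction:.3%}")  # ~0.007%, as quoted

# 2. Toy model: misrepair requires at least 2 unrepaired breaks coexisting.
# Assume (illustratively) each break stays open for a repair window tau, so
# the expected number of simultaneously open breaks is rate * tau, and the
# chance of 2 or more coexisting follows from the Poisson distribution.
def p_two_or_more(rate_per_s, tau_s):
    lam = rate_per_s * tau_s
    return 1.0 - math.exp(-lam) - lam * math.exp(-lam)

tau = 0.01  # assumed repair window, seconds (hypothetical parameter)
for rate in (2.0, 20.0, 200.0):
    print(f"break rate {rate:6.1f}/s -> P(>=2 open) = {p_two_or_more(rate, tau):.2e}")
# The coincidence probability rises much faster than linearly with the break
# rate, which is the nonlinearity the repair argument above depends on.
```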
'For the mindset that engendered and enables this situation, which jeopardizes the existence of the United States as a nation as well as the lives of millions of its citizens, some American physicians and certain prestigious medical organizations bear a heavy responsibility.
Charles J. Hitch and Roland B. McKean of the RAND Corporation in their 1960 book The Economics of Defense in the Nuclear Age, Harvard University Press, Massachusetts, pp. 310-57:
‘With each side possessing only a small striking force, a small amount of cheating would give one side dominance over the other, and the incentive to cheat and prepare a preventative attack would be strong ... With each side possessing, say, several thousand missiles, a vast amount of cheating would be necessary to give one side the ability to wipe out the other’s striking capability. ... the more extensive a disarmament agreement is, the smaller the force that a violator would have to hide in order to achieve complete domination. Most obviously, “the abolition of the weapons necessary in a general or ‘unlimited’ war” would offer the most insuperable obstacles to an inspection plan, since the violator could gain an overwhelming advantage from the concealment of even a few weapons.’
Disarmament after World War I caused the following problem which led to World War II (reported by Winston S. Churchill in the London Daily Express newspaper of November 1, 1934):
‘Germany is arming secretly, illegally and rapidly. A reign of terror exists in Germany to keep secret the feverish and terrible preparations they are making.’
British Prime Minister Thatcher's address to the United Nations General Assembly on disarmament on 23 June 1982, where she pointed out that in the years since the nuclear attacks on Hiroshima and Nagasaki, 10 million people had been killed by 140 non-nuclear conflicts:
‘The fundamental risk to peace is not the existence of weapons of particular types. It is the disposition on the part of some states to impose change on others by resorting to force against other nations ... Aggressors do not start wars because an adversary has built up his own strength. They start wars because they believe they can gain more by going to war than by remaining at peace.’
J. D. Culshaw, the then Director of the U.K. Home Office Scientific Advisory Branch, stated in his article in the Scientific Advisory Branch journal Fission Fragments, September 1972 (issue No. 19), classified 'Restricted':
'Apart from those who don't want to know or can't be bothered, there seem to be three major schools of thought about the nature of a possible Third World War ...
* 'The first group think of something like World War II but a little worse ...
* '... the second of World War II but very much worse ...
* 'and the third group think in terms of a catastrophe ...
'When the Armageddon concept is in favour, the suggestion that such problems exist leads to "way out" research on these phenomena, and it is sufficient to mention a new catastrophic threat [e.g., 10 years later this was done by Sagan with "nuclear winter" hype, which turned out to be fake because modern concrete cities can't produce firestorms like 1940s wooden-built areas of Hamburg, Dresden and Hiroshima] to stimulate research into the possibilities of it arising. The underlying appeal of this concept is that if one could show that the execution of all out nuclear, biological or chemical warfare would precipitate the end of the world, no one but a mad man would be prepared to initiate such a war. [However, as history proves, plenty of mad men end up gaining power and leading countries into wars.]'
J. K. S. Clayton, then Director of the U.K. Home Office Scientific Advisory Branch, stated in his introduction, entitled The Challenge - Why Home Defence?, to the 1977 Home Office Scientific Advisory Branch Training Manual for Scientific Advisers:
'Since 1945 we have had nine wars - in Korea, Malaysia and Vietnam, between China and India, China and Russia, India and Pakistan and between the Arabs and Israelis on three occasions. We have had confrontations between East and West over Berlin, Formosa and Cuba. There have been civil wars or rebellions in no less than eleven countries and invasions or threatened invasions of another five. Whilst it is not suggested that all these incidents could have resulted in major wars, they do indicate the aptitude of mankind to resort to a forceful solution of its problems, sometimes with success. ...'
It is estimated that Mongol invaders exterminated 35 million Chinese between 1311 and 1340, without modern weapons. Communist China killed 26.3 million dissenters between 1949 and May 1965, according to detailed data compiled by the Russians on 7 April 1969. The Soviet communist dictatorship killed 40 million dissenters, mainly owners of small farms, between 1917 and 1959. Conventional (non-nuclear) air raids on Japan killed 600,000 during World War II. The single incendiary air raid on Tokyo on 10 March 1945 killed 140,000 people (more than the total for the nuclear bombs on Hiroshima and Nagasaki combined), at far less than the $2 billion expense of the Hiroshima and Nagasaki nuclear bombs! Non-nuclear air raids on Germany during World War II killed 593,000 civilians. The argument that the enemy will continue stocking megaton fallout weapons if we go to cleaner weapons is irrelevant for deterrence, since we are not planning to start a war, merely to credibly deter invasions. You should not lower your standards of warfare to those of your enemy to appease groupthink taboos, or you will end up like Britain's leaders in the 1930s, trying to collaborate with fascists for popular applause.
Lord Hailsham of Saint Marylebone: ‘My Lords, if we are going into the question of lethality of weapons and seek thereby to isolate the nuclear as distinct from the so-called conventional range, is there not a danger that the public may think that Vimy, Passchendaele and Dresden were all right—sort of tea parties—and that nuclear war is something which in itself is unacceptable?’
Lord Trefgarne: ‘My Lords, the policy of making Europe, or the rest of the world, safe for conventional war is not one that I support.’
Mr. Bill Walker (Tayside, North): ‘I remind the House that more people died at Stalingrad than at Hiroshima or Nagasaki. Yet people talk about fighting a conventional war in Europe as if it were acceptable. One rarely sees demonstrations by the so-called peace movement against a conventional war in Europe, but it could be nothing but ghastly and horrendous. The casualties would certainly exceed those at Stalingrad, and that cannot be acceptable to anyone who wants peace’
On 29 October 1982, Thatcher stated of the Berlin Wall: ‘In every decade since the war the Soviet leaders have been reminded that their pitiless ideology only survives because it is maintained by force. But the day comes when the anger and frustration of the people is so great that force cannot contain it. Then the edifice cracks: the mortar crumbles ... one day, liberty will dawn on the other side of the wall.’
On 22 November 1990, she said: ‘Today, we have a Europe ... where the threat to our security from the overwhelming conventional forces of the Warsaw Pact has been removed; where the Berlin Wall has been torn down and the Cold War is at an end. These immense changes did not come about by chance. They have been achieved by strength and resolution in defence, and by a refusal ever to be intimidated.’
‘... peace cannot be guaranteed absolutely. Nobody can be certain, no matter what policies this or any other Government were to adopt, that the United Kingdom would never again be attacked. Also we cannot tell what form such an attack might take. Current strategic thinking suggests that if war were to break out it would start with a period of conventional hostilities of uncertain duration which might or might not escalate to nuclear conflict. ... while nuclear weapons exist there must always be a chance, however small, that they will be used against us [like gas bombs in World War II]. ... as a consequence of war between other nations in which we were not involved fall out from nuclear explosions could fall on a neutral Britain. ... conventional war is not the soft option that is sometimes suggested. It is also too easily forgotten that in World War II some 50 million people died and that conventional weapons have gone on killing people ever since 1945 without respite.’ – The Minister of State, Scottish Office (Lord Gray of Contin), House of Lords debate on Civil Defence (General Local Authority Functions) Regulations, Hansard, vol. 444, cc. 523-49, 1 November 1983.
‘All of us are living in the light and warmth of a huge hydrogen bomb, 860,000 miles across and 93 million miles away, which is in a state of continuous explosion.’ - Dr Isaac Asimov.
‘Dr Edward Teller remarked recently that the origin of the earth was somewhat like the explosion of the atomic bomb...’ – Dr Harold C. Urey, The Planets: Their Origin and Development, Yale University Press, New Haven, 1952, p. ix.
‘But compared with a supernova a hydrogen bomb is the merest trifle. For a supernova is equal in violence to about a million million million million hydrogen bombs all going off at the same time.’ – Sir Fred Hoyle (1915-2001), The Nature of the Universe, Pelican Books, London, 1963, p. 75.
‘In fact, physicists find plenty of interesting and novel physics in the environment of a nuclear explosion. Some of the physical phenomena are valuable objects of research, and promise to provide further understanding of nature.’ – Dr Harold L. Brode, The RAND Corporation, ‘Review of Nuclear Weapons Effects,’ Annual Review of Nuclear Science, Volume 18, 1968, pp. 153-202.
‘It seems that similarities do exist between the processes of formation of single particles from nuclear explosions and formation of the solar system from the debris of a [4 × 10²⁸ megatons of TNT equivalent, type Ia] supernova explosion. We may be able to learn much more about the origin of the earth, by further investigating the process of radioactive fallout from the nuclear weapons tests.’ – Dr Paul K. Kuroda (1917-2001), University of Arkansas, ‘Radioactive Fallout in Astronomical Settings: Plutonium-244 in the Early Environment of the Solar System,’ pages 83-96 of Radionuclides in the Environment: A Symposium Sponsored By the Division of Nuclear Chemistry and Technology At the 155th Meeting of the American Chemical Society, San Francisco, California, April 1-3, 1968, edited by Symposium Chairman Dr Edward C. Freiling (1922-2000) of the U.S. Naval Radiological Defense Laboratory, Advances in Chemistry Series No. 93, American Chemical Society, Washington, D.C., 1970.
Dr Paul K. Kuroda (1917-2001) in 1956 correctly predicted the existence of water-moderated natural nuclear reactors in flooded uranium ore seams, which were discovered in 1972 by French physicist Francis Perrin in three ore deposits at Oklo in Gabon, where sixteen sites operated as natural nuclear reactors with self-sustaining nuclear fission 2,000 million years ago, each lasting several hundred thousand years, averaging 100 kW. The radioactive waste they generated remained in situ for a period of 2,000,000,000 years without escaping. They were discovered during investigations into why the U-235 content of the uranium in the ore was only 0.7171% instead of the normal 0.7202%. Some of the ore, in the middle of the natural reactors, had a U-235 isotopic abundance of just 0.440%. Kuroda's brilliant paper is entitled, 'On the Nuclear Physical Stability of the Uranium Minerals', published in the Journal of Chemical Physics, vol. 25 (1956), pp. 781–782 and 1295–1296.
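Why could natural reactors operate 2,000 million years ago but not today? Because U-235 decays much faster than U-238, natural uranium was then enriched to several percent U-235, comparable to modern reactor fuel. A back-of-envelope check (a sketch, assuming the standard half-lives of 704 million years for U-235 and 4,468 million years for U-238):

```python
# Estimate the U-235 enrichment of natural uranium 2,000 million years ago,
# when the Oklo natural reactors operated. Standard half-lives assumed.
half_life_u235 = 704.0    # million years
half_life_u238 = 4468.0   # million years
t = 2000.0                # million years ago

u235_today = 0.0072       # present-day U-235 atom fraction of natural uranium

# Run the radioactive decay backwards: each isotope was more abundant
# in the past by a factor of 2^(t / half-life).
u235 = u235_today * 2 ** (t / half_life_u235)
u238 = (1 - u235_today) * 2 ** (t / half_life_u238)
enrichment = u235 / (u235 + u238)
print(f"U-235 enrichment 2,000 Myr ago: {enrichment:.1%}")  # a few percent
# Several percent is roughly the enrichment of modern light-water reactor
# fuel, which is why ordinary groundwater could then moderate a chain reaction.
```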
A type Ia supernova explosion, always yielding 4 × 10²⁸ megatons of TNT equivalent, results from the critical mass effect of the collapse of a white dwarf as soon as its mass exceeds 1.4 solar masses due to matter falling in from a companion star. The degenerate electron gas in the white dwarf is then no longer able to support the pressure from the weight of gas, which collapses, thereby releasing enough gravitational potential energy as heat and pressure to cause the fusion of carbon and oxygen into heavy elements, creating massive amounts of radioactive nuclides, particularly the intensely radioactive nickel-56; half of all other heavy nuclides (including uranium and heavier) are also produced by the 'R' (rapid) process of successive neutron captures by fusion products in supernova explosions. Type Ia supernovae occur typically every 400 years in the Milky Way galaxy. On 4 July 1054, Chinese astronomers observed in the sky (without optical instruments) the bright supernova in the constellation Taurus which is still visible today, as the Crab Nebula, through telescopes. The Crab Nebula debris now has a diameter of 7 light years and is still expanding at 800 miles/second. The supernova debris shock wave triggers star formation when it encounters hydrogen gas in space, by compressing it and seeding it with debris; bright stars are observed in the Orion Halo, the 300 light year diameter remains of a supernova. It is estimated that when the solar system was forming 4,540 million years ago, a supernova occurred around 100 light years away, and its heavy radioactive debris shock wave expanded at 1,000 miles/second. Most of the heavy elements in the Earth and in people, including iron, silicon and calcium, are the stable end products of originally radioactive decay chains from the space burst fallout of a 7 × 10²⁶ megaton thermonuclear explosion, created by fusion and successive neutron captures after the implosion of a white dwarf: a supernova explosion.
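The figures above can be cross-checked in SI units (a sketch; 1 megaton of TNT = 4.184 × 10¹⁵ joules is the standard definition):

```python
# Cross-check the quoted supernova yield and Crab Nebula expansion in SI units.
MEGATON_J = 4.184e15                          # joules per megaton of TNT

type_ia_megatons = 4e28                       # quoted type Ia yield, megatons
type_ia_joules = type_ia_megatons * MEGATON_J
print(f"type Ia yield: {type_ia_joules:.1e} J")   # ~1.7e44 J, the canonical ~1e44 J scale

# Crab Nebula: distance covered at the quoted 800 miles/second in the
# roughly 960 years since AD 1054, assuming constant speed.
speed_km_s = 800 * 1.609                      # km/s
seconds = 960 * 3.156e7                       # ~960 years in seconds
light_year_km = 9.461e12
distance_ly = speed_km_s * seconds / light_year_km
print(f"distance at constant speed: ~{distance_ly:.1f} light years")
# ~4 light years: the same order as the quoted 7 light year diameter.
```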
How would a 10⁵⁵ megaton hydrogen bomb explosion differ from the big bang? Ignorant answers biased in favour of curved spacetime (ignoring quantum gravity!) abound, such as claims that explosions can’t take place in ‘outer space’ (disagreeing with the facts from nuclear space bursts by Russia and America in 1962, not to mention natural supernova explosions in space!) and that explosions produce sound waves in air by definition! There are indeed major differences in the nuclear reactions between the big bang and a nuclear bomb. But it is helpful to notice the solid physical fact that implosion systems suggest the mechanism of gravitation: in implosion, TNT is well known to produce an inward force on a bomb core, but Newton's 3rd law says there is an equal and opposite reaction force outward. In fact, you can’t have a radially outward force without an inward reaction force! It’s the rocket principle. The rocket accelerates (with force F = ma) forward by virtue of the recoil from accelerating the exhaust gas (with force F = -ma) in the opposite direction! Nothing massive accelerates without an equal and opposite reaction force. Applying this fact to the measured 6 × 10⁻¹⁰ m/s² ~ Hc cosmological acceleration of matter radially outward from observers in the universe, which was predicted accurately in 1996 and later observationally discovered in 1999 (by Perlmutter, et al.), we find an outward force F = ma and an inward reaction force by the 3rd law. The inward force allows quantitative predictions, and is mediated by gravitons, predicting gravitation in a checkable way (unlike string theory, which is just a landscape of 10⁵⁰⁰ different perturbative theories and so can’t make any falsifiable predictions about gravity).
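The quoted acceleration scale can be checked against the Hubble parameter (a sketch, assuming H ≈ 70 km/s/Mpc; the product Hc is the acceleration scale claimed above):

```python
# Order-of-magnitude check of the quoted cosmological acceleration a ~ Hc.
H_km_s_Mpc = 70.0          # assumed Hubble parameter, km/s per megaparsec
Mpc_m = 3.086e22           # metres per megaparsec
c = 2.998e8                # speed of light, m/s

H_per_s = H_km_s_Mpc * 1000 / Mpc_m   # Hubble parameter in 1/s
a = H_per_s * c                       # m/s^2
print(f"H = {H_per_s:.2e} /s, Hc = {a:.1e} m/s^2")
# Hc ~ 7e-10 m/s^2, the same order as the 6 x 10^-10 m/s^2 figure quoted above.
```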
So it seems as if nuclear explosions do indeed provide helpful analogies to natural features of the world, and the mainstream lambda-CDM model of cosmology - with its force-fitted unobserved ad hoc speculative ‘dark energy’ - ignores and sweeps under the rug major quantum gravity effects which increase the physical understanding of particle physics, particularly force unification and the relation of gravitation to the existing electroweak SU(2) x U(1) section of the Standard Model of fundamental forces.
Even Einstein grasped the possibility that general relativity (the classical basis of the lambda-CDM model) is at best just a classical approximation to quantum field theory; at the end of his life he wrote to Besso in 1954:
‘I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included ...’
‘Science is the organized skepticism in the reliability of expert opinion.’ - Professor Richard P. Feynman (quoted by Professor Lee Smolin, The Trouble with Physics, Houghton-Mifflin, New York, 2006, p. 307).
‘The expression of dissenting views may not seem like much of a threat to a powerful organization, yet sometimes it triggers an amazingly hostile response. The reason is that a single dissenter can puncture an illusion of unanimity. ... Among those suppressed have been the engineers who tried to point out problems with the Challenger space shuttle that caused it to blow up. More fundamentally, suppression is a denial of the open dialogue and debate that are the foundation of a free society. Even worse than the silencing of dissidents is the chilling effect such practices have on others. For every individual who speaks out, numerous others decide to play it safe and keep quiet. More serious than external censorship is the problem of self-censorship.’
— Professor Brian Martin, University of Wollongong, 'Stamping Out Dissent', Newsweek, 26 April 1993, pp. 49-50
In 1896, Sir James Mackenzie-Davidson asked Wilhelm Röntgen, who discovered X-rays in 1895: ‘What did you think?’ Röntgen replied: ‘I did not think, I investigated.’ The reason? Cathode ray expert J. J. Thomson had seen glass fluorescence far from a tube in 1894, but due to prejudice (expert opinion) he avoided investigating that X-ray evidence!
Mathematical symbols in this blog: your computer’s browser needs access to standard character symbol sets to display Greek symbols for mathematical physics. If you don’t have the symbol character sets installed, the density symbol ρ (rho) will appear as 'r' and the π (pi) symbol will appear as 'p', causing confusion with the use of 'r' for radius and 'p' for momentum in formulae. This problem exists with Mozilla Firefox 3, but not with Microsoft Internet Explorer, which displays the Greek symbols correctly.
Mean yield of the 5,192 nuclear warheads and bombs in the deployed Russian nuclear stockpile as of January 2009: 0.317 Mt. Total yield: 1,646 Mt.
Mean yield of the 4,552 nuclear warheads and bombs in the deployed U.S. nuclear stockpile as of January 2007: 0.257 Mt. Total yield: 1,172 Mt.
For diffraction damage, where damage areas scale as the two-thirds power of explosive yield, this stockpile's area damage potential can be compared to the 20,000,000 conventional bombs of 100 kg size (2 megatons of TNT equivalent total energy) dropped on Germany during World War II: (Total nuclear bomb blast diffraction damaged ground area)/(Total conventional blast diffraction damaged ground area to Germany during World War II) = [4,552*(0.257 Mt)^(2/3)]/[20,000,000*(0.0000001 Mt)^(2/3)] = 1,840/431 = 4.3. Thus, although the entire U.S. stockpile has a TNT energy equivalent to 586 times that of the 2 megatons of conventional bombs dropped on Germany in World War II, it is only capable of causing 4.3 times as much diffraction type damage area, because any given amount of explosive energy is far more efficient when distributed over many small explosions than in a single large explosion! Large explosions are inefficient because they cause unintended collateral damage, wasting energy off the target area and injuring or damaging unintended targets!
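As a rough numerical check, the arithmetic above can be verified in a few lines of Python (a sketch using only the figures quoted in this paragraph; yields in megatons, nothing here is new data):

```python
# Verify the diffraction-damage-area comparison, assuming damage areas
# scale as yield^(2/3). Relative area units cancel in the ratio.
nuclear_area = 4552 * 0.257 ** (2.0 / 3.0)            # U.S. stockpile
conventional_area = 20_000_000 * 1e-7 ** (2.0 / 3.0)  # WWII: 100 kg bombs, 2 Mt total
ratio = nuclear_area / conventional_area
print(round(nuclear_area), round(conventional_area), round(ratio, 1))
```

This reproduces the 1,840/431 = 4.3 figure quoted above: only the two-thirds power of yield matters, which is why many small explosions beat one big one.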
In a controlled sample of 36,500 survivors monitored over a 40-year period, there were 176 leukemia deaths, which is 89 more than the control (unexposed) group got naturally (data: Radiation Research, volume 146, 1996, pages 1-27). There were 4,687 other cancer deaths, but that was merely 339 above the number in the control group, so this is statistically a much smaller relative rise than the leukemia result. Natural leukemia rates, which are very low in any case, were increased by 51% in the irradiated survivors, but other cancers were increased by just 7%. Adding all the cancers together, the total was 4,863 cancer deaths (virtually all natural cancer, nothing whatsoever to do with radiation), which is just 428 more than the unexposed control group. Hence, the total increase over the natural cancer rate due to bomb exposure was only 9%, spread over a period of 40 years. There was no increase whatsoever in genetic malformations.
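For transparency, the quoted percentages can be reconstructed from the stated death counts; note that they are excess deaths expressed as a fraction of the total observed deaths in the exposed group (this is a reconstruction of the arithmetic from the text, not data taken from the cited paper):

```python
# Reproduce the 51%, 7% and 9% figures from the counts stated above.
# Percentages computed as excess deaths / total observed deaths.
leukemia_total, leukemia_excess = 176, 89
other_total, other_excess = 4687, 339
all_total = leukemia_total + other_total        # 4,863 cancer deaths
all_excess = leukemia_excess + other_excess     # 428 excess deaths
print(round(100 * leukemia_excess / leukemia_total))  # leukemia
print(round(100 * other_excess / other_total))        # other cancers
print(round(100 * all_excess / all_total))            # all cancers combined
```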
‘If defense is neglected these weapons of attack become effective. They become available and desirable in the eyes of an imperialist dictator, even if his means are limited. Weapons of mass destruction could become equalizers between nations big and small, highly developed and primitive, if defense is neglected. If defense is developed and if it is made available for general prevention of war, weapons of aggression will become less desirable. Thus defense makes war itself less probable. ... One psychological defense mechanism against danger is to forget about it. This attitude is as common as it is disastrous. It may turn a limited danger into a fatal difficulty.’
Advice of Robert Watson-Watt (Chief Scientist on the World War II British Radar Project, defending Britain against enemy attacks): ‘Give them the third best to go on with, the second best comes too late, the best never comes.’
All of this data should have been published to inform public debate on the basis for credible nuclear deterrence of war and civil defense, PREVENTING MILLIONS OF DEATHS SINCE WWII, instead of deliberately allowing enemy anti-nuclear and anti-civil-defence lying propaganda from Russian-supporting evil fascists to fill the public data vacuum. That vacuum killed millions, by allowing civil defence and war deterrence to be dismissed by ignorant "politicians" in the West, so that wars triggered by invasions with mass civilian casualties continue today for no purpose other than to promote terrorist agendas of hate, evil arrogance and lying for war, falsely labelled "arms control and disarmament for peace": "Controlling escalation is really an exercise in deterrence, which means providing effective disincentives to unwanted enemy actions. Contrary to widely endorsed opinion, the use or threat of nuclear weapons in tactical operations seems at least as likely to check [as Hiroshima and Nagasaki] as to promote the expansion of hostilities [providing we're not in a situation of Russian biased arms control and disarmament whereby we've no tactical weapons while the enemy has over 2000 neutron bombs thanks to "peace" propaganda from Russian thugs]." - Bernard Brodie, p. vi of Escalation and the Nuclear Option, RAND Corp memo RM-5444-PR, June 1965.
Update (19 January 2024): Jane Corbin of BBC TV is continuing to publish ill-informed nuclear weapons capabilities nonsense debunked here since 2006 (a summary of some key evidence is linked here), e.g. her 9pm 18 Jan 2024 CND-biased propaganda showpiece Nuclear Armageddon: How Close Are We? https://www.bbc.co.uk/iplayer/episode/m001vgq5/nuclear-armageddon-how-close-are-we which claims - from the standpoint of 1980s Greenham Common anti-American CND propaganda - that the world would be safer without nuclear weapons, despite the 1914-18 and 1939-45 trifles that she doesn't even bother to mention, which were only ended with nuclear deterrence. Moreover, she doesn't mention the BBC's Feb 1927 WMD-exaggerating broadcast by Noel-Baker, who used the false claim that there is no defence against mass destruction by gas bombs to argue for UK disarmament, something that later won him a Nobel Peace Prize and helped ensure the UK had no deterrent against the Nazis until too late, thus setting off WWII. (Nobel Peace Prizes were also awarded to others for lying, for instance Norman Angell, whose pre-WWI book The Great Illusion helped ensure Britain's 1914 Liberal party Cabinet procrastinated on deciding what to do if Belgium was invaded, and thus failed to deter the Kaiser from triggering the First World War!) The whole basis of her show was to edit out any realism whatsoever regarding the topic which is the title of her programme! No surprise there, then. Los Alamos, Livermore and Sandia are currently designing the W93 nuclear warhead for SLBMs to replace the older W76 and W88, and what she should do next time is to address the key issue of what that design should be to deter dictators without risking escalation via collateral damage: "To enhance the flexibility and responsiveness of our nuclear forces as directed in the 2018 NPR, we will pursue two supplemental capabilities to existing U.S. 
nuclear forces: a low-yield SLBM warhead (W76-2) capability and a modern nuclear sea launched cruise missile (SLCM-N) to address regional deterrence challenges that have resulted from increasing Russian and Chinese nuclear capabilities. These supplemental capabilities are necessary to correct any misperception an adversary can escalate their way to victory, and ensure our ability to provide a strategic deterrent. Russia’s increased reliance on non-treaty accountable strategic and theater nuclear weapons and evolving doctrine of limited first-use in a regional conflict, give evidence of the increased possibility of Russia’s employment of nuclear weapons. ... The NNSA took efforts in 2019 to address a gap identified in the 2018 NPR by converting a small number of W76-1s into the W76-2 low-yield variant. ... In 2019, our weapon modernization programs saw a setback when reliability issues emerged with commercial off-the-shelf non-nuclear components intended for the W88 Alteration 370 program and the B61-12 LEP. ... Finally, another just-in-time program is the W80-4 LEP, which remains in synchronized development with the LRSO delivery system. ... The Nuclear Weapons Council has established a requirement for the W93 ... If deterrence fails, our combat-ready force is prepared now to deliver a decisive response anywhere on the globe ..." - Testimony of Commander Charles Richard, US Strategic Command, to the Senate Committee on Armed Services, 13 Feb 2020. This issue of how to use nuclear weapons safely to deter major provocations that escalate to horrific wars is surely the key issue humanity should be concerned with, not the CND time-machine of returning to a non-nuclear 1914 or 1939! Corbin doesn't address it; she uses debunked old propaganda tactics to avoid the real issues and the key facts.
For example, Corbin quotes only half a sentence by Kennedy in his TV speech of 22 October 1962: "it shall be the policy of this nation to regard any nuclear missile launched from Cuba against any nation in the Western hemisphere as an attack by the Soviet Union on the United States", and omits the second half of the sentence, which concludes: "requiring a full retaliatory response upon the Soviet Union." Kennedy was clearly using US nuclear superiority in 1962 to deter Khrushchev from allowing the Castro regime to start any nuclear war with America! By chopping up Kennedy's sentence, Corbin juggles the true facts of history to meet the CND agenda of "disarm or be annihilated." Another trick is her decision to uncritically interview CND-biased anti-civil defense fanatics like the man (Professor Freedman) who got Bill Massey of the Sunday Express to water down my article debunking pro-war CND-type "anti-nuclear" propaganda lies on civil defense in 1995! Massey reported to me that Freedman claimed civil defense is no use against an H-bomb, because extra attacking warheads are cheaper than even dirt-cheap shelters: exactly what Freedman wrote in his deceptive letter published in the 26 March 1980 Times newspaper: "for far less expenditure the enemy could make a mockery of all this by increasing the number of attacking weapons", which completely ignores the Russian dual-use concept of simply adding blast doors to metro tubes and underground car parks, etc. In any case, civil defense makes deterrence credible, as even the most hard-left wingers like Duncan Campbell acknowledged on page 5 of War Plan UK (Paladin Books, London, 1983): "Civil defence ... is a means, if need be, of putting that deterrence policy, for those who believe in it, into practical effect."
33 Comments:
http://en.wikipedia.org/wiki/Talk:Effects_of_nuclear_explosions
Dr William G. Penney used "kT" in his article on the nuclear explosive yields at Hiroshima and Nagasaki, Proc. Roy. Soc. London, 1970. Penney's paper is cited in Glasstone & Dolan (ENW 1977), although they only use it for the source of the yields of Hiroshima and Nagasaki. Penney had issues with the 1962/4 edition of Glasstone, and these are ignored. The British manual "Nuclear Weapons" (H.M. Stationery Office, 1974) uses "KT", but most sources use "kt".
Incidentally, Penney reproduces British nuclear test data and disputes the blast wave height-of-burst curves. Penney found that the 'peaking' effect in the Mach region for air bursts is due to the heating of the air just above the ground by the heat flash, and almost disappears if you measure the blast with sensors on poles 3 m high. Penney's data also discredit Glasstone's dismissal of the role of blast damage in reducing the blast pressure: accurate data on the crushing of empty petrol cans at Hiroshima showed that the overpressure decreased due to the damage done to wooden houses. (You can't cause mass destruction without using up a lot of energy, which causes an irreversible loss of blast pressure with distance.) In a megaton detonation over a brick- or concrete-built city, the loss of energy would reduce pressure ranges dramatically as the blast diverges outwards. All the American data comes from tests in unobstructed deserts or Pacific atolls.
I discussed this by email with Dr Hal Brode, who did the original RAND Corp computer calculations of blast waves. His first response was the standard idea that the blast doesn't necessarily lose energy by doing work (causing destruction), since the debris will pick up some of the energy and carry it outward as flying bricks, panels and glass. However it is clear that the blast loses energy by the work done in breaking walls, which is irreversibly lost in warming up the rubble. If each house destroyed takes 1 % of the blast energy, then the energy after destroying 200 houses on a radial line outward from the explosion is down to just 100(0.99^200) = 13 % of what it would be over desert. This is valid for wood-frame houses. Brick and concrete buildings absorb far more energy per building destroyed, so in a modern city the blast pressure would fall very rapidly indeed. This is non-scalable, so it is most pronounced at high yields with large destruction radii computed for open terrain. Brode did concede, when presented with Penney's data, that this effect is not taken into account in American blast calculations at present. See http://glasstone.blogspot.com for further data. - Nigel Cook (edit by User:217.137.87.10)
The blast energy which diffracts back in is the incident blast energy minus the energy lost in causing destruction. The blast wave is always diverging, which is one of the reasons for the fall in overpressure with distance. Any sideways (non-radial) flow of energy to fill in areas where houses have been destroyed reduces the energy somewhere else. You can't get something for nothing. If you have read the declassified 1,317-page book "Capabilities of Nuclear Weapons" by Philip J. Dolan of SRI, report DNA-EM-1 (the Defense Nuclear Agency's Effects Manual Number 1), you will see that this applies to forests. The blast diffracts around the tree trunks and fills in again afterwards. This was observed in forest stands at various tests, where the blast overpressure was measured on each side and found to be similar.
The blast wave cannot cause destruction without using energy, and this use of energy depletes the blast wave. The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12 kt. But at 1,396 m, data from the crushing of a blueprint container indicated that the peak overpressure was down by 30%, due to damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less than it would have been from an explosion over an open site.’
A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22-kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1 % of the blast energy, then after an average of 200 houses in any radial line from ground zero outwards are destroyed, 87 % of the blast energy will have been lost in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air bursts were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8.01 am. The heat flash can’t set wood alight directly, as proved in Nevada tests: it just scorches wood unless it is painted white. You need to have intermediaries like paper litter and trash in a line-of-sight from the fireball before you can get direct ignition, as proved by the clarity of ‘shadowing’ remaining afterwards (such as scorch protection of tarmac and dark paint by people who were flash burned). In general, each building will absorb a constant amount of energy from the blast wave (ranging from about 1 % for wood frame houses to about 5 % for brick or masonry buildings) despite varying overpressure, because more work is done on the building in causing destruction at higher pressures. At low pressures, the building just vibrates slightly. So the percentage of the blast energy incident on the building which is absorbed irreversibly in heating up the building is approximately constant, regardless of peak pressure. 
Hence, the energy loss in a city of uniform housing density is exponential with distance, and does not scale with weapon yield. Therefore, the reduction in damage distances is most pronounced at high yields.
-Nigel Cook 26 Dec 05
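The depletion argument in the comment above can be sketched in a few lines of Python (a minimal model, assuming a fixed fractional energy loss per building destroyed on a radial line, as stated in the text):

```python
# Exponential blast-energy depletion: each building destroyed on a radial
# line absorbs a roughly constant fraction of the incident blast energy
# (about 1% for wood-frame houses, up to about 5% for brick/concrete).
def residual_energy_fraction(buildings_destroyed, loss_per_building=0.01):
    """Blast energy remaining, over and above the normal fall from divergence."""
    return (1.0 - loss_per_building) ** buildings_destroyed

print(residual_energy_fraction(200))        # wood-frame city: ~13% remains
print(residual_energy_fraction(200, 0.05))  # brick/concrete: almost nothing
```

Because the loss depends on the number of buildings traversed rather than on yield, the effect does not scale, and it bites hardest at high yields whose nominal open-terrain damage radii span the most buildings.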
The easiest way to deal with it is by energy use by the blast. The work energy used in pushing a wall distance x with force F is E = xF. Blast waves do diffract, but this doesn't violate conservation of energy. The problem with Glasstone and Dolan 1957-77 is that the book tries to dismiss the differences between a concrete city and a desert, without evidence. It is a cut down version of DNA-EM-1 which does contain sources. When you recognise that it was only in 1986 that they realised that gravity limits crater sizes [1] in the megaton range to 1/4 power scaling (instead of 0.3 power scaling), you get an idea of the bureaucracy of the U.S. Government nuclear effects calculation business. The secrecy prevents a wide range of critical assessment, so fundamental new ideas are ignored, and errors can persist for decades. 172.189.174.108 14:49, 13 January 2006 (UTC)
"The energy loss per square metre of diverging blast front is small for each building, 1% loss for destroying a wood frame house. So the blast reduction is only important for cities, not for isolated buildings on a desert.The American manuals neglect the fact that energy used is lost from the blast. Visiting Hiroshima and Nagasaki, Penney recorded accurate measurements of damage effects on large objects that had been simply crushed or bent by the blast overpressure or by the blast wind pressure, respectively. At Hiroshima, a collapsed oil drum at 198 m and bent I-beams at 396 m from ground zero both implied a yield of 12-kt. But at 1,396 m data from the crushing of a blue print container indicated that the peak overpressure was down by 30%, due to damage caused, as compared to desert test data. At 1,737 m, damage to empty petrol cans showed a reduction in peak overpressure to 50%: ‘clear evidence that the blast was less that it would have been from an explosion over an open site.’
"A similar pattern emerged at Nagasaki, with close-in effects indicating a yield of 22-kt and a 50% reduction in peak overpressure at 1,951 m as shown by empty petrol can damage: ‘clear evidence of reduction of blast by the damage caused…’ If each house destroyed in a radial line uses 1 % of the blast energy, then after 200 houses are destroyed, the blast will be down to just 0.99^200 = 0.13 of what it was before, so 87 % of the blast energy will have been lost in addition to the normal fall in blast pressure due to divergence in an unobstructed desert or Pacific ocean test. You can’t ‘have your cake and eat it’: either you get vast blast areas affected with no damage, or you get the energy being used to cause damage over a relatively limited area. The major effects at Hiroshima in the horizontal blast (Mach wave) zone from the air bursts were fires set off when the blast overturned paper screens, bamboo furniture, and such like on to charcoal cooking braziers being used in thousands of wooden houses to cook breakfast at 8.01 am." - http://glasstone.blogspot.com172.201.72.197 13:34, 30 January 2006 (UTC)
Simpler discussion of the theoretical basis for the E^0.25 scaling law for crater dimensions at large yields:
The standard unclassified work on the effects of nuclear weapons is Glasstone and Dolan, The Effects of Nuclear Weapons, U.S. Department of Defense, 1977. That book states that crater radii for nuclear bombs burst at ground level in the same type of soil, say Nevada sand, are proportional to E^0.3, where E is the energy release in the explosion.
The 0.3 is an empirical factor, not based on theory. Unfortunately, it's wrong, as was discovered and published in a semi-secret paper in 1987 by the U.S. Department of Defense (a correction which was never incorporated into the published Glasstone and Dolan book). It turns out that all the data used for the E^0.3 scaling law comes from Nevada tests of 1-100 kilotons, and has a fair amount of scatter.
Physical theory shows that for big yields, enormous amounts of soil are lofted from inside the crater up to the rim and ejecta on the surrounding terrain, and the energy required to lift the stuff is E = mgh, where m is mass, g is the acceleration due to gravity and h is the average height the material is raised (about half the depth of the crater). The crater mass m equals the soil density times the crater volume, which is proportional to the cube of the crater radius in surface bursts. Since the depth to radius ratio is approximately constant, the lift height h is proportional to the radius, so the energy used in cratering is E = mgh = (aR^3)g(bR) = abgR^4, where a and b are constants. This tells you that for big craters (where work done against gravity is the overriding use of energy in cratering), E is proportional to R^4, so the crater radius R is proportional to E^(1/4) or E^0.25.
So it turns out that theory shows that at large yields, crater sizes are proportional to E^0.25, not E^0.3.
The theory is predictive, because if you know the fraction of bomb energy absorbed in the ground, you can predict the crater size accurately from the physical theory: you know how much energy is used to eject mass from the ground and that, together with the density of sand, the crater shape and the acceleration due to gravity, enable you to predict theoretically the crater size. (The fraction of energy used in cratering is deduced by the fact that in a surface burst the effective blast energy yield of the bomb is found to be 1.6 times that of a free air burst of the same total energy release, rather than twice that of a free air burst as you'd expect if the ground was a perfect reflector with the pressures from the downward shock hemisphere being reflected up and merging to form a single powerful blast hemisphere in the air; the lost energy is that which digs the hole in the ground and causes ground shock. Ideally, you should also include an analysis of how much thermal energy is converted into cratering, by subtracting the thermal yield of a surface burst, typically 15-20%, from the thermal yield of a free air burst, 35-40%, and allowing for the proportion of the thermal energy used to melt soil into spherical fallout particles of fused silica or whatever. Dr Carl F. Miller calculated in his 1963 Stanford Research Institute report, “Fallout and Radiological Countermeasures” volume 1, that the portion of bomb energy used to fuse sand into glassy fallout spheres in a Nevada surface burst ranges from 7.5% for a 1 kt bomb to 9.2% for a 100 Mt bomb.) You can then check the theoretical predictions against the 1-100 kt Nevada craters from 1950s nuclear tests.
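As a sanity check on the scaling argument, the two laws can be compared numerically (the 30 m reference radius for a 1 kt surface burst used here is purely illustrative, not a measured value):

```python
# Compare the empirical E^0.3 rule with the gravity-limited E^0.25 law.
# The 1 kt reference radius of 30 m is an assumed, illustrative figure.
def crater_radius_m(yield_kt, exponent, r_1kt=30.0):
    return r_1kt * yield_kt ** exponent

for y_kt in (1, 100, 10_000):  # 1 kt, 100 kt, 10 Mt
    print(y_kt, round(crater_radius_m(y_kt, 0.3), 1),
          round(crater_radius_m(y_kt, 0.25), 1))
```

At 10 Mt the old E^0.3 law overpredicts the radius by nearly 60%, which is why the correction matters mainly in the megaton range, while the two laws differ far less over the 1-100 kt Nevada test range where the data were taken.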
Just a small warning: some of the material and formulae in this post may contain errors, since it was taken from a draft journal manuscript and I don't know whether the units were consistently converted from pressure in psi to kPa and from feet to metres, calories/kt to J/kt or whatever.
Readers should check formulae for typing errors in any case, for instance by comparing to the blast pressure curves from nuclear weapon tests.
I will produce a revised blog post, or possibly a page uploaded to the domain http://quantumfieldtheory.org, to quantitatively analyse all nuclear effects data. (This blogger system is terrible to use for equations since you need to type the mark-ups for superscript and Greek symbols manually using html.)
In the meantime, two updates of vital historical importance:
(1) Quotation from:
Harold L. Brode and R. L. Bjork, "Cratering from a megaton surface burst", RAND Corporation, Santa Monica, California, report RM-2600, 1960:
"Calculations on the cratering and ground motion in a rock medium due to a two-megaton surface burst. The theoretical approach assumes a two-dimensional hydrodynamic model, and it is used to determine the motions involved in the cratering from a large-yield surface burst. Thetechnique is found to work well and to check with experimental observations. It is shown that the primary cause of cratering for such an explosion is not "airslap," as previously suggested, but rather the direct action of the energitic bomb vapors. High-yield surface bursts are therefore less effective in cratering by that portion of the energy that escapes as radiation in the earliest phases of the explosion. The cratering action and ground shock from large-yield explosions is of primary importance to problems of hardening military installations as well as to the peaceful use of nuclear explosions."
(2) Harold L. Brode's excellent 53-page paper, "Fireball Phenomenology" (RAND Corporation, paper P-3026, 1964) is now available to download freely from the RAND Corporation as a 1.2 MB PDF document:
http://www.rand.org/pubs/papers/2006/P3026.pdf
Some of the charts from this report were included in Dr Brode's article, "Review of Nuclear Weapons Effects", published in the 1968 Annual Review of Nuclear Science, volume 18, pages 153-202.
However, this report includes more detail specifically on fireball scaling laws derived from detailed numerical simulations of fireballs at various altitudes and for yields of 1.7 kt to 4 Mt. It also provides extra charts and illustrations.
More detailed data on blast wave pressure decay rates and related details for free air bursts are available in the report
http://www.rand.org/pubs/research_memoranda/2005/RM1363.pdf
Hello Nige,
How did you get the 1-5% energy absorption figure for each house being destroyed? Did you use data provided by Penney, or were the calculations made by you?
Your blog is pure quality, thank you for creating it.
Arvinder
Hello Arvinder,
Thanks, however this blog has many limitations and has been put together too quickly in odd spare moments. I'm going to try to build something much better when time permits, systematically going through all the effects of nuclear weapons, reviewing the details and compiling the best information. I've got a large amount of information beyond what is on this blog (which is mainly concerned with the more "controversial" - actually factually-proved-but-politically-inexpedient - aspects of the many problems).
The 1-5% figure is the range I computed from detailed analysis of blast effects on houses, and it is substantiated by Penney's research.
For typical Japanese wood-frame houses, which were the predominant building type in Hiroshima and Nagasaki prior to the nuclear attacks, the fall in overpressure is about 1% per house on a radial line. Since the distribution of the houses is known from aerial photographs taken by the 509th prior to the attacks, the data in Penney's report, which gives the accurately measured blast overpressure at various distances from the distortion of overpressure-sensitive targets like petrol cans, blueprint containers, etc., can be compared to the peak overpressure for ideal blast waves over unobstructed desert terrain, from nuclear tests.
The percentage of the blast energy absorbed per house encountered on any radial line from the bomb is also computable using the structural displacement due to the blast wave. Glasstone and Dolan provide a simple way of analysing the net pressure acting on a building as the blast wave diffracts around it.
Basically, the overpressure only produces a net force on the building as a whole during the time taken for the shock front to travel the length of the building. Since the shock front is moving at supersonic velocity, this "diffraction loading" force acts for typically 0.1 second for a building 75 feet long. After that time, the overpressure equalises on all sides, and the building is simply crushed rather than pushed over.
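As an order-of-magnitude illustration of this transit-time argument (the shock-front speed used here is an assumed value, just above the ambient sound speed and so appropriate to low overpressures; it is not a figure from the text):

```python
# Diffraction loading lasts roughly the time the shock front takes to cross
# the building; the ~0.1 s quoted above corresponds to a couple of such
# transit times for a 75 ft building.
building_length_ft = 75.0
shock_front_speed_fps = 1150.0  # assumed: slightly supersonic front
transit_time_s = building_length_ft / shock_front_speed_fps
print(round(transit_time_s, 3))
```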
Another effect is the wind drag loading, which continues for the entire duration of the positive phase of the dynamic pressure. This is of course very important for long-duration blast waves, or when the air is filled with hot dust (giving a sandstorm effect) as occurs if there is a precursored blast wave.
By calculating the overall force loading and the response of a building to that loading, the energy absorbed by the building from the blast is easily computed.
The basic law is that the work energy E done by a force F in causing a motion along distance X in the direction of the force (i.e. the radial direction) is:
E = FX
Dr Harold Brode (formerly of RAND Corp., R&D Associates, etc.) made an argument to me by email that the energy which is absorbed from the blast wave in the act of causing damage is not really lost, because it just gets converted into the directed kinetic energy of debris from the building, and the debris proceeds to move downrange.
This argument of his is flawed in a major way, because the velocity of the debris is much less than that of the shock wave, and in any case the debris from a destroyed building gets decelerated as it bounces along the ground.
In addition, buildings are going to be shaken and thus absorb energy from the shock front even at pressures far lower than those which will destroy a building.
But one advantage of Dr Brode's comment is that you can look at it as a simple way to calculate the energy depletion: the kinetic energy which is gained by the debris of a house is the minimum amount of blast energy which is lost through the work done in destroying the house.
Obviously, when a house gets destroyed not all the energy lost goes into the debris. A lot is used to do mechanical work in bending and snapping beams, joints, bricks, cement, etc., which ends up getting degraded into thermal energy without anything gaining a significant outward velocity. But there are quite a lot of studies of how fast debris moves on average for given pressures of blast wave.
One very simple example is study of human dummies exposed to a blast wave. When the dummies are accelerated and thrown downrange by the blast wave, they deplete some energy from the blast wave, which is turned into the kinetic energy of the dummy:
‘We were fortunate enough at a 5 psi station in one of the 1957 shots in Nevada to photograph the time-displacement history of a 160-pound [standing] dummy, and we were able from analysis of the movies to determine the maximal velocity reached ... about 21 feet per second. This velocity developed in 0.5 second. The total displacement of the dummy was near 22 feet ... It was this piece of empirical information that helped greatly in getting an analytical “handle” on the “treatment” of man as missile.’
– Dr Clayton S. White, who worked on nuclear weapon blast effects at Nevada test series’ Upshot-Knothole (1953), Teapot (1955) and Plumbbob (1957), Testimony to the U.S. Congressional Hearings, 22-26 June 1959, Biological and Environmental Effects of Nuclear War, U.S. Government Printing Office, 1959, pp. 364-5.
In this example, a 72.5 kg dummy exposed to a blast wave with a peak overpressure of 5 psi was accelerated to a peak velocity of 6.4 m/s. The energy lost from the blast wave by this one human being was:
E = (1/2)mv^2 = 1,500 joules.
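The numbers from White's dummy experiment check out directly:

```python
mass_kg = 72.5      # 160 lb standing dummy
velocity_ms = 6.4   # 21 ft/s peak velocity at the 5 psi station
energy_j = 0.5 * mass_kg * velocity_ms**2
# about 1,485 J, i.e. roughly 1,500 J removed from the blast wave per person
```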
Notice that the person (representative of a large missile) doesn't fly downwind at supersonic velocity, but thuds to the ground after a displacement of 6.7 metres. The kinetic energy then gets converted into mechanical energy in damaging the dummy, instead of getting converted back into blast energy. Similarly when the roof or wall of a building gets blasted off, it thuds to the ground some distance downrange, and the impact causes it to break up. The energy isn't magically returned to the blast wave.
When a lot of big buildings get smashed up by the blast, substantial amounts of energy are lost.
The calculations I did gave a range of 1% loss per wood-frame house along a radial line from the bomb, to 5% loss per brick or masonry building. The 1% wood-frame building figure is empirically justified by the data from Hiroshima and Nagasaki.
The result is that blast damage ranges in cities are far smaller than predicted by cube-root scaling based on unobstructed desert and ocean pressure data, particularly for higher yield weapons where the predicted damage distances are great (covering large residential areas).
I have a detailed study of this problem, with a breakdown of figures for different types of housing and also an analysis of how the energy loss varies as a function of incident overpressure (this varies for different types of buildings, but it's not a bad approximation to treat the percentage loss as a constant regardless of incident overpressure).
The person at fault here is Samuel Glasstone himself, it seems. He edited out several vital passages from the September 1950 "Effects of Atomic Weapons" (of which he was executive editor, on an editorial board chaired by Joseph O. Hirschfelder, with David B. Parker, Arnold Kramish and Ralph Carlisle Smith), which stated on page 56 (in a section based on work done by John von Neumann and Frederick Reines of Los Alamos):
[Paragraph 3.20] "... As to the detailed description of the target, not only are the structures of odd shape, but they have the additional complicating property of not being rigid. This means that they do not merely deflect the shock wave, but they also absorb energy from it at each reflection.
[Paragraph 3.21] "The removal of energy from the blast in this manner decreases the shock pressure at any given distance from the point of detonation to a value somewhat below that which it would have in the absence of dissipative objects, such as buildings. The presence of such dissipation or diffraction makes it necessary to consider somewhat higher values of the pressure than would be required to produce a desired effect if there were only one structure set by itself on a rigid plane."
Glasstone apparently edited out that section from further versions of the book (such as the 1957 renamed "Effects of Nuclear Weapons") because it contradicted the oversimplified statement on page 137 of the 1950 "Effects of Atomic Weapons", which vaguely claimed that:
"The general experience in Japan provides support for the view ... that the effect of one building in shielding another from blast damage due to an atomic bomb would be small."
Yes, it's about 1% for Japan, but that's missing the whole point!
After the blast covers a radial line through 100 buildings, the cumulative 1% losses amount to a very big loss: (1 - 0.01)^100 = 0.366. Hence the peak overpressure is down by a factor of 2.7 after the blast wave has knocked down 100 wooden houses in a straight line.
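The cumulative-loss arithmetic can be sketched as:

```python
loss_per_house = 0.01   # ~1% overpressure loss per wood-frame house on a radial line
n_houses = 100

surviving_fraction = (1 - loss_per_house) ** n_houses  # about 0.366
attenuation_factor = 1 / surviving_fraction            # about 2.7-fold reduction
```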
By just comparing one house with its neighbour, of course you don't see any difference because the difference is only 1%.
Glasstone probably oversimplified it in later editions because he simply didn't think it through and realise that the effect of summing a lot of small percentage energy absorptions is cumulative, and adds up to a substantial reduction in overpressure at great distances in a built-up area.
By focussing on the tiny difference between one building and the next, nothing was observed because the 1% depletion was statistically undetectable in the somewhat chaotic damage effects.
It wasn't until Penney's analysis in 1970, two decades later, that evidence emerged that cumulative depletion of blast energy along a radial line from ground zero made substantial reductions in overpressure and damage at great distances, compared to those predicted from 1950s test data based on unobstructed terrain in deserts and over oceans.
Kind regards,
Nigel
Thank you for answering my question Nigel, much appreciated. Detailed enough for me. I look forward to more great analysis from you in the future.
Here is a great blog with daily news on the exploding world economy, which might interest you
http://theautomaticearth.blogspot.com/
and another great blog on energy
http://www.theoildrum.com/
Wishing you the best,
Arvinder
After re-reading this post on 31 May 2008, I want to emphasise that the net outward force effect from air blast is the DYNAMIC PRESSURE of the blast wave (which is a vector because it is directional - blowing radially with zero non-radial pressure) multiplied by the spherical surface area of the blast wave.
The normal overpressure is better called the "non-directional overpressure" or non-dynamic pressure. It is a pressure which acts in all directions (basically like a change in air pressure).
What we are concerned with when calculating the net outward force of a blast wave is the wind or dynamic pressure, which blows in the radial direction.
Consider two 35 Mt bursts (the planned warhead for Titan II) on Moscow: one air burst (to maximize the 15 psi overpressure area) and one ground burst. How much damage would there be from blast and fire?
Surely the Titan II warhead was the roughly 9 Mt bomb tested as 8.9 Mt Hardtack-Oak in 1958?
I don't see how you could have put an extremely heavy 35 Mt warhead on a Titan II missile without exceeding the payload? The missile would have had to be considerably larger to take a warhead of 20 tons or more, and it was already the size of a small space rocket!
Regarding the effects of blast and heat: in the open, the 50% lethal range at Hiroshima was 1.3 miles, compared to 0.12 mile on the ground floors of modern concrete buildings.
Scaling up this data to 35 megatons by the cube-root law (for diffraction damage and blast-induced fires) gives a (35,000/15)^{1/3} = 13-fold increase: to 1.6 miles for 50% mortality in concrete buildings, and 17 miles for people outdoors or in flimsy inflammable Hiroshima wooden houses full of bamboo furnishings, paper screens and easily-overturned charcoal braziers, which were cooking breakfast at 8:15 am when Hiroshima was bombed.
The 17 mile range would probably be reduced substantially by the cumulative energy loss of the blast in destroying successive wooden houses, but the 1.6 mile figure for people in concrete buildings is more relevant for a 35 Mt air burst over modern city buildings. The 50% lethal range for a ground surface burst would be less than 1.6 miles.
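A sketch of that cube-root scaling arithmetic (taking Hiroshima as 15 kt; note the open-terrain figure works out to about 17 miles):

```python
yield_ratio = 35000.0 / 15.0        # 35 Mt versus ~15 kt Hiroshima
scale = yield_ratio ** (1.0 / 3.0)  # cube-root blast scaling, about 13.3-fold

concrete_miles = 0.12 * scale       # ~1.6 miles, ground floors of concrete buildings
open_miles = 1.3 * scale            # ~17 miles, in the open or wooden houses
```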
This comment system is terrible: my comment far exceeded the length limit, so I have divided it into several parts.
Part 1.
Thank you. But the 9-megaton yield for the Mk-53 (not B-53 or W-53) was based on Hansen's book; he assumed that the Oak device was tested at full yield, but I think it was actually tested at half yield. Space rockets were actually considered as ICBMs. Hansen gives false yields for the Mk-21: 4 Mt, but this warhead must have had 14-15 Mt, because the Mk-36 had 19 megatons. The Mk-21 was tested in a clean configuration at 4.5 Mt, but that was only 1/3 of full yield, and the clean version of the Mk-36 had a yield of 6 Mt (this version was actually built, by conversion, and stockpiled in very small numbers, but never deployed). Given that, 4.5/6 × 19 = 14.25 Mt. The Mk-36 was an improved version of the Mk-21, built to the military requirement of 20 Mt for cratering runways with a 50% probability of producing 50% damage.
Sources for that data :
Document 2: "Report of NSC Ad Hoc Working Group on the Technical Feasibility of a Cessation of Nuclear Testing," 27 March 1958.
Hans Bethe was chairman. That is the second declassification. It can be found at the National Security Archive, George Washington University Library, Nuclear Vault, in the section "The Making of the Limited Test Ban Treaty":
http://www.gwu.edu/~nsarchive/NSAEBB/NSAEBB94/tb02.pdf
and the Letter from Captain John H. Morse, Special Assistant to the Chairman, Atomic Energy Commission, to Lewis Strauss, Chairman, Atomic Energy Commission, 14 February 1957, Secret. At the same archive, but in the section "It Is Certain There Will be Many Firestorms": New Evidence on the Origins of Overkill, National Security Archive Electronic Briefing Book No. 108.
Part 2.
Hansen also gives false data about the Mk-41. The Mk-41 was not related to the Poplar device; it was the weaponized version of Bassoon Prime, tested as Redwing-Tewa (its potential yield was 25 megatons, 85% fission: UCRL-4725). It did not have a simple tamper around the tertiary stage, but a multi-layer tamper, to maximize neutron capture and the yield-to-weight ratio. The Mk-41 was only a Class B weapon. Given its 10,500 lbs weight, its yield-to-weight ratio was 5.3 kt/kg. Some background information:
"By early 1956 it was possible to fabricate TN weapons smaller than anything conceived two years earlier. AEC laboratories anticipated they could soon achieve a marked decrease in weight and marked increase in yield in four classes of TN weapons. For example, AEC predicted that a new class A weapon would be built that would weigh not 50,000 pounds, as had its predecessor, but 25,000 pounds, and its yield would be 60 Mt rather than the earlier 20 Mt.
"For those who had been startled by the destructive power of the 20 kt bombs in 1945 and 1946, it must have been horrifying even to contemplate the possibility of a 60 Mt weapon. Yet in early 1957 AEC laboratories indicated that such a bomb might be devised in the not distant future. And in March 1958 the USAF Chief of Staff asked for a study of the feasibility of employing weapons with yields of 100 to 1,000 Mt. The Air Staff concluded that it might be feasible but not desirable to use a 1,000-megaton weapon. Since lethal radioactivity might not be contained within the confines of an enemy state, and since it might be impractical even to test such a weapon, the Air Force Council decided in April 1959 to postpone establishing a position on the issue."
Source: "The Air Force and Strategic Deterrence 1951-1960", USAF Historical Division Liaison Office, by George F. Lemmer, 1967. Formerly Restricted Data, declassified. Try finding it at http://alternatewars.com/WWIII/WWW3.htm. There are also some other very nice documents there.
The 60-megaton weapon was the highest yield weapon that could be carried by aircraft; for example, B-70 stores included 1 class A (25,000 pounds), 2 class B (total 20,000 pounds), or 6-8 class D.
100-1,000 Mt weapons were considered as warheads for very large ICBMs. Initially the Titan III family was considered as ICBMs for 100 Mt warheads (for example the Titan IIIM with gelled propellant). Very large boosters such as the Saturn V with storable fuel components (the USAF had plans for a solid Saturn V with the Aerojet AJ-260), Nova, and SLS were also considered as ICBMs.
Thanks!
I bought and read Hansen's "U.S. Nuclear Weapons" (1988), and it is full of errors. He mixes up facts and make-believe.
Some of the errors which annoyed me most were in the data he gives, from a preliminary document, for the percentage of early fallout at the Redwing tests Zuni, Tewa, Flathead and Navajo (although he very usefully gave the correct percentage fission yields for those tests: 15, 87, 73 and 5% respectively). He states that the water surface bursts deposited about 30% of their activity in local fallout, while for the land surface bursts it was 48-50%. These percentages were debunked in the testimony by Dr Kellogg of the RAND Corporation in the June 1957 congressional hearings "The Nature of Radioactive Fallout and Its Effects on Man": they were calculated using an incorrect conversion factor between deposited activity and dose rate. When corrected, the percentage in local fallout is much higher, and more similar for both types of burst.
Another error Hansen made concerns the Teller-Ulam mechanism: he assumes that X-rays heat up plastic foam filling the radiation channel, which then turns to plasma to compress the fusion stage capsule.
Actually, as Glasstone and Dolan's "Effects of Nuclear Weapons" has stated since the 1962 edition, the X-rays coming off the primary stage have a very short mean free path and will be blocked. Filling the duct between outer casing and fusion capsule with plastic foam would prevent the H-bomb from working: it would stop the X-rays and turn that energy into a fireball which would diverge outward instead of being focussed inward upon the fusion fuel capsule. This would fail to cause efficient compression because it would turn the bomb into a "layer cake" that just pushes the fusion fuel away from the fission primary stage, instead of efficiently compressing it. Instead of plastic foam filling the X-ray duct, there is empty space to allow the X-rays to be channeled effectively and ablate the fusion capsule surface, so that by recoil it gets compressed.
After interviews with the Ivy-Mike bomb designers, Richard Rhodes corrected the situation on page 486 of "Dark Sun" (Simon and Schuster, N. Y., 1996):
"The flux of soft X-rays from the primary would flow down the inside walls of the casing several microseconds ahead of the material shock wave from the primary. ... the steel [OUTER] casing would need to be lined with some material that would absorb the [soft X-ray] radiation and ionize to a hot plasma which could radiate X-rays [in a different direction, like a mirror] to implode the secondary."
So what the plastic foam does is act as a mirroring surface to reflect back X-rays going toward the outer casing, instead of losing that energy by having it ablate the outer casing. What you want to do is reflect those X-rays back onto the fusion fuel capsule in the middle of the radiation channel, so they ablate that, not the inside of the outer bomb casing! Rhodes on page 501 of "Dark Sun", quoting Mike designer Harold Agnew:
"I remember seeing the guys hammer the big, thick polyethene plastic pieces inside the casing ... They hammered the plastic into the lead with copper nails."
The plastic foam is just one inch thick and is purely a "radiation mirror" for the X-rays, reflecting as much X-ray energy back onto the fuel capsule as possible. The plastic foam doesn't fill the entire casing; it's just a relatively thin (1 inch) layer fixed to the inside of the outer case. Rhodes, however, was still confused and reverts to Hansen's error on page 492, where he says that the plastic foam "would expand rapidly and deliver the necessary shock [to the fusion fuel capsule]". This is untrue; the physical expansion of plastic foam and its "shocking up" into a shock wave takes far longer and exerts far less pressure than the delivery of X-ray energy.
Plastic foam is vital to make the inside of the outer casing into a "radiation mirror" for X-rays. Instead of ablating a metal surface and wasting the energy by transforming it into mechanical kinetic energy of ablating metal vapor and recoil shock in the outer case, because of its low density (compared to a metal) the plastic foam simply heats up and re-radiates the energy it has absorbed as X-rays. This turns it into an excellent mirror for X-rays, since the incident X-ray energy is mostly re-radiated instead of being turned into mechanical shock wave.
To understand this mechanism in slightly different context, see Glasstone and Dolan, "The Effects of Nuclear Weapons" 3rd ed., 1977:
"Two factors affect the thermal energy radiated ... First ... a shock wave does not form so readily in the less dense air [or any less dense medium!]"
Plastic foam is able to mirror X-rays because it is able to re-radiate X-ray energy efficiently: its low density slows down the rate of shock wave formation, eliminating that mechanism for energy loss, so plastic foam merely heats up and re-radiates the energy as X-rays.
(The plastic foam "mirroring" of X-ray radiation is vital to the Teller-Ulam design as evidenced by the declassified title of their 9 March 1951 joint Los Alamos LAMS-1225 paper: "On Heterocatalytic Detonations. I. Hydrodynamic Lenses and Radiation Mirrors".)
(The "radiation mirrors" concept is the Teller contribution: this is the key to the whole breakthrough; Ulam's hydrodynamic lenses never worked for the shock wave from the fission primary which is too dense and slow to focus. It is absurd that the one key breakthrough, Teller's radiation mirroring, is completely misunderstood by Rhodes and others, because they don't understand that the difference in density between plastic foam and metal reduces shock wave formation and thus makes plastic into a relatively good radiation mirror.)
Hansen also gives false descriptions of the Hiroshima and Nagasaki devices: the projectile in the gun-type Hiroshima device was a hollow cylinder of U235, not the other way around.
Richard Rhodes also seems to be totally ignorant of nuclear weapons effects when he claims on page 509 that after the Mike shot: "Radioactive mud fell out, followed by heavy rain."
This contradicts the thorough fallout collection data for Eniwetok lagoon in the weapon test report by W. R. Heidt, Jr., E. A. Schuert, et al., WT-615, "Nature, Intensity and Distribution of Fallout from MIKE Shot", Project 5.4a, USNRDL, 1953. The fallout from Mike wasn't mud or heavy rain, but fallout particles formed from coral grains.
On the same page, Rhodes falsely claims that the entire crater volume of 80,000,000 tons became global fallout, when in fact only about 1% was fallout and the explosion didn't have enough energy to lift that mass: Dr Alvin C. Graves testified to the 1957 U.S. Congressional Hearings on "The Nature of Radioactive Fallout and Its Effects on Man" part 1, page 71, that approximately "a megaton of energy will lift up a tenth of a megaton of dirt." Hence 10.4 Mt Mike lifted up just ONE million tons of fallout, not 80.
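Graves's rule of thumb makes this a one-line check:

```python
yield_mt = 10.4           # Ivy-Mike total yield, megatons
dirt_tons_per_mt = 0.1e6  # Graves: ~0.1 megaton (100,000 tons) of dirt lifted per Mt
tons_lifted = yield_mt * dirt_tons_per_mt
# about 1.04 million tons lifted, nowhere near the 80 million tons Rhodes claims
```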
Rhodes on page 542 of "Dark Sun" reveals his complete ignorance of chemistry by claiming that the fallout was "calcium precipitated from vaporized coral". Duh! Did Rhodes ever go to elementary chemistry class and see what happens to a piece of calcium exposed to the air for a few seconds? It oxidizes into calcium oxide with the release of energy!
Even if he didn't know that, Rhodes should have studied the facts on the fallout collected from Mike in weapon test report WT-615 or the congressional testimony by Triffet in 1959: Mike never reduced a million tons of coral to calcium metal in the first place. Long before the time the fireball had expanded enough to engulf that much coral, its temperature was just enough to reduce some of the coral to calcium oxide, CaO, which was then slaked by atmospheric moisture during the many minutes or hours of the long fallout to give calcium hydroxide (slaked lime). This is why the fallout was an irritant, and led to confusion in AEC Chairman Lewis Strauss's statement after the Marshallese and Japanese were contaminated by Bravo in March 1954.
The AEC pointed out that skin irritation during fallout was a chemical effect of the lime in the fallout irritating skin and eyes, and stated on 11 March that the Marshallese had no beta radiation burns. The first beta burns appeared on 14 March, two weeks after exposure, as is usual for beta ray burns to skin. It made the 11 March statement look like a false statement or a cover-up.
Thank you for this data.
Part 3.
Amazing, insane, but the 1,000-megaton warhead was not the largest ever considered. In the excellent book "Project Orion: The True Story of the Atomic Spaceship", George Dyson (Freeman Dyson's son) refers to two weapons (I might be confused; they might be the same weapon).
1) Small: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent. Its yield must be approximately 9 gigatons.
"A May 1959 Air Force briefing revealed some possible military uses of the Orion vehicle, including reconnaissance and early warning, electronic countermeasures, anti-ICBM, ICBM, orbital or deep space weapons. Finally, there was The Horrible Weapon: a 1,650-ton continent-buster hanging over the enemy's head as a deterrent." These proposals were for the 4,000-ton vehicle.
2) The 20,000-ton vehicle. Sources: the same book, and atomicrockets.com.
One mission considered was the ability to deliver a warhead so large that it would devastate a country one-third the size of the United States. Given that the territory of the US is roughly 2,000 × 4,000 km, the maximum radius for devastation by thermal radiation from such a weapon must be roughly 1,000 km, and the yield must therefore be 50-60 Gt, using the formula 0.68 × Y^{0.4}.
So that weapon was more powerful than all nuclear weapons ever built. This bomb was named the DOOMSDAY BOMB, and the project DOOMSDAY ORION.
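A sketch of the commenter's formula. The units are an assumption on my part (Y in kilotons, radius in kilometres), chosen because they roughly reproduce the ~1,000 km figure quoted above; the formula itself is the commenter's, not an established reference:

```python
def devastation_radius(yield_kt):
    """Commenter's empirical thermal-devastation formula, 0.68 * Y**0.4.
    Assumed units: Y in kilotons, result in kilometres (unconfirmed)."""
    return 0.68 * yield_kt ** 0.4

r_9gt = devastation_radius(9.0e6)    # ~410 for the 9 Gt "continent-buster"
r_55gt = devastation_radius(55.0e6)  # ~850, roughly the ~1,000 km claimed
```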
That was the Project Orion battleship: "This one will take the form of a space battle. In 1962 President Kennedy was shown a model of the spaceship as a last-ditch effort to keep the project alive. This abominable concept was for a ship capable of wiping out every Russian city with a population over 200,000 from orbit. Sadly the model has now been lost. Descriptions of it say it was equipped with 5 inch guns for defense, Casaba-Howitzer bombs (a directed-energy nuke), and 500 Minuteman-style 25 megaton bombs. Kennedy, like the scientists involved and any sane person, hated the idea. One year later the Limited Test Ban Treaty was signed and the project was canceled".
Given that each 25-megaton bomb (Mk-41) weighed 10,500 lbs (4,750 kg), the missiles must have weighed at least 20 tonnes each.
I am sure that the DOOMSDAY BOMB and the continent-buster were different weapons. A 9-gigaton blast could not devastate a country one-third the size of the US; a minimum 50-60 Gt blast would be needed. All these projects failed to materialise due to political, not technical, factors (especially the disastrous actions of McNamara).
Some more information. The Mk-41 was considered as a missile warhead at least three times:
1) As an alternate warhead for NAVAHO.
2) As a warhead on board the Orion battleship.
3) As the single warhead for the large Pluto (the USAF ramjet design; other proposed armaments were 2 × 32-inch warheads (10 Mt each), 5 × 21-inch warheads (5 Mt each), or 16 × 15-inch warheads (1.5 Mt each)). Source: Proceedings of the Nuclear Propulsion Conference, August 15-17, 1962, Naval Postgraduate School, Monterey, California; AEC Division of Technical Information. Try finding this report on the internet.
PLUTO weighed only 45,000 lbs and was designed for global strike.
Another report covers the small Pluto design: UCRL-ID-125506. The total weapon load was 3,000-4,000 lbs. Optional configurations ranged from a single 32-inch diameter warhead, or a pair of 21-inch diameter warheads, to as many as six 15-inch diameter ejectable weapons. In all cases the total yield must have been of the order of 10 megatons.
Part 4.
So I think the Mk-53 yield was understated. There are two possibilities:
1) The total yield was suppressed, but the total fission fraction was the same as at 18 Mt (56%).
2) The fission fraction was suppressed from 70-85% to 55%.
The total yield must be in the 12-18 Mt range, for two reasons:
1) The Mk-53 was a Class B weapon and must have had the same yield-to-weight ratio as Class A (60 megatons) and Class B (25 megatons). The Mk-36 was the previous generation, and its Y/W ratio could not be applied to the Mk-53, based on those documents.
2) CIA estimates for the warheads on the R-36 (SS-9) were based on the Mk-53 and Mk-41: the light warhead was based on the Mk-53 (RV 8,000 lbs, warhead 6,500 lbs, yield 12-18 Mt); the heavy warhead was based on the Mk-41 (RV weight 13,000 lbs, yield up to 25 megatons).
So I am sure that the Mk-53 had a yield of 12-18 megatons. Given that 6,400 lbs is 2,896 kg, 2,896 × 5.3 kt/kg = 15.3 Mt. The Mk-53 was intended to have the yield of the Mk-21, but at a much smaller weight.
On the 35 Mt warhead: this data is on the DOE site, Office of Declassification, in "Drawing Back the Curtain of Secrecy: Restricted Data Declassification Policy, 1946 to the Present", RDD-1, June 1, 1994, U.S. Department of Energy, Office of Declassification, Section D, Thermonuclear Weapons:
9. "The fact that the yield-to-weight ratios of the new class of weapons would be more than twice that which can now be achieved in the design of very high yield weapons using previously developed concepts." (63-1)
10. "The United States, without further testing, can develop a warhead of 50-60 Mt for B-52 delivery." (63-3)
11. "... some improvement in high yield weapons design could be achieved and that new warheads -- for example a 35 Mt warhead for our Titan II -- based on these improvements could be stockpiled with confidence." (63-3)
Another source for the 35 Mt warhead is McNamara himself: Time, "Atomic Arsenal", 23 August 1963:
McNamara, while admitting that the treaty, by barring atmospheric testing, would prevent the U.S. from developing a 100-megaton bomb, told the Senators that without any testing the U.S. "can develop a warhead with a yield of 50 to 60 megatons for a B-52 delivery," and with underground tests could develop "a 35-megaton warhead for Titan II."
I think that you, Nigel, knew about that weapon.
The initial plan was for deploying 275 Titan IIs, 2,915 Minutemen (including an advanced model with a 5 Mt warhead), and 1,319 Skybolts.
So I think that the 35 Mt warhead was the successor of the Mk-53, and a 55 Mt warhead the successor of the Mk-41. These weapons had Y/W ratios of around 11 kt/kg. There was also consideration of an Advanced Titan II with payload increased to 6 t, to carry the 55 Mt warhead. See: Desmond Ball, "Politics and Force Levels", 1980, and references therein. On the Advanced Titan, see the Hickey Study, referenced in that book. On the 275 Titan IIs and 2,915 Minutemen: "Package Plans for Strategic Retaliatory Forces", 3 July 1961.
Imagine if McNamara had never been Secretary of Defense and those plans had been executed.
The USAF also studied a thrust-increased Titan II, capable of carrying a 100 Mt (40,000 lbs) warhead.
I would be very happy to have your comments on the DOOMSDAY BOMB, because my calculation is based on Glasstone's book. I think that the 35 Mt warhead used U-235 around the fusion fuel.
Insane is a good word to describe the 1650-ton, 9 gigaton doomsday machine put in orbit over an enemy using Project Orion's nuclear explosion powered spaceship.
Once you get to gigaton yields, the fire hazard can become serious. Wood won't ignite directly for yields below about 100 Mt, because the flash is so brief that it just ablates the outer 0.1 mm of surface into a shielding cloud of smoke, which prevents fire, as seen in nuclear tests. So you need litter like dry leaves or newspaper to ignite as tinder; that then has to ignite some kind of kindling like cardboard or twigs; and then the kindling, if below wood, will start a fire. Normally this chain is broken and you don't get fires, except as at Hiroshima from blast-overturned charcoal cooking braziers amid paper screens and bamboo furnishings in wooden houses, or from WWII air raid "blackout curtains" (dark coloured curtains which absorb far more thermal energy than modern light coloured curtains).
But for gigaton bombs (thousands of megatons), the rate of thermal energy release is too low over large areas to ablate wood; instead the wood is slowly heated and may reach ignition temperature without the need for tinder and kindling in convenient proximity.
In this case, you do get widespread fire hazards, which is what probably caused a climate catastrophe which killed the large cold-blooded dinosaurs (but not smaller cold blooded relatives like tortoises, etc.).
I think that really is an insane kind of weapon, which is why Herman Kahn used such devices as examples of "doomsday" devices in On Thermonuclear War, where the theory of deterrence is applied too far (if something then goes wrong, you are in a real pickle, and make no mistake).
In my latest and possibly last blog post, I've quoted Dyson's book Disturbing the Universe, where he describes designing extra-clean (low fission yield percentage) bombs for Project Orion and ending up helping Samuel Cohen's neutron bomb project. Dyson is extremely simplistic in everything, although he achieves a mature viewpoint to some extent by applying his simplistic analysis to both opposing viewpoints; by combining the results he often manages a reasoned evaluation of the clearest arguments on each side.
The best example is the contrast between Dyson's account of quantum mechanics in his 1958 Scientific American article, "Innovation in Physics", and his account, in the 1979 book Disturbing the Universe, of his arguments with Richard P. Feynman over Feynman's "path integrals" approach. Dyson in the 1958 article says quantum mechanics is purely mathematical, with nothing pictorial to understand; Dyson in the 1979 book says that in 1948, after Pocono, he and Feynman argued about this, with Feynman hitting back that Einstein's grand unified theory failed because it was just equations with no mechanistic pictorial physics to it, of the kind embodied in Feynman's famous "diagrams" of quantum field interactions between fundamental particles.
Project Orion was of course cancelled by President Kennedy after Dyson submitted crazy blueprints to Kennedy for a "Star Wars" battle cruiser spaceship. Kennedy was appalled, cancelled funding, and made sure Project Orion was buried by signing the Atmospheric Nuclear Test Ban Treaty to prevent it ever being tested with nuclear explosives. So NASA went the other way with the plaque on the Moon reading: "We came in Peace."
Maybe if Dyson and his comrades had had a bit more insight into marketing ideas, they wouldn't have tried to sell Orion to Kennedy as a space-based warship, but just as a very cheap way to get to Mars, burning up some of the nuclear weapons stockpiles on the way.
Gigaton weapons effects (1000 megatons or more) differ from nuclear weapons below 100 Mt mainly in the thermal and fireball phenomena. At the upper limit of gigaton "doomsday" weapons yields, you get into the kind of global "nuclear winter" phenomena from the K-T impact 65 million years ago which ended the reign of large cold-blooded dinosaurs and gave warm-blooded mammals a chance.
The thermal radiation emission occurs so slowly from yields above a gigaton that it doesn't ablate the surface of wood into a fire-preventing "smoke screen" over large areas like a brief thermal pulse from below 100 Mt. Instead, above a gigaton, the thermal pulse is like a long pulse of extra-intense sunlight which can gradually warm up wood to depth (not just surface heating which causes ablation), and cause wood to ignite directly.
The fireball is also bigger than the 7 km scale height of the atmosphere so that you get massive differences in air density between the top and bottom, causing rapid "ballistic" fireball rise rather than the normal buoyant rise that you get from nuclear tests below 100 Mt at low altitudes.
The many gigatons of the K-T event did cause climatic effects that killed the large cold-blooded dinosaurs and many ocean species which were temperature-sensitive, but it didn't kill cold blooded smaller reptiles or mammals or many species which survived and evolved happily afterwards...
So I think even K-T impact events are exaggerated in their effects. There is evidence that all the large mammals today have evolved from smaller ones left after the K-T impact event. E.g., there were no large mammals 65 million years ago; all the surviving mammals were very small and have since evolved into larger sized mammals. However, the simple, very low technology techniques even mouse sized mammals used to survive the K-T impact could be employed by intelligent people to survive a gigaton explosion.
As with a half-filled glass of water, the doom-mongers would view the K-T impact event as an example of the threat of extinction, as if the extinction of crazy big dinosaurs was a bad thing that extrapolates to human extinction threats. Others would see it differently and consider the survival of mammals under such circumstances as evidence of the difficulty in exterminating life, and thus of the survival possibilities for humans even in the worst events that have ever occurred in the history of the planet.
Because the earth rotates, any global smoke cloud that blocks sunlight and causes "nuclear winter" will be unevenly heated from this factor alone: the sunset and sunrise effects will cause expansion of air and thus winds to unevenly disperse smoke, allowing natural convection to occur, so that rain can be generated.
The burning of vegetation is accompanied by the emission not just of soot and CO_2 but also of water, since the mass of most vegetation contains a lot of water. So you will get self-induced rainout: when the soot and water vapour rise to high altitudes, the soot absorbs vapour and forms large "black rain" droplets which settle out under gravity, like the black rain at Hiroshima. (This is something the doom-mongers are quiet about in the climatic effects context, although they are happy to hype it in a false radioactive hazard context, ignoring the low specific radioactivity of the rain, since the radioactive mushroom cloud had been blown miles from the target area half an hour or more before the firestorm even started.)
Wind and rainfall will thus disperse and precipitate most of the soot within a week or two. That's a long enough "winter" spell to kill large cold-blooded dinosaurs, which can't take shelter because of their size and can't metabolize food at low temperatures, so they just come to a halt and die; but it's not long enough to kill the many species which can respond better to low temperatures.
Thank you very much.
But approximately what yield is needed to devastate (ignite) 1/3 of the territory of the US? I think that 9 gigatons is not enough. The doomsday bomb and the continent-buster must be different weapons. There were various Orions; the largest was 8,000,000 tons in weight. Both military and civilian applications were considered.
Apart from Orion, McNamara killed various defensive and offensive systems: Pluto, Dyna-Soar, B-70, WS-125A, Skybolt, AICBM, Advanced Minuteman, F-108, F-12, MK-16 (MIRV for Titan II), Sentinel, BAMBI, etc., etc., and restricted the deployment of strategic forces.
"But what a yield aprox. needed to devastate (ignite) 1/3 territory of US?"
Whether the bomb is a space burst or a low altitude burst, at that yield the fireball exceeds the scale height of the atmosphere (7 km) by a large factor so the fireball undergoes ballistic rise as described by Dolan in ENW and CNW. It goes up very quickly, and it radiates for a long time, so it basically radiates from extremely high altitude. It could certainly expose very large areas, although if there were heavy cloud cover between the fireball and the ground, that would mitigate the thermal effects. Air blast would have a very long duration at such yields, but even so it wouldn't be that impressive for a high altitude or space burst owing to the low density of the air: thermal radiation would be the primary effect carrying most of the energy.
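To put a rough number on the "exceeds the scale height by a large factor" claim, here is a minimal sketch. The W^0.4 scaling and the 0.9 km reference radius at 1 Mt are assumed round figures in the style of Glasstone's fireball scaling laws, not values taken from ENW or CNW, so treat the output as order-of-magnitude only:

```python
def fireball_radius_km(yield_mt):
    """Very rough maximum fireball radius for a low air burst.

    Assumes R ~ 0.9 km * (Y/1 Mt)^0.4, a Glasstone-style scaling
    with an assumed reference radius; order-of-magnitude only.
    """
    return 0.9 * yield_mt ** 0.4

SCALE_HEIGHT_KM = 7.0  # atmospheric density scale height

for y_mt in (1, 100, 1000, 9000):
    r = fireball_radius_km(y_mt)
    # Fireballs much bigger than the scale height see a large
    # density difference top-to-bottom, hence ballistic rise.
    regime = "ballistic rise" if r > SCALE_HEIGHT_KM else "buoyant rise"
    print(f"{y_mt:>5} Mt: fireball radius ~{r:5.1f} km -> {regime}")
```

Under these assumptions the crossover sits between 100 Mt (~6 km, still buoyant) and 1,000 Mt (~14 km, ballistic), consistent with the distinction drawn above between sub-100 Mt tests and gigaton "doomsday" yields.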
If you are asking for the yield needed to "devastate" such an area, you need to define the type of burst (e.g. altitude of burst) and what you mean by "devastate", e.g. what the target is (wood frame Japanese houses with blackout curtains in the windows etc., the flammable 1953 "Encore" nuclear test house full of newspapers with a big window facing ground zero with an unobstructed line-of-sight view, or modern steel and concrete city buildings?).
Many media people and politicians would say that a 1 kt nuclear explosion anywhere in America would "devastate" just about the whole country financially and by fallout contamination, citing the number of hospital beds in the USA compared to the maximum possible numbers of burns casualties, the expense of 100% effective decontamination of large fallout areas, and so on.
Very long thermal pulses result from gigaton yields at high altitude. This means
(1) It becomes possible for the heat pulse to actually cause solid wood to heat up into its depth so it can ignite eventually where the yield is high enough (instead of having merely the outer tenth of a millimetre "blown off" as smoke, without fire).
(2) The long duration of thermal energy delivery (minutes) gives people time to take cover, so failing to "duck and cover" becomes inexcusable. Everyone has time to evade a large fraction of the thermal pulse if they have some non-flammable shelter available. Over the widest area (out to the horizon as seen from the edge of the high altitude X-ray "pancake" fireball), the thermal pulse is just like the sun but more intense, so people will be able to avoid injury by taking the protective measures they would take against sunburn, such as going indoors or behind anything that gives some shade.
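The "minutes" figure for the pulse length can be sketched from the Glasstone and Dolan rule of thumb that the time to the second thermal maximum scales as t_max ~ 0.032 W^0.5 seconds (W in kilotons) for low-altitude air bursts, with most of the thermal energy delivered by about 10 t_max. Strictly, that scaling is not validated at gigaton yields; this just illustrates the trend:

```python
def pulse_scales(yield_kt):
    """Time to second thermal maximum and an effective pulse length.

    t_max ~ 0.032 * W^0.5 s (W in kt) is the Glasstone & Dolan
    low-altitude air-burst rule of thumb; taking ~10 * t_max as the
    effective emission time is a common convention. Both are rough
    and are extrapolated here well beyond tested yields.
    """
    t_max = 0.032 * yield_kt ** 0.5
    return t_max, 10.0 * t_max

for w_mt in (1, 100, 1000):
    t_max, t_eff = pulse_scales(w_mt * 1000)  # Mt -> kt
    print(f"{w_mt:>4} Mt: t_max ~{t_max:5.1f} s, "
          f"effective pulse ~{t_eff / 60:4.1f} min")
```

At 1 Mt the effective pulse is of order ten seconds; at a gigaton it stretches to several minutes, which is what makes evasive action practical at the larger yields.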
The biggest nuclear weapon yield I have seen thermal ignition predictions published for is 1,000 Mt (1 gigaton), in volume 1 of Robert U. Ayres's Hudson Institute report HI-519-RR, Environmental Effects of Nuclear Weapons, Fig. 2.1, page 2-3. (The reason why Ayres considered yields up to 1,000 Mt in this report was probably that the director of HI was Herman Kahn, who was interested in "doomsday devices".)
Ayres finds that 1,000 Mt detonated at 36.6 km altitude might produce 7 cal/cm^2 thermal flux at up to 265 km away on a clear day. However, on the previous page Ayres shows that the energy needed for ignition of newspaper increases with yield, from 7 cal/cm^2 at 1 Mt to 11 at 10 Mt and 25 at 100 Mt. Although ignition of wood is possible for gigaton yields due to the long duration of the heat pulse, you still need a lot of energy to achieve ignition temperatures, which limits the distance.
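Ayres's 265 km figure can be checked against simple inverse-square spreading. In this sketch the thermal fraction (1/3 of yield) and atmospheric transmittance (0.2) are assumed round numbers, not values taken from Ayres; the point is only that 1/R^2 geometry reproduces the right order of magnitude:

```python
import math

CAL_PER_KT = 1.0e12  # TNT equivalence: 1 kt = 1e12 calories

def thermal_fluence(yield_mt, range_km, thermal_fraction=0.33,
                    transmittance=0.2):
    """Free-field thermal fluence (cal/cm^2) from inverse-square
    spreading; thermal_fraction and transmittance are assumed
    round numbers for a clear day, order-of-magnitude only."""
    thermal_cal = yield_mt * 1000.0 * CAL_PER_KT * thermal_fraction
    r_cm = range_km * 1.0e5  # km -> cm
    return thermal_cal * transmittance / (4.0 * math.pi * r_cm ** 2)

print(f"{thermal_fluence(1000, 265):.1f} cal/cm^2 at 265 km from 1,000 Mt")
```

With these assumed constants the sketch gives roughly 7-8 cal/cm^2 at 265 km, close to Ayres's 7 cal/cm^2, which suggests his curve is essentially inverse-square spreading with a modest transmittance factor.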
The document that is available states that at that time (1957) another Class C weapon was under study: 18 megatons at 7,000 pounds weight.
(AFSWC, Technical report on nuclear weapon development, 1957.)
"A May 1959 Air Force briefing revealed some "possible military uses of
the Orion Vehicle," including reconnaissance and early-warning,
electronic countermeasures ("possibility to get a terrific number of
jammers over a given area"), anti-ICBM ("possibility of putting many
eary intercept missiles in orbit awaiting use"), and "ICBM, orbital,
or deep space weapons -- orders of magnitude increase in warhead
weights -- clustered warheads -- launch platforms, etc." Finally,
tere was "the Horrible weapon -- 1,650 -ton continent buster hanging
over the enemy's head as a deterrent.
The USAF Orion was a special model:
4,000 short tons gross weight.
250 feet in length and 85 feet in diameter.
ORION ICBM means an ICBM with 2,000 short tons of throw-weight; there would be a bunch of devices of around 1,000 Mt.
Continent-buster = Doomsday bomb.
Weight: 1,650 short tons. Yield would be >20,000 megatons.
It would be exploded over the USSR at 400 km altitude, literally turning the USSR into Hiroshima.
Hello, Nige.
Are you still visiting this blog, so I can ask about some little things that confuse me?