Saturday, March 26, 2011

Herman Kahn in fact and fiction: what he really thought about LBJ's Vietnam War policies


Above: Herman Kahn's solution to the problem of radioactive strontium contamination in foods after a nuclear war, on page 67 of On Thermonuclear War: you simply survey the foods and reserve the least contaminated food (Food A) for children and expectant mothers. Kahn was also aware of the easy countermeasures for iodine-131 and caesium-137. Iodine-131, which concentrates in milk, has a half-life of just 8 days, so you can keep cattle on winter feed to avoid pasture contamination, freeze milk, or turn contaminated milk into long-life powdered milk (which outlasts the iodine-131) or other long-life dairy products. Caesium-137 has a long physical half-life, but like potassium it doesn't last long inside people (half is eliminated every 70-140 days). Only strontium, which goes into bone, lasts a long time in the body and delivers large doses over decades, like radium. However, Kahn's 1959 testimony noted that there is a threshold dose for radium effects, because the low dose rate over a long period allows biological repair of DNA breaks (by DNA repair enzymes like protein P53):
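The iodine-131 countermeasure is just decay arithmetic: with a half-life of about 8 days, any storable dairy product only needs to sit on the shelf for a couple of months before the iodine-131 is effectively gone. A minimal sketch of that calculation (the 8.02-day precise half-life and the 0.1% target fraction are illustrative assumptions, not figures from Kahn):

```python
import math

# Iodine-131 half-life in days (~8 days as quoted; 8.02 more precisely).
HALF_LIFE_DAYS = 8.02

def fraction_remaining(days: float) -> float:
    """Fraction of the original I-131 activity left after 'days' of storage."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

def storage_days(target_fraction: float) -> float:
    """Days of storage needed for the activity to fall to 'target_fraction'."""
    return HALF_LIFE_DAYS * math.log2(1.0 / target_fraction)

print(round(fraction_remaining(HALF_LIFE_DAYS), 3))  # 0.5 after one half-life
print(round(storage_days(0.001)))                    # ~80 days to reach 0.1%
```

So powdered milk or cheese stored for about three months carries a negligible iodine-131 burden, which is the point of the countermeasure.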

Herman Kahn (RAND Corp.): ... I suggest that we should be willing to accept something like 50 to 100 sunshine units in our children ...

Representative Holifield: We have been using the term “strontium unit” rather than “sunshine.” Some of us are allergic to this term “sunshine”. We prefer the term “strontium”. ...

Senator Anderson: I think that term sunshine came because the first time they said if the fallout came down very, very slowly, that was good for you. And then later they said if it came down very fast, that was good for you. We decided to take the sunshine, in view of everything.

Herman Kahn (RAND Corp.): I prefer not getting into that debate. I deal in a number of controversial subjects, but I try to keep the number down. … But I might point out, no one has ever seen a bone cancer directly attributable to radioactive material in the bone at less than the equivalent of 20 to 30 microcuries. … Ten microcuries of Sr-90 per kg of calcium [an adult has typically 1 kg of bone calcium, so this implies 10,000 strontium units in the bone] would mean a dose of about 20 roentgens a year in the bones.

- June 1959 U.S. Congressional Hearings on the Biological and Environmental Effects of Nuclear War, page 900.


On pages 899-900 of the June 1959 U.S. Congressional Hearings on the Biological and Environmental Effects of Nuclear War, Herman Kahn testified about the scare-mongering exaggeration that 10 mCi of Sr-90 per square mile produces 1 pCi of Sr-90 per gram of bone calcium (“1 sunshine unit”), so that with the legal limit of 100 pCi of Sr-90 per gram of bone calcium, 10 megatons of fission products spread uniformly over the million square miles of U.S. farmland would prohibit agriculture for decades. However, as Kahn pointed out, this is a 100-fold exaggeration of the affected area: it ignores fractionation, weathering of strontium below the root-uptake depth, and the non-uniformity of fallout deposition (the concentration in “overkill” hotspots near the explosions reduces the 100-strontium-unit area to 10% of that estimated for uniform contamination). Kahn then points out on page 900 that there is a simple resolution to this strontium contamination problem: food with less than 100 pCi of Sr-90 per gram of calcium would be reserved for “children and pregnant mothers”, while food with higher contamination would be given to adults (whose strontium uptake is 8 times smaller than young children’s, because adult bones are fully formed).
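Kahn's figures give a simple linear conversion from body burden to annual bone dose: 10 µCi of Sr-90 per kg of bone calcium is 10,000 strontium ("sunshine") units and delivers about 20 roentgens/year, i.e. about 0.002 R/year per unit. A hedged sketch of that arithmetic (the function name and the example burden are ours, not Kahn's):

```python
# Kahn's testimony (p. 900): 10 microcuries of Sr-90 per kg of bone calcium
# = 10,000 strontium ("sunshine") units, giving about 20 roentgens/year.
R_PER_YEAR_PER_UNIT = 20.0 / 10_000  # about 0.002 R/year per strontium unit

def annual_bone_dose(strontium_units: float) -> float:
    """Approximate annual bone dose (roentgens) for a given Sr-90 burden."""
    return strontium_units * R_PER_YEAR_PER_UNIT

# At the 100-unit legal limit discussed in the hearings:
print(round(annual_bone_dose(100), 3))  # 0.2 R/year
```

On this scaling even Kahn's suggested 50-100 units for children corresponds to a fraction of a roentgen per year, far below the 20-30 microcurie (20,000-30,000 unit) level he cites as the lowest at which bone cancer had ever been directly attributed to radioactivity in bone.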


Caesium-137 is retained in typical American soils, so there is little uptake by plants and animals. In addition, unlike strontium and iodine, which concentrate in the bones and thyroid, caesium is quickly eliminated from the body at a rate of 50% every 70-140 days. The U.S. Consumers Union in 1961 found that the mean American intake of natural potassium-40 in diet was 4,000 pCi/day, compared to just 50 pCi/day from fallout caesium-137, 10 from strontium-90, and 0.1 from plutonium-239. (Source: Professor Cyril M. Comar, Fallout from Nuclear Tests, 1963, page 24.)
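To put those Consumers Union figures in proportion, each fallout nuclide's dietary activity can be compared to the natural potassium-40 baseline (the dictionary layout is just an illustrative way of organizing the quoted numbers):

```python
# Mean 1961 U.S. dietary intakes in pCi/day, as quoted from Comar (1963), p. 24.
intake_pci_per_day = {
    "K-40 (natural)": 4000.0,
    "Cs-137 (fallout)": 50.0,
    "Sr-90 (fallout)": 10.0,
    "Pu-239 (fallout)": 0.1,
}

natural = intake_pci_per_day["K-40 (natural)"]
for nuclide, activity in intake_pci_per_day.items():
    print(f"{nuclide}: 1/{natural / activity:.0f} of the natural K-40 intake")
# Fallout Cs-137 was only 1/80th of the natural K-40 activity in the diet,
# Sr-90 was 1/400th, and Pu-239 was 1/40,000th.
```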

For a good technical debunking of low-level radiation media hype scare-mongering please see: http://www.broadinstitute.org/~ilya/alexander_shlyakhter/92h_radiation_risk_leukemia_cancer.pdf.

At low dose rates, you can take vast doses of radiation spread over a period of decades; it's only when you receive the dose too quickly for DNA repair enzymes to fix it correctly that you get into trouble. So it's the radiation "dose rate", not the "dose", that actually determines the hazard or benefit. The Oak Ridge National Laboratory megamouse project run by Dr Russell in the 1960s (in which 7 million mice were exposed to various dose rates to get statistically reliable cancer and genetic effects data) clearly showed that the linear no-threshold dogma from Edward Lewis and others at the 1957 fallout hearings was wrong. Female mice had a dose rate threshold of 0.54 cGy/hour for an increase in the mutation rate. That's massive: 54,000 times natural background. The 1950s data was based on maize plants and Muller's fruitflies, which don't have long lifespans and so don't have elaborate DNA repair enzyme systems to repair DNA breaks.
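The "54,000 times natural background" figure is easy to check if we assume a natural background dose rate of about 0.1 microgray per hour (roughly 0.9 mGy/year; this background value is our assumption, not stated in the post):

```python
# Russell's female-mouse mutation threshold: 0.54 cGy/hour.
threshold_ugy_per_hour = 0.54e-2 * 1e6   # 0.54 cGy/h = 5,400 µGy/h

# Assumed natural background dose rate (NOT from the post): ~0.1 µGy/hour,
# i.e. roughly 0.9 mGy/year of gamma background.
background_ugy_per_hour = 0.1

print(round(threshold_ugy_per_hour / background_ugy_per_hour))  # 54000
```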

“Today we have a population of 2,383 [radium dial painter] cases for whom we have reliable body content measurements. . . . All 64 bone sarcoma [cancer] cases occurred in the 264 cases with more than 10 Gy [1,000 rads], while no sarcomas appeared in the 2,119 radium cases with less than 10 Gy.”

- Dr Robert Rowland, Director of the Center for Human Radiobiology, Bone Sarcoma in Humans Induced by Radium: A Threshold Response?, Proceedings of the 27th Annual Meeting, European Society for Radiation Biology, Radioprotection colloquies, Vol. 32CI (1997), pp. 331-8.


Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), discrediting Lewis’s linear no-threshold theory on pages 61 and 78-79:

“... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls. ... All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis's work as others have done, 'It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation' [Leading article in Science, vol. 125, p. 963, 1957]. This is just wishful journalese. The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to atom bombs, where the radiations are qualitatively different and, more important, the dose-rate outstandingly different.”



Above: the old "linear, no threshold (LNT)" radiation effects law, popularized in 1957 by geneticist Edward Lewis from organisms lacking DNA repair (fruit flies!), needs to be replaced by one that does not just describe excess risks from total doses irrespective of time. The new law takes account of the rate of repair of damage by DNA repair enzymes as a function of dose rate, for not just excess cancer risks but also the natural cancer and genetic defect rates. If radiation dose rate R is applied constantly over decades, then the absolute (total, including natural incidence) cancer and genetic defect risk, X, is simply the sum of two terms:

X = Ae^(-BR) + CR,


where the first term, Ae^(-BR), represents the natural cancer or genetic risk (which falls exponentially as the radiation dose rate rises, due to enhanced metabolism being devoted to DNA repair enzymes), and the second term, CR, represents the old "linear no-threshold" law for radiation damage that escapes repair (which in the Cold War was the only term assumed to exist, based on ignorance of DNA repair). Therefore, the new law is just (1) the addition of a term for DNA repair effects, and (2) a reformulation for absolute (total) cancer risk, rather than just the presumed "excess" risk above the natural incidence! (The constant A is easily determined from the natural incidence of the cancer or genetic risk, the constant B is determined by the new data on the radium dial painters and the Taiwan incident, while the constant C can be estimated for various dose rates from the old "linear, no-threshold" theory.)
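The shape of this two-term law can be sketched numerically. The constants below are made up purely for illustration (they are NOT fitted to the radium dial painter or Taiwan data); the sketch just shows the qualitative behavior claimed: the law reduces to the natural incidence A at zero dose rate, dips below A at small dose rates (hormesis), and is eventually dominated by the linear term:

```python
import math

# Illustrative sketch of the post's two-term absolute-risk law,
#   X(R) = A*exp(-B*R) + C*R,
# with invented constants (A, B, C are assumptions, not fitted values).
A = 0.25   # natural lifetime cancer incidence (assumed, dimensionless)
B = 2.0    # per (Gy/year): strength of the repair-stimulation term (assumed)
C = 0.05   # per (Gy/year): residual linear no-threshold slope (assumed)

def absolute_risk(R: float) -> float:
    """Total (not excess) risk at constant dose rate R (Gy/year)."""
    return A * math.exp(-B * R) + C * R

# At R = 0 the law reduces to the natural incidence A:
print(absolute_risk(0.0))       # 0.25
# At a small dose rate the exponential fall of the first term outweighs
# the linear rise of the second, so total risk dips below A:
print(absolute_risk(0.1) < A)   # True
```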

See the article by Doogab Yi, “The coming of reversibility: The discovery of DNA repair between the atomic age and the information age”, Historical Studies in the Physical and Biological Sciences, v37 (2007), Supplement, pp. 35–72:

“This paper examines the contested ‘biological’ meaning of the genetic effects of radiation amid nuclear fear during the 1950s and 1960s. In particular, I explore how the question of irreversibility, a question that eventually led to the discovery of DNA repair, took shape in the context of postwar concerns of atomic energy. Yale biophysicists who opposed nuclear weapons testing later ironically played a central role in the discovery of DNA excision repair, or ‘error-correcting codes’ that suggested the reversibility of the genetic effects of radiation. At Yale and elsewhere, continuing anticipation of medical applications from radiation therapy contributed to the discovery of DNA repair. The story of the discovery of DNA repair illustrates how the gene was studied in the atomic age and illuminates its legacy for the postwar life sciences. I argue that it was through the investigation of the irreversibility of the biological effects of radiation that biologists departed from an inert view of genetic stability and began to appreciate the dynamic stability of the gene. Moreover, the reformulation of DNA repair around notions of information and error-correction helped radiobiologists to expand the relevance of DNA repair research beyond radiobiology, even after the public concerns on nuclear fallout faded in the mid-1960s.”


In another post, we examine in detail the May-June 1957 Hearings Before the Special Subcommittee on Radiation of the Joint Committee on Atomic Energy, U.S. Congress, The Nature of Radioactive Fallout and Its Effects on Man, where the false dose-threshold (not dose-rate-threshold) theory was publicly killed off (in a political-journalism scrum sense, not a scientific evidence sense) by a consortium of loud-mouthed and physically ignorant fruitfly and maize geneticists (headed by Nobel Laureates Muller and Lewis), with only an incompetent and quiet defense of the scientific data from cancer radiotherapy experts, whose experience was that high dose rates cause more damage than low dose rates. The argument the geneticists made was that the genetic effects of radiation on fruitflies and maize showed no signs of dose rate effects or dose threshold effects. They then extrapolated from flies and maize to predict the same for human beings, and they also claimed that this genetic result should apply to all normal cell division (somatic) radiation effects, not just genetic effects! Glasstone summarized this linear no-threshold theory on page 496 of the 1957 edition of The Effects of Nuclear Weapons:

"There is apparently no amount of radiation, however small, that does not cause some increase in the normal mutation frequency. The dose rate of the radiation exposure or its duration have little influence; it is the total accumulated dose to the gonads that is the important quantity."


Flies and seasonal plants don't need DNA repair enzymes, which is why they show no dose rate dependence: they simply don't live long enough to run a serious cancer risk from DNA copying errors during cell divisions. This is not so for humans, or even mice. Glasstone and Dolan write in the 1977 edition of The Effects of Nuclear Weapons, pages 611-612 (paragraphs 12.209-12.211):

"From the earlier studies of radiation-induced mutations, made with fruitflies, ... The mutation frequency appeared to be independent of the rate at which the radiation dose was received. ... More recent experiments with mice, however, have shown that these conclusions must be revised, at least for mammals.

"... in male mice ... For exposure rates from 90 down to 0.8 roentgen per minute ... the mutation frequency per roentgen decreases as the exposure rate is decreased.

"... in female mice ... The radiation-induced mutation frequency per roentgen decreases continuously with the exposure rate from 90 roentgens per minute downward. At an exposure rate of 0.009 roentgen per minute [0.54 roentgen/hour], the total mutation frequency in female mice is indistinguishable from the spontaneous frequency. There thus seems to be an exposure-rate threshold below which radiation-induced mutations are absent or negligible, no matter how large the total (accumulated) exposure to the female gonads, at least up to 400 roentgens."


The Oak Ridge Megamouse Radiation Exposure Project

Reference: W. L. Russell, “Reminiscences of a Mouse Specific-Locus Test Addict”, Environmental and Molecular Mutagenesis, v14 (1989), Supplement 16, pp. 16–22.


The source of Glasstone and Dolan’s dose-rate genetic effects threshold data (replacing the fruitfly and maize data of Muller, Lewis and other 1950s geneticists, who falsely extrapolated directly from insects and plants to humans) is the Oak Ridge National Laboratory “megamouse project” of Liane and William Russell. This project exposed seven million mice to a variety of radiation situations to obtain statistically significant mammal data showing the effect of dose rate upon the DNA mutation risk (which in somatic cells can cause cancer). Seven different locus mutations were used, and they showed a dose-rate dependence of the genetic risk which could only be explained by DNA repair processes. This contradicted the insect and plant response, which showed no dose rate effect. With the results of this enormous mammal radiation exposure project, the observed human effects of high doses and high dose rates could be accurately extrapolated to low doses and low dose rates, without using the false linear, no-threshold model that applies only to insects and plants, which lack advanced mammalian DNA repair enzymes like P53:

“As Hollaender remembers it: ‘Muller and Wright were the only two geneticists who backed the mouse genetics study. The rest of the geneticists thought we were wasting our time and money!’”

- Karen A. Rader, “Alexander Hollaender’s Postwar Vision for Biology: Oak Ridge and Beyond”, Journal of the History of Biology, v39 (2006), pp. 685–706.


For an interesting discussion of the way that the radiation controversy changed thinking about DNA, from the fixed chemical structure believed in 1957 (after the structure of DNA was discovered in its misleadingly non-cellular solid crystal form, required for X-ray diffraction analysis) to today’s far more dynamic picture of DNA in the cell nucleus as a delicate strand that is repeatedly broken (several times a minute) by normal Brownian bombardment from water molecules at body temperature, and repaired by DNA repair enzymes like protein P53, see the article by Doogab Yi, “The coming of reversibility: The discovery of DNA repair between the atomic age and the information age”, Historical Studies in the Physical and Biological Sciences, v37 (2007), Supplement, pp. 35–72.


‘... it is important to note that, given the effects of a few seconds of irradiation at Hiroshima and Nagasaki in 1945, a threshold near 200 mSv may be expected for leukemia and some solid tumors. [Sources: UNSCEAR, Sources and Effects of Ionizing Radiation, New York, 1994; W. F. Heidenreich, et al., Radiat. Environ. Biophys., vol. 36 (1999), p. 205; and B. L. Cohen, Radiat. Res., vol. 149 (1998), p. 525.] For a protracted lifetime natural exposure, a threshold may be set at a level of several thousand millisieverts for malignancies, of 10 grays for radium-226 in bones, and probably about 1.5-2.0 Gy for lung cancer after x-ray and gamma irradiation. [Sources: G. Jaikrishan, et al., Radiation Research, vol. 152 (1999), p. S149 (for natural exposure); R. D. Evans, Health Physics, vol. 27 (1974), p. 497 (for radium-226); H. H. Rossi and M. Zaider, Radiat. Environ. Biophys., vol. 36 (1997), p. 85 (for radiogenic lung cancer).] The hormetic effects, such as a decreased cancer incidence at low doses and increased longevity, may be used as a guide for estimating practical thresholds and for setting standards. ...

‘Though about a hundred of the million daily spontaneous DNA damages per cell remain unrepaired or misrepaired, apoptosis, differentiation, necrosis, cell cycle regulation, intercellular interactions, and the immune system remove about 99% of the altered cells. [Source: R. D. Stewart, Radiation Research, vol. 152 (1999), p. 101.] ...

‘[Due to the Chernobyl nuclear accident in 1986] as of 1998 (according to UNSCEAR), a total of 1,791 thyroid cancers in children had been registered. About 93% of the youngsters have a prospect of full recovery. [Source: C. R. Moir and R. L. Telander, Seminars in Pediatric Surgery, vol. 3 (1994), p. 182.] ... The highest average thyroid doses in children (177 mGy) were accumulated in the Gomel region of Belarus. The highest incidence of thyroid cancer (17.9 cases per 100,000 children) occurred there in 1995, which means that the rate had increased by a factor of about 25 since 1987.

‘This rate increase was probably a result of improved screening [not radiation!]. Even then, the incidence rate for occult thyroid cancers was still a thousand times lower than it was for occult thyroid cancers in nonexposed populations (in the US, for example, the rate is 13,000 per 100,000 persons, and in Finland it is 35,600 per 100,000 persons). Thus, given the prospect of improved diagnostics, there is an enormous potential for detecting yet more [fictitious] "excess" thyroid cancers. In a study in the US that was performed during the period of active screening in 1974-79, it was determined that the incidence rate of malignant and other thyroid nodules was greater by 21-fold than it had been in the pre-1974 period. [Source: Z. Jaworowski, 21st Century Science and Technology, vol. 11 (1998), issue 1, p. 14.]’


- Zbigniew Jaworowski, 'Radiation Risk and Ethics: Health Hazards, Prevention Costs, and Radiophobia', Physics Today, April 2000, pp. 89-90.

Protein P53, discovered only in 1979, is encoded by gene TP53, which occurs on human chromosome 17. P53 also occurs in other mammals, including mice, rats and dogs. P53 is one of the proteins which continually repair breaks in DNA, which breaks easily at body temperature: the DNA in each cell of the human body suffers at least two single-strand breaks every second, and one double-strand (i.e. complete double helix) DNA break at least every 2 hours (5% of radiation-induced DNA breaks are double-strand breaks, while only 0.007% of spontaneous DNA breaks at body temperature are double-strand breaks)! Cancer occurs when several breaks in DNA happen to occur by chance at nearly the same time, giving several loose strand ends at once, which repair proteins like P53 then rejoin incorrectly, causing a mutation which can be proliferated somatically. This cannot occur when only one break is present, because P53 will then reattach the correct ends.
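Those spontaneous-break figures are internally consistent, which is worth checking: at two single-strand breaks per second a cell suffers 14,400 breaks per 2 hours, so one double-strand break per 2 hours is a fraction of about 0.007%, matching the quoted spontaneous double-strand figure. A quick arithmetic sketch:

```python
# Consistency check on the post's spontaneous DNA-break figures:
# >= 2 single-strand breaks per second per cell, and ~1 double-strand
# break per 2 hours, should give a DSB fraction near 0.007%.
ssb_per_second = 2
seconds_per_2_hours = 2 * 3600
breaks_per_2_hours = ssb_per_second * seconds_per_2_hours  # 14,400
dsb_fraction_percent = 100 * 1 / breaks_per_2_hours
print(round(dsb_fraction_percent, 3))  # 0.007

# Radiation-induced breaks are ~5% double-strand, i.e. per break a
# radiation break is about 700 times more likely to be a DSB:
print(round(5 / dsb_fraction_percent))  # 720
```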





Above: a new edition of Herman Kahn's 1965 nuclear weapons classic, On Escalation, was published last year with a foreword by Kahn's strategist friend Thomas C. Schelling. There is a great deal of pedantic and bureaucratic "fluff" in Kahn's books, including the invention and definition of new jargon and other semantics, and his vertical 44-rung escalation ladder (which should now be replaced by an "escalation tree", because escalation can of course branch off in various directions, such as the economic collapse of the USSR, rather than being a situation where "all roads lead to Rome", i.e. to thermonuclear war). So his briefer June 1959 Congressional testimony is more concise and enlightening.



The most interesting feature of the 2010 edition is the inclusion of Kahn's January 1968 "Foreword to the paperback edition", giving Kahn's strong views on the failure of escalation in the Vietnam war. Kahn blamed the failure of the Vietnam campaign on a misunderstanding of the book by politicians. The book is not about winning a war but about preventing escalation to city-busting collateral damage, and is largely based on a study of escalation by Britain and Germany during WWII (see Table 2 on page 29 of On Escalation), not about the escalation necessary to favorably end a war, such as the use of nuclear weapons against Hiroshima and Nagasaki in August 1945. The Hudson Institute research reports behind Kahn's On Escalation were commissioned by the Martin Company to de-escalate the arms race.

This is quite a different proposition from ending a hot war! As Kahn points out, if you're in a war and want to win, you don't de-escalate, you don't tell the enemy your limitations and guarantee to the enemy what weapons you won't be using (nuclear), and you don't take things slowly. In other words, to win a hot war you do the exact opposite of the strategy you must use to de-escalate an arms race from turning to city-busting mass destruction. If you slow things down during a hot war, you give the enemy time to re-group, recover, and go on fighting.

So what happened with Vietnam was that Kahn's well-publicised 1965 On Escalation ideas for preventing escalation to nuclear war were misapplied from the Cold War arms race to the hot fighting during the Vietnam war. As mentioned in a previous post, Kahn had the same problem of popular misunderstanding with his earlier 1960 book On Thermonuclear War, written while he was still at the RAND Corporation (before he started the Hudson Institute). Pseudo-critics like Scientific American's lawyer James R. Newman seized on On Thermonuclear War's page 20, Table 3, "Tragic but distinguishable postwar states", and then claimed falsely that Kahn was advocating a preventative war or trying to downplay the consequences. As his text under the table shows, this misrepresentation of Kahn's objective is the opposite of the reality. Kahn's Table 3 was not trying to "play down" nuclear war, but to show the wide range of possibilities for different scenarios of all-out war, and how the GNP economic recovery time varies by a factor of 100 as a function of the amount of city damage involved, which emphasises the need to avoid escalating a nuclear war beyond military (counterforce) attacks into the city-busting (countervalue) domain.



Above: Herman Kahn's 1965 book On Escalation, from pages 25 to 33, examined "An Example of Restraint and Negotiation in Total War (World War II)". Kahn's point was that even when dealing with Adolf Hitler's Nazis, the predicted all-out immediate destruction of London did not occur when Britain declared war on Germany (Germany did not declare war first) in September 1939. Unless an enemy decides to launch a surprise pre-emptive strike like Japan's "Operation AI" against Pearl Harbor on 7 December 1941, there is escalation. Kahn's On Thermonuclear War at page 412 blames Pearl Harbor's unpreparedness on the American complacency that Pearl Harbor was only 30-40 feet deep, compared to the 75-150 feet of water traditionally required for the operation of torpedoes: "Admiral Onishi immediately grasped that the heart of the problem lay in achieving surprise and in developing techniques for exploiting the surprise by launching torpedoes in shallow water ... Instead of admiring the clear way in which nature had protected the carriers, he succeeded in his program, actually developing torpedoes that could be used in the shallow waters of Pearl Harbor." Even with a pre-emptive surprise attack, the aim of such an attack is to try to destroy military targets rather than civilian cities. So it is a counterforce attack, not a countervalue attack. Because harmful nuclear effects, including blast, heat flash and fallout radiation, fall off very quickly with distance (the exception is EMP), such a military pre-emptive surprise attack is not in itself a cause of mass destruction of the civilian infrastructure. Al Qaeda-style terrorist attacks are an example of mass destruction in a surprise attack with no escalation, but this example doesn't disprove Kahn's escalation analysis in other situations, such as when dealing with the Nazi dictator Hitler.

In analysing the escalation between Britain and Germany to countervalue city bombing in World War II, Kahn on page 31 emphasises the problem of the accuracy of bombing military targets, and the influence of the decision by each side to attack at night to reduce the effectiveness of the other side's anti-aircraft guns and fighter defenses. The night-time bombing was relatively very inaccurate, unable to target city factories without collateral damage. Once collateral damage was done by one side, the other side would retaliate with general anti-civilian area bombing, thus escalating the use of bombers from military to civilian targets.

On page 32, Kahn quotes evidence that in 1935, Hitler had proposed limits to aerial bombing and advocated a policy of tactical use of bombers for military purposes, and on page 34 he argues: "if the British had known what the Blitz would be like, they would not in 1939 have been restrained by fear of a 'knockout blow'." What he omits is the influence on the public (and British politicians) of the media hype and lying propaganda about the 26 April 1937 bombing of Guernica during the Spanish Civil War, where terrific destruction was done on a town with no proper civil defence.

Additionally, in a contribution to Seymour Melman's 1962 otherwise anti-civil defense compendium, No Place to Hide, Kahn explained that one of the greatest exaggerations of bombing effects before World War II concerned psychological casualties: "That even skilled psychiatrists can be mistaken is shown by their predictions prior to World War II that if London were bombed, the psychological casualties would outnumber the physical by three to one. (Reference: Richard M. Titmuss, Problems of Social Policy, H. M. Stationery Office, London, 1950, page 20.)" The problems of British exaggerations of bombing effects before WWII have been gone into at length in previous posts on this blog. First, it was assumed that the exaggerations were "erring on the side of caution", when in fact they were just plain lies with massive negative effects: millions were killed because Britain, coerced by fear of exaggerated bombing effects (and ignoring the effectiveness of civil defense countermeasures), failed to deter the Nazis in time to prevent WWII. Second, the exaggerators assumed all the same things that the nuclear age exaggerators assumed.

Before WWII, the predictions took casualty rates for people standing in the open watching the bombs fall in July 1917, just as decades later the exaggerated nuclear war casualty "predictions" (both in unclassified reports by civil defense authorities and in propaganda by civil defense critics) assumed the casualty rates that applied to people standing in the open, watching the nuclear bombs fall on Hiroshima and Nagasaki. It's like using the casualty rates for the first use of machine guns in war, when people didn't know to drop to the ground but remained "sitting duck targets", standing in the open. As with conventional weapons, any duck and cover protects against burns, flying glass, and displacement by blast winds. Another tactic of the ban-the-bomb movement is to focus on individual cases: horrific photographs of people with 100% area burns, who were burned in the firestorm hours after the explosions (the photos being deceitfully presented as thermal flash casualties, despite the proved facts about the line-of-sight directionality of the thermal flash at all ranges in Hiroshima). You could more honestly print photos of gasoline-burned bodies from peacetime car accidents in a ban-the-car campaign. However, the public is accustomed to accepting lies about nuclear weapons effects. It's all payback for the initial secrecy of nuclear weapons.

Responding to glib anti-civil defense claims that "brave men never hide in holes", Kahn's 1962 contribution retorts: "as many combat veterans of World War II and the Korean conflict can attest, under a wide range of circumstances, brave men do and should hide in holes." In that 1962 contribution Kahn reprints a satirical compilation of spurious anti-lifeboat (anti-civil defense) theses from a 30 October 1961 letter to the Harvard Crimson, calling it "Perhaps the best, if somewhat satirical, summary of the arguments against civil defense measures," and adding: "To make the satire more complete: the added weight of lifeboats will no doubt increase the risk that the ship might sink of its own accord."





Above: Kahn's 1960 On Thermonuclear War table for different USSR-US nuclear war scenarios, emphasising how variations in targeting and civil defense produce different levels of destruction, with varying economic recovery times to pre-war GNP. The relationship of recovery time to city destruction is based on resolving the country into two units, urban and rural. The idea is that the surviving rural population can restore economic growth after the attack at an exponential rate, as occurred when the USSR recovered in 6 years after 33% of its economy was devastated in WWII (as Kahn states on page 132). As Herman Kahn himself emphasised in his June 1959 testimony to the U.S. Congressional Hearings on the Biological and Environmental Effects of Nuclear War, this assumption itself presupposes a particular set of conditions. For instance, a long protracted war that goes on for decades, like the Hundred Years' War, would not be conducive to rapid economic recovery until the war had ended.



Above: one oft-repeated false attack on Kahn was the "missile gap" controversy. On page 197 of his 1960 On Thermonuclear War, Kahn argues that with 125 ICBMs, Khrushchev could have launched a first strike against SAC's 25 "soft" air bases in 1957 with an odds-on chance of crippling American defenses scot-free. As we now know, Russia had only 4 prototype ICBMs available then, which were probably too inaccurate even for soft targets like aircraft. However, Russia was in a better state than America, whose Vanguard missile program was literally blowing up on the launchpad in embarrassing failures. The American B-52 long-range bombers were in fact kept lined up on SAC air bases like sitting ducks, blast-vulnerable targets for a surprise missile attack, and it would take only 25-40 minutes for ICBMs launched from fixed Russian silos to reach their targets in America. However, it was Albert Wohlstetter at the RAND Corp who played up the missile gap to argue for a second-strike-capable Triad of American bombers on airborne alert, missiles in hardened silos, and SLBMs in submarines hidden at sea, to stabilize deterrence against the need to "launch on warning". In particular, many forget that President John F. Kennedy hyped the unclassified, exaggerated missile gap to win the election over Nixon, who was disadvantaged because Eisenhower's Administration knew from secret intelligence that the missile gap was exaggerated. After Kennedy was informed in January 1961 that the missile gap was exaggerated, he authorized the ill-fated 17-19 April 1961 Bay of Pigs invasion of Cuba, which failed to remove Fidel Castro. Then, diverting attention from the failure in Cuba, on 25 May 1961 Kennedy made his famous "moon in this decade" speech to Congress, which in fact had more to say about the need for civil defense and American military assistance in the Vietnam conflict than about a trip to the moon:

President John F. Kennedy
Delivered in person before a joint session of Congress
May 25, 1961 ...

No role in history could be more difficult or more important. We stand for freedom. ... I am here to promote the freedom doctrine. The great battleground for the defense and expansion of freedom today is the whole southern half of the globe - Asia, Latin America, Africa and the Middle East - the lands of the rising peoples. Their revolution is the greatest in human history. They seek an end to injustice, tyranny, and exploitation. ... theirs is a revolution which we would support regardless of the Cold War, and regardless of which political or economic route they should choose to freedom. For the adversaries of freedom did not create the revolution; nor did they create the conditions which compel it. ... They send arms, agitators, aid, technicians and propaganda to every troubled area. But where fighting is required, it is usually done by others - by guerrillas striking at night, by assassins striking alone - assassins who have taken the lives of four thousand civil officers in the last twelve months in Vietnam alone - by subversives and saboteurs and insurrectionists, who in some cases control whole areas inside of independent nations. ... We stand, as we have always stood from our earliest beginnings, for the independence and equality of all nations. This nation was born of revolution and raised in freedom. And we do not intend to leave an open road for despotism. ...

We would be badly mistaken to consider their problems in military terms alone. For no amount of arms and armies can help stabilize those governments which are unable or unwilling to achieve social and economic reform and development. Military pacts cannot help nations whose social injustice and economic chaos invite insurgency and penetration and subversion. The most skillful counter-guerrilla efforts cannot succeed where the local population is too caught up in its own misery to be concerned about the advance of communism. ... in Laos, Vietnam, Cambodia, and Thailand, we must communicate our determination and support to those upon whom our hopes for resisting the communist tide in that continent ultimately depend. Our interest is in the truth. ...

One major element of the national security program which this nation has never squarely faced up to is civil defense. This problem arises not from present trends but from national inaction in which most of us have participated. In the past decade we have intermittently considered a variety of programs, but we have never adopted a consistent policy. Public considerations have been largely characterized by apathy, indifference and skepticism ... this deterrent concept assumes rational calculations by rational men. And the history of this planet, and particularly the history of the 20th century, is sufficient to remind us of the possibilities of an irrational attack, a miscalculation, an accidental war, which cannot be either foreseen or deterred. It is on this basis that civil defense can be readily justifiable - as insurance for the civilian population in case of an enemy miscalculation. It is insurance we trust will never be needed - but insurance which we could never forgive ourselves for foregoing in the event of catastrophe.

Once the validity of this concept is recognized, there is no point in delaying the initiation of a nation-wide long-range program of identifying present fallout shelter capacity and providing shelter in new and existing structures. Such a program would protect millions of people against the hazards of radioactive fallout in the event of large-scale nuclear attack. Effective performance of the entire program not only requires new legislative authority and more funds, but also sound organizational arrangements.

Therefore, under the authority vested in me by Reorganization Plan No. 1 of 1958, I am assigning responsibility for this program to the top civilian authority already responsible for continental defense, the Secretary of Defense ... no insurance is cost-free; and every American citizen and his community must decide for themselves whether this form of survival insurance justifies the expenditure of effort, time and money. For myself, I am convinced that it does. ...

Finally, if we are to win the battle that is now going on around the world between freedom and tyranny, the dramatic achievements in space which occurred in recent weeks [Russian Yuri Gagarin became the first person to orbit the earth on 12 April 1961] should have made clear to us all, as did the Sputnik in 1957, the impact of this adventure on the minds of men everywhere, who are attempting to make a determination of which road they should take. ... I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.


According to page 72 of Robin Clarke's 1971 Science of War and Peace, a Gallup poll "almost immediately" after this 25 May 1961 speech by Kennedy showed that 58% of Americans opposed Kennedy's plan to visit the moon before 1970. Both the American and Russian space exploration efforts began with Drs Dornberger and von Braun's Nazi V2 missile research at Peenemünde on the Baltic Coast. The first V2 prototype was tested on 3 October 1942. The V2 had a one-ton warhead (which could easily carry modern thermonuclear weapons), and was accelerated to a maximum velocity of 3,600 miles/hour by a rocket with 28 tons of thrust which used 10 tons of fuel (alcohol and liquid oxygen), giving it a range of 200 miles. Operation Paperclip recruited 127 German rocket scientists to White Sands Missile Range where they rebuilt and tested V2s, but it was not until the Korean War broke out in 1950 that von Braun was sent to the Redstone Arsenal in Alabama and authorized to develop a U.S. Army missile with a 500-mile range. Finally, in October 1953, the Teapot Committee chaired by mathematician and computer pioneer Dr John von Neumann studied the possibility of placing a "dry" (lithium-deuteride fusion fuelled) H-bomb on an ICBM. The first American H-bomb test in 1952, Mike, was a liquid deuterium unit requiring a large refrigeration plant to keep it liquid, but in 1954 various types of dry lithium-deuteride H-bombs were successfully tested at Operation Castle.

The missile gap threat between America and Russia was routinely dismissed by the media as scare-mongering until on 4 October 1957 the world's first artificial satellite, Sputnik 1, orbited the earth at 18,000 miles/hour at 215-939 km altitude for 3 months. It was an 84 kg spacecraft containing a 20 and 40 MHz radio transmitter, and demonstrated that Russia had by then achieved more successful rocket technology than America (echoes of the technological successes that Nazi scientists had achieved with their V1 cruise missile and V2 IRBM). But what really worried Kahn was that, less than a month after Sputnik 1 was launched, Russia launched Sputnik 2 on 3 November 1957, a massive 4 metre high, 508 kg cone-shaped spacecraft containing a living dog (Laika), which proved that Russia had the capability to launch masses equivalent to nuclear warheads. In December 1957, America tried to match the Russian achievement on a more modest scale using its over-hyped Vanguard missile (trying to launch a measly 2 kg satellite), but failed in front of the world's media when the missile exploded after ascending just 2 metres. These events gave more credibility to the "missile gap" fear, but secret American U2 aircraft reconnaissance over the USSR did not substantiate fears of a large number of missile silos.

America however kept up its U2 aircraft reconnaissance, which did, of course, end up discovering a real missile threat in Cuba in October 1962. Following Kennedy's failure with the Bay of Pigs invasion of Cuba in 1961, Premier Khrushchev placed 42 SS-4 IRBMs (loaded with megaton thermonuclear warheads), 9 Frog tactical nuclear missiles, and 43,000 Soviet servicemen in Cuba by October 1962. Their anti-aircraft capabilities were demonstrated when they shot down and killed Major Rudolf Anderson, who was on a U2 reconnaissance flight over Cuba on 27 October 1962. The time factor is extremely important here, because it increases the chance of a surprise attack succeeding. Bombers take 15 hours to travel from Russia to America, but an ICBM takes only 25-40 minutes, while it takes under 7 minutes for an IRBM launched in Cuba to detonate its warhead in America! Kennedy responded with a quarantine of Cuba backed up by 90 American ships, 50 nuclear-armed B-52 bombers, and 136 liquid-fuelled, ground-launched missiles which were prepared for use in the event of the launch of an IRBM from Cuba.

Kahn's Vietnam War criticisms in his January 1968 Foreword to the Paperback Edition of On Escalation

Kahn writes on pages xiii-xiv of the January 1968 new foreword that his 1965 edition's "escalation ladder" (discussed in the earlier blog post linked here) was not designed to be a template for "predicting infallibly that this or that event will happen, but only in describing a range of possibilities ... It is quite clear, however, that all the choices noted on the ladder actually can exist and might even be adopted by one side or the other in an escalatory confrontation. ... Also, it is perhaps important to recognise, even before the start of a severe crisis, the possible opportunities for bargaining, coercion, crisis abatement, and intra-war deterrence which might occur. ... History is all too full of crises escalating into major wars which might have been avoided if one or several of the participants had not foolishly foreclosed their options at such early dates. It is hard to overstate the importance of understanding the range of possibilities in advance of an actual crisis as it may be too late to work out and implement many of these options during an actual crisis or war. Indeed, President Kennedy made the point that if he had not had six days between the confirmation of the presence of Soviet missiles in Cuba and the disclosure to the Soviets and the world that we had this knowledge and intended to act, he could not have worked out the blockade tactics which worked with reasonable success in the Cuban Missile Crisis."

On pages xiv-xvi of the January 1968 Foreword, Kahn discusses the failure of escalation theory in the Vietnam War, where he writes:

It is clear that no theory can guarantee improved performance in a competitive situation, particularly if the opponent is also using a good theory - perhaps the same one. It is also important to understand that if one nation tries to use the threat of escalation to coerce an opponent, it probably will be more effective in exerting psychological and political pressure if it does not seem to depend too explicitly on any specific 'escalation theory'. Indeed, it probably is a serious error to look like 'one has read a book'.

... an example of this kind of mistake can be seen in the American escalation in Vietnam which has given the appearance that United States' decision-makers are following an easily fathomed recipe [Ronald Reagan made this very point to Robert Scheer about LBJ's Vietnam policies, quoted by Scheer in his book With Enough Shovels; Reagan said LBJ bent over backwards to promise the Vietcong and its communist arms suppliers that he wouldn't ever even think about using nuclear weapons, a promise that was a strong boost to Vietcong morale, like Truman's similar "secret" promise during the Korean War to British PM Attlee, which leaked through the British Foreign Office spy ring to Moscow and thence to the North Koreans, protracting the war until Eisenhower reversed the policy, bringing about a truce by taking away the promise of continued security]. In particular, the following characteristics of the escalation have had contraproductive aspects:

... No moves have been made which threaten the continued existence of the Hanoi [North Vietnam, Vietcong base] regime. In fact, the United States' decision-makers have gone to some pains to make explicit that this is not its intention. ...

... It seems likely that North Vietnam can have the bombing stopped almost at will, either by agreeing to a corresponding de-escalation on its part - or perhaps just by indicating a willingness to start extended negotiations. ...

... The very gradualness of the escalation not only does not provide any salient pressure point for Hanoi to give in, but probably increases Hanoi's self-estimated threshold of what they can bear by showing them by actual but gradual experience how much they can take, and by making clear to all - friends, neutrals, and opponents - that their collapse, if any, would be due to a general failure of will and not the specific result of a given attack or fear of passing some point of no return.

... this seemingly super-conscious, super-controlled use of escalatory tactics probably has been a serious source of weakness ... it is important to realize that the tactics used have entailed important political and perhaps moral costs to the U.S. and not as great pressures on North Vietnam and its allies to compromise as less gradual or less apparently controlled tactics might have had. ...


In his footnote on page xvi, Kahn suggests that strategic war-gaming exercises by people in "high office" were a possible cause of this too-gradual escalation in Vietnam: "If the game is made sufficiently dramatic for the individuals involved to be concerned at making awesome if simulated decisions, there is an almost overwhelming tendency for American players to inch up on the scale of violence rather than to jump to a high level."

Kahn finishes the January 1968 Foreword with a discussion of the "self-fulfilling prophecy" objection to "thinking about the unthinkable". This was one of the main "criticisms" from those who object to civil defense, and to planning against disasters generally. If you do a first aid course, they argue, you're setting yourself up to be careless and have an accident deliberately, so that you can find a quick use for your newfound skills. Kahn states on page xvii that he "once rejected this kind of argument as being analogous to the kind of superstition that plagues primitive tribesmen ..."

[To be continued when time permits. Kahn does go on to write extensively about the need to think about, plan for, and be concerned with the realistic facts concerning terrible possibilities. He makes the point that the "anti-war" propagandists who claim that thinking about war causes problems do just that themselves: the only difference is that they think about terrible things in a shoddy, non-fact-based way. You don't have a choice about thinking about, and planning for, terrible things. You must do so, or you'll risk making things worse. Once you accept that, you then have a choice of going down the path of the anti-war league who were the appeasers of Hitler before WWII - exaggerating war effects and underplaying civil defense in the belief that war is the only danger and that "peace at any cost" is vital. E.g., collaborating with genocidal racists is - to many - a higher "price" than casualties in a war. So you have a choice between believing lies, or finding out the solid facts. Kahn concedes that there are examples of unlikely wars which are best not planned for, e.g. wars between allies like Britain and America or Canada and America. But where there is a real conflict and a real danger of disaster, then you must face the facts.]

Saturday, March 12, 2011

Incidents at Japan’s Fukushima Daiichi nuclear reactor Units 1, 2, and 3, and fire at Unit 4, after the 11 March 2011 Sendai earthquake and tsunami

"Results provided recently by the Japanese authorities range up to 55,000 Bq per kg of I-131 in samples of Spinach taken in the Ibaraki Prefecture. These high values are significantly above Japanese limits for restricting food consumption (i.e. 2,000 Bq/kg)."

- 21 March 2011 Fukushima Nuclear Accident Update by IAEA [conversion factor: 1 pCi = 0.037 Bq, therefore 1 Bq = 27 pCi].
On 21 March 2011, World Nuclear News reported: "In the town of Kawamata, three milk samples showed above 300 becquerels per kilogram [1 kg of milk ~ 1 litre] in iodine."



Above: in 1962, Salt Lake City in Utah was downwind from Nevada test site nuclear explosions Sedan (6 July, 104 kilotons, optimal depth for cratering, 12,000 ft high cloud), and the near-surface bursts Johnny Boy (11 July, 0.5 kilotons, 11,000 ft high cloud) and Small Boy (14 July, 1.65 kilotons, 15,000 ft high cloud).

Cows eat iodine-131 that lands on grass, and 3% of the ingested iodine is passed on to their milk. When milk is consumed, 25% of its iodine enters your thyroid gland. A single ingestion of 74,000 pCi (2,700 Bq) of I-131 by a child with a small (2 gram) thyroid gives a thyroid dose of 1 cGy or 1 rad. The same 1 cGy dose is delivered to the 2 gram child's thyroid by consuming milk from an explosion (reactor or bomb) whose iodine-131 content peaks at 5,600 pCi/litre or 210 Bq/litre (source: UCRL-7716). As the graph above shows, the peak iodine-131 content of Salt Lake City milk was about 2,000 pCi/litre (74 Bq/litre) on 25 July 1962. Cresson H. Kearny pointed out that the maximum measured radioactive contamination of milk in the United States by iodine-131 from the 1986 Chernobyl nuclear disaster was just 560 pCi/litre of milk (produced by cows grazing on pasture in Washington), compared to 900 pCi/litre of iodine-131 in milk at Oak Ridge, Tennessee, after the 300-kiloton Chinese nuclear test explosion of December 28, 1968. With its 8 days half-life, the iodine-131 doesn't last very long. Countermeasures are discussed below, after reactor safety facts.
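The dose arithmetic above can be sketched numerically. This is a rough, hedged illustration that simply scales the UCRL-7716 figure (74,000 pCi ingested gives about 1 rad to a 2 gram child thyroid) linearly; the linear scaling and the 300 Bq/litre example (taken from the Kawamata milk report quoted earlier) are assumptions for illustration only:

```python
# Back-of-envelope scaling of the I-131 milk-dose figures quoted above.
# UCRL-7716: a single ingestion of 74,000 pCi of I-131 gives ~1 rad to a
# small 2-gram child thyroid. Linear scaling is an assumption here.
PCI_PER_BQ = 27.0  # 1 pCi = 0.037 Bq, so 1 Bq = 27 pCi

def bq_to_pci(bq):
    """Convert activity in becquerels to picocuries."""
    return bq * PCI_PER_BQ

def child_thyroid_dose_rad(ingested_pci):
    """Thyroid dose (rad) to a 2 g child thyroid, scaled linearly
    from the 74,000 pCi -> ~1 rad figure quoted above."""
    return ingested_pci / 74_000.0

# Example: one litre of milk at the ~300 Bq/litre Kawamata level
intake = bq_to_pci(300.0)              # 8,100 pCi in one litre
dose = child_thyroid_dose_rad(intake)  # ~0.11 rad from that single litre
print(round(dose, 2))
```

The point of the sketch is only that a single litre at the reported restriction-triggering level sits well below the 1 rad single-ingestion benchmark; cumulative consumption over the 8-day half-life would of course raise the total.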



Above: Fukushima Daiichi reactor designs from 1967-71, showing the deliberately frangible roof which blew off just as designed in the hydrogen gas explosions at Units 1 (12 March) and 3 (14 March), preventing any damage to the reactor cores after gas venting. The two-hour fire at the spent fuel storage pond beside Unit 4 on 15 March was not caused by the nuclear fuel, according to a statement by Japanese Prime Minister Naoto Kan’s spokesperson Noriyuki Shikata, and the storage pond was quickly and safely refilled with water, simply using a bucket carried under a Chinook helicopter:



 Unit 1: Roof blown off by hydrogen gas explosion on 12 March
 Unit 2: Torus under reactor exploded on 15 March
 Unit 3: Roof blown off by hydrogen gas explosion on 14 March
 Unit 4: Two-hour fire at the spent fuel storage pond on 15 March and another fire on 16 March

The faulty pressure relief valve and subsequent loss of coolant at Fukushima Daiichi Unit 2 led to a containment pressure increase to 700 kPa on 14 March, followed by the explosive rupture of the large torus steam-suppression chamber below the reactor pressure vessel on 15 March: “The pressure in the pool was seen to decrease from three atmospheres to one atmosphere after the noise, suggesting possible damage. Radiation levels on the edge of the plant compound briefly spiked at 8217 microsieverts per hour but later fell to about a third that. ... Japanese authorities told the International Atomic Energy Agency (IAEA) that radiation levels at the plant site between units 3 and 4 reached a peak of some 400 millisieverts per hour ... Later readings were 11.9 millisieverts per hour, followed six hours later by 0.6 millisieverts ...”

This fast drop in radiation levels proves that the sources of the radiation are fast-decaying, dispersing gas and vapour (not a core explosion), not slow-decaying solid fuel particle deposits! The Unit 3 explosion on 14 March resulted in a peak site dose rate of 3.13 mSv/hour (~313 mR/hour) at 9:37 AM (JST), falling rapidly to just 0.326 mSv/hour (~32.6 mR/hour) at 10:35 AM, less than an hour later, and to just 0.231 mSv/hour (~23.1 mR/hour) at 2:30 PM, five hours after the initial peak. In addition, there was a fire on 15 March at the spent fuel cooling pond adjacent to Unit 4, but Japanese Prime Minister Naoto Kan’s spokesperson Noriyuki Shikata said that “we have found out the fuel is not causing the fire.”

Contrary to the BBC and other scare-mongering media, there has been no fuel meltdown, just a failure of the tops of the zirconium alloy “zircaloy” capsules at 1200 °C when the coolant water level fell below the tops of the fuel rods. This caused the zirconium to be oxidized by steam, releasing hydrogen from the water, which increased the pressure and then exploded after being vented into the outer building as designed (diagram above). The uranium oxide ceramic fuel itself doesn’t melt below 2800 °C. The control rods were inserted to stop fission when the earthquake occurred on 11 March. Since then, the only heat source in the reactor has been the radioactive “decay heat” from fission products, not energy released by nuclear fission! The reactor cores have been under control and safely contained since 11 March.





Above: Japanese Prime Minister Naoto Kan’s spokesperson Noriyuki Shikata said that “we have found out the fuel is not causing the fire.” Helicopters were reportedly used to refill spent fuel storage pools, to cool the reactors and to fight the fire (by dropping water on it from a safe height), to avoid danger to fire-fighters. The anti-nuclear biased BBC is having a field day, reporting the fire without mentioning that it is not a nuclear fuel fire, and presenting the event as if it is a repeat of the Chernobyl accident of 1986, when a dangerous RBMK reactor without a steel containment vessel for the core went supercritical and blew up after having the control rods completely removed while the safety system was turned off. In all the Japanese reactors affected, the control rods were fully inserted and nuclear fission stopped at the time of the earthquake on 11 March!

Safety of nuclear reactor cores from earthquakes and explosion blasts: the facts

On the 16 March BBC1 One Show, the BBC used the 1957 Windscale nuclear reactor fire in Cumbria to “explain” the dangers of the Japanese reactors. They omitted to mention that Windscale was an air-cooled reactor with a flammable graphite moderator and no steel containment vessel for the reactor core! The Japanese reactors use water as the coolant, which suppresses fire (unlike air). Also, the chief danger after the Windscale fire wasn’t fission products, but inhalation of polonium-210, which was being made in the reactor for the long-obsolete neutron initiators of old-fashioned nuclear bombs (modern nuclear bombs use miniature particle accelerator “zippers” as neutron sources). There is no polonium-210 in the Japanese reactors, which are used for energy production. The situation is therefore entirely different, and by failing to point out these differences the BBC was guilty of deliberate deception of its viewers. Modern nuclear reactors are entirely different.

Extensive research was done by the West (but not by the USSR) to ensure the safety of steel reactor core vessels when a hydrogen gas or other explosion blast wave hits them. This is precisely why the Japanese reactor outer buildings had frangible roofs, designed to blow off safely in an explosion rather than to collapse and cause damage. Fear-mongering American anti-nuclear politicians during the Cold War used to assume falsely that an explosion near a nuclear reactor would cause 100% of the core inventory to escape. That didn’t even happen in the explosion at Chernobyl (at full power and with the safety systems turned off), where the burning core was completely exposed to the atmosphere but only 3.2% of the plutonium escaped.

Dr Conrad V. Chester of Oak Ridge National Laboratory discussed the resistance of nuclear reactors to explosive blasts in Jack C. Greene & Daniel J. Strom (Editors, Health Physics Society), Would the Insects Inherit the Earth and Other Subjects of Concern to Those Who Worry About Nuclear War, Pergamon Press, London, 1988, pages 12-13.

Dr Chester there evaluates a 1-GW nuclear reactor (which produces 3 kg of plutonium daily).
The thick-walled reinforced concrete containment buildings resist peak horizontal ground shock (or earthquake) accelerations of 0.25g and much higher peak overpressures than ordinary buildings, typically 60 psi or 410 kPa for moderate damage (cracking but not total failure). The auxiliary diesel generators and control rooms are designed to withstand 25 psi or 170 kPa peak overpressure, but the steel pressure vessel containing the reactor core needs an overpressure impulse of at least 200 psi-seconds or 1.4 MPa-seconds to fracture it and cause a Chernobyl-type release. This is a really massive pressure impulse, which occurs only within a distance of about 0.7W^(2/3) metres of a W-kiloton TNT equivalent nuclear explosion (see diagram below). A hydrogen gas explosion in the reactor building above the steel core vessel can't produce such a powerful blast overpressure impulse, because it is not as powerful as a kiloton of TNT.

Even a direct hit by terrorists crashing a commercial aircraft into the outer building would not produce the pressures needed to rupture the steel reactor pressure vessel inside! As Dr Chester explained in 1988, modern nuclear reactors automatically shut themselves down in an accident when electric power is lost (gravity causes the control rods to fall into the core, shutting down fission) and they have efficient heat sinks with natural convection to remove decay heat without needing any external power, unlike the ancient 1967-71 designs in Japan. Dr Chester’s extensive published research includes:

 “Civil Defense Implications of the Pressurized Water Reactor in a Thermonuclear Target Area,” Nuclear Applications and Technology, Vol. 9, 1970, pages 786-95

 “Civil Defense Implications of a LMFBR in a Thermonuclear Target Area,” Nuclear Technology, Vol. 21, 1974, pages 190-200

 “Civil Defense Implications of the U.S. Nuclear Power Industry During a Large Nuclear War in the Year 2000,” Nuclear Technology, Vol. 31, 1976, pages 326-38



Above: the overpressure impulse from an air burst of 1 kiloton (1,000 tons) of TNT equivalent is only 10 kPa-seconds or 1.4 psi-seconds at 100 metres, and varies inversely with distance. Hence the 200 psi-seconds or 1.4 MPa-seconds of overpressure impulse (which Dr Conrad V. Chester of Oak Ridge National Laboratory calculates is needed to rupture the steel pressure vessel containing a nuclear reactor core) requires the distance between the steel reactor vessel and the 1,000 tons of TNT explosion to be just 0.7 metre (70 cm). This blast overpressure impulse can't arise from a hydrogen gas explosion; there simply isn’t enough energy available! Extensive nuclear test data from the 1950s verify the resistance of steel to nearby nuclear weapon detonations, with important implications for proving the survivability of nuclear reactor pressure vessels in any disaster, including a nuclear war. (Graph credit: John A. Northrop, Handbook of Nuclear Weapon Effects: Calculational Tools Abstracted from DSWA’s Effects Manual One (EM-1), Defense Special Weapons Agency, 1996.)
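The impulse-distance numbers in the caption above can be checked with a short sketch. The 1/distance fall-off and the 10 kPa-s at 100 m datum are from the graph caption; the W^(2/3) yield scaling is inferred from Chester's 0.7W^(2/3) metre rupture-distance figure, so treat the yield-scaling term as an assumption:

```python
# Sketch of the impulse-distance scaling quoted above: for a 1 kt air
# burst the overpressure impulse is ~10 kPa-s at 100 m, varying inversely
# with distance; the W**(2/3) yield scaling is an assumption taken from
# the 0.7*W**(2/3) metre rupture-distance figure.
RUPTURE_IMPULSE_KPA_S = 1400.0  # ~200 psi-s: Chester's vessel-rupture threshold

def impulse_kpa_s(distance_m, yield_kt=1.0):
    """Overpressure impulse at distance_m metres from a yield_kt air burst."""
    return 10.0 * (100.0 / distance_m) * yield_kt ** (2.0 / 3.0)

def rupture_distance_m(yield_kt):
    """Distance within which the 1.4 MPa-s rupture impulse is exceeded."""
    return 10.0 * 100.0 * yield_kt ** (2.0 / 3.0) / RUPTURE_IMPULSE_KPA_S

print(round(rupture_distance_m(1.0), 2))  # ~0.71 m: the "0.7 metre" figure
```

Solving 10 kPa-s × (100 m / D) = 1,400 kPa-s gives D ≈ 0.71 m for 1 kiloton, reproducing the caption's conclusion that a sub-kiloton hydrogen explosion metres away cannot deliver the rupture impulse.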

The ablation tests at the 23 kt Teapot-Met nuclear explosion in Nevada on 15 April 1955 by J. E. Kester and R. B. Ferguson (Operation Teapot, Project 5.4, Evaluation of Fireball Lethality Using Basic Missile Structures, WT-1134, AD0340137) proved that at just 80 feet only the outer 0.4 inch of steel balls was ablated by the fireball. The steel vessels of nuclear reactors are at least 7.5 inches thick! The error in the popular myth that everything is totally vaporized in an explosion fireball is due to the fact that the cooling rate of the fireball is so great that there is literally not enough time for the heat to penetrate more than a thin surface layer before the temperature drops below the melting point of steel. Good heat conductors like steel are also protected by surface ablation.

At Chernobyl, 6.7 tons of radioactive debris blew off to great altitudes from a burning exposed reactor core, creating solid fallout particles like a nuclear surface burst explosion, which included the release of 75% of the reactor’s inventory of xenon-135 gas, 20% of the iodine-131 vapour, 15% of the tellurium-132, 12% of the caesium, 5.6% of the barium-140, 4% of the strontium, and 3.2% of Zr-95, plutonium and other refractory (high melting point) nuclides. In the Japanese reactor explosions, only a tiny quantity of the gas xenon and a trace of iodine vapour have escaped with the vented steam released from the steel reactor vessel. All of the debris seen in the explosions is non-radioactive frangible roof debris!







Above: we have long known all the facts about nuclear reactor explosion fallout, because a Kiwi nuclear reactor was deliberately blown up at the Nevada test site by Los Alamos and the U.S. Naval Radiological Defense Laboratory on 12 January 1965, to determine the nature of reactor explosion fallout. It detonated with a nuclear explosion yield equivalent to 2.1 tons of TNT, and reached a maximum temperature of 4,250 K, which vaporised 5% of the reactor core fuel rods, of which 68% was dispersed as fallout with a specific activity of 10^15 fissions/gram for refractory nuclides like Zr-95 (data: J. R. Lai and E. C. Freiling, Correlation of Radionuclide Fractionation in Debris from a Transient Nuclear Test, pages 337-51 of Radionuclides in the Environment, American Chemical Society, 1970). As that experiment’s project officer, Dr Edward C. Freiling, observed in the 1970 book Radionuclides in the Environment, the physics of fallout fractionation in a nuclear reactor explosion is explained by Dr Carl F. Miller’s 1963 Fallout and Radiological Countermeasures calculations of fission product condensation. Most of the gases like xenon-135 and vapours with low boiling points like iodine-131 remain in the air and are quickly dispersed over large distances as a gas or on very small particles, unlike solids like plutonium which concentrate in large particles and don’t travel so far. In the Japanese reactors, only small amounts of gases were vented from the cores with steam, unlike Chernobyl. But the BBC doesn’t care:

”IT'S CHERNOBYL ALL OVER AGAIN!
“>> Tuesday, March 15, 2011

“The BBC have gone nuclear over...erm, the nuclear problems at Fukushima. Today has been busy constructing an agenda that the Japanese Government ‘lies’ (according to Roger Harradin) and is ‘blasé’ (according to James Naughtie) about nuclear problems. Undoubtedly the crisis at Fukushima has gotten worse and that is fair comment but the BBC seems determined to extend this into some sort of general attack on nuclear energy. I have to say that one's natural sympathy with the Japanese victims of the tsunami is now being eclipsed by anger about the BBC's overt manipulation of the Nuclear power plant issue. Not reporting - editorialising and always following a clear agenda.

Posted by David Vance”
- http://biased-bbc.blogspot.com/2011/03/its-chernobyl-all-over-again.html

The Fukushima plant was only designed to withstand an 8.2 magnitude earthquake, but it withstood a 9.0 earthquake and then complete inundation by a 25-foot tsunami. It is an old design which omits the natural convection cooling for decay heat that is a feature of modern reactor designs. So far over 10,000 people have been killed in the earthquake and tsunami, and only one person has been killed at the nuclear power station (the crane operator, when the crane collapsed). Compare that to the 290,000 annual deaths from carcinogenic soot pollution, or to the 73 deaths so far due to “safe” installed wind turbines (neglecting deaths during transport of parts, etc.), where 201 blade failures, 154 turbine fires, 108 structural failures, and 82 cases of environmental damage have occurred: “Pieces of blade are documented as travelling up to 1300 meters. In Germany, blade pieces have gone through the roofs and walls of nearby buildings.” (Source: Caithness Windfarm Information Forum, 31 December 2010. This data ignores all the long-term effects of wind farms, e.g. the risks to people who lose limbs or suffer brain damage when blades hit them on the head. Why take the risk of windfarms, when the casualty rates from a rival BBC-hated power source are so trivial by comparison?)

The Sendai earthquake moment magnitude of 9.0 is equivalent to a total energy of about 31,800 megatons of TNT, of which 1.5% or 477 megatons of TNT equivalent appeared as explosion-equivalent “surface waves” composed of ground shock motion and tsunami water waves, similar to the effects from ground-coupled shock energy in a surface burst explosion. The peak ground acceleration from surface waves is roughly 0.00014 E^(3/4)/D^2 g, where E is the yield in megatons and D is the distance from the epicenter in km (source: Robert U. Ayres's Hudson Institute report HI-518-RR, AD632279, 1965, Appendix E, page E-3).
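That scaling law is easy to sanity-check numerically. Here is a minimal Python sketch, with the constant and exponents taken straight from Ayres's formula as quoted above; the function name and the 100 km example distance are my own illustrative choices:

```python
def peak_ground_acceleration(E_megatons, D_km):
    """Ayres (HI-518-RR, Appendix E) scaling law for peak ground
    acceleration in g from explosion-equivalent surface-wave energy:
    a = 0.00014 * E^(3/4) / D^2, with E in megatons of TNT and
    D the distance from the epicenter in km."""
    return 0.00014 * E_megatons ** 0.75 / D_km ** 2

# Figures from the text: 31,800 Mt total energy, of which 1.5%
# appears as surface waves (ground shock plus tsunami).
total_Mt = 31_800
surface_wave_Mt = 0.015 * total_Mt            # = 477 Mt
a_100km = peak_ground_acceleration(surface_wave_Mt, 100)
```

This encodes only the quoted scaling law; it says nothing about frequency content or local soil amplification.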



Above: the explosion on 12 March 2011 of the outer concrete containment building of Japan's Fukushima Dai-ichi nuclear reactor Number 1, the oldest of 11 reactors on the shoreline immediately exposed to the 9.0 moment magnitude earthquake and 7 metre tsunami which inundated emergency generators required for cooling the radioactive decay heat after shutdown. The reactor was designed in 1971 and is 170 miles northeast of Tokyo. Unlike the latest designs of nuclear reactors, this old reactor design does not allow natural convection cooling to dissipate the radioactive decay heat after shutdown, but requires a powered pump to keep the coolant flowing.



Above: Tuesday 15 March 2011 update on effects from the Japanese Fukushima Dai-ichi nuclear reactor explosions. Note that all of the explosions were caused by hydrogen gas, produced when steam from the overheating coolant water reacted with the hot zirconium cladding of the fuel rods (the hydrogen was then vented for safety into the outer buildings or external pipes, where it exploded with no reactor core vessel damage, and minimal radioactive xenon-135 gas contamination). Note that the overheating was not caused by nuclear fission but was due to radioactive decay heat (which decays quickly with time) after 10 nuclear reactors were shut down following the earthquake. This issue was unique to those old reactor designs, because the Japanese 1967-1971 design required externally-powered coolant circulation after shutdown, unlike modern reactors which can dissipate the decay heat energy by natural convective cooling without any need for externally powered coolant pumps that are vulnerable to tsunami inundation!

Fukushima Daiichi Unit 1 suffered a hydrogen gas explosion in the outer concrete building after venting of gas from the 15 cm thick stainless steel-protected reactor core vessel on 12 March, and Unit 3 underwent a similar hydrogen gas explosion on 14 March. Both explosions blew the roofs off the buildings, but the blasts did not damage the steel reactor core vessels. When the gas from the cores was vented, some radioactive vapours like iodine-135, and particularly its more volatile “permanent gas” decay product xenon-135, escaped into the outer building from overheated zirconium fuel capsules (but no refractory nuclides like plutonium, which are solids, not gases). Iodine-135 vapour has a half-life of just 7 hours, and the gas xenon-135 has a half-life of just 9.2 hours, so even neglecting the fall in concentration due to atmospheric dispersion, less than 3% remains 48 hours later. Iodine-131 is also emitted, but gives lower initial dose rates due to its greater distance from the high mass number peak in the M-shaped fission product abundance distribution curve, and its lower specific activity: the longer the half-life, the longer the time between decays of the radioactive atoms, and so the lower the dose rate.

Ryohei Shiomi, an official of Japan's Nuclear and Industrial Safety Agency, stated: “Units 1 and 3 are at least somewhat stabilised for the time being. Unit 2 now requires all our effort and attention.” Japan has evacuated 200,000 people from a 20 km (12-mile) radius exclusion zone around Fukushima Daiichi, and has warned people up to 30 km (19 miles) away to stay indoors and to wear a wet cloth over the face to absorb iodine vapour if they venture outdoors. They have 230,000 units of stable potassium iodide tablets ready for distribution in the highly unlikely event that a significant iodine-131 release occurs, to block iodine-131 uptake by the thyroid gland and therefore prevent the thyroid cancers that were reported after the Chernobyl accident in 1986.

In depth

The decay heat in the steel pressure vessel converted the stagnant coolant water into high pressure steam, which reacted with the overheated zirconium cladding of the fuel rods (the cladding begins to fail at around 1,200 C) to produce hydrogen gas; this mixture had to be vented into the concrete containment building surrounding the 15 cm thick steel reactor core vessel. The hydrogen and steam mixture in the outer building then exploded, blowing the concrete roof off and allegedly venting a small quantity of gaseous fission products, mainly xenon-135, from the damaged fuel rods, which had been released into the outer building along with the steam and hydrogen gas during the pressure venting. The strong steel reactor core vessel remained intact.

After the 12 March Unit 1 explosion, the radioactivity level inside the concrete containment building reached 100 microSieverts per hour or 10 mR/hour (1000 times natural background radioactivity). Outside the building, it reached 0.8 microSieverts per hour or 0.08 mR/hr (8 times background). Chief Cabinet Secretary Yukio Edano said radiation around the plant had fallen after the time of the blast, confirming that the steel fuel vessel is intact. Sea water is now being used to cool down the reactor core, which is also the case at reactor number 3.



The episode proves the safety of nuclear power in the worst case scenario: a 9.0 moment magnitude earthquake followed by a massive tsunami which destroyed the backup power for coolant circulation, yet the 11 reactors remained safe, with only a minor venting of short-lived volatile nuclides and a maximum radiation level outside the oldest reactor buildings of 8 times background, which rapidly decayed and dispersed. The volatile radioactive fission product gas xenon-135 has a 9.2 hour half life, so under 3% remains after 48 hours (5.2 half lives), and in addition it is quickly dispersed and diluted to safe levels. The initial maximum dose rate outside the building of 8 times normal background would be down to just 0.2 times background after 48 hours from xenon-135 decay alone, neglecting the dispersion effect. This extreme proof test of the safety of nuclear power, even under an immense earthquake and tsunami, is one piece of good news to come from the scene of natural devastation in Japan. Lucky they didn’t have a wind farm in the devastated area, or the falling turbine blades would have caused a real additional danger, while solar cells would have been swept along as hazardous debris in the tsunami!



Above: it's mainly gaseous xenon-135, with a half life of 9.2 hours, that has escaped, and less than 3% of that remains after 48 hours. Also, it mixes rapidly with the air and gets diluted, too. Notice that background is 0.01 mR/hour, so the peak dose rate of 1000 times higher is 10 mR/hour. If this decays with a 9.2 hour half life, the total dose from exponential decay is simply the initial peak dose rate times the mean life (for exponential decay the mean life is the half life divided by ln 2, i.e. 1.44 half lives). Hence total dose = 10 x 1.44 x 9.2 = 132 milliRoentgens. This is about the dose you get naturally over a year. No significant plutonium will have vented, because it's a solid, not a gas. You only need to take potassium iodide tablets if the total predicted dose to the thyroid exceeds 20 Roentgens. The amount of iodine-131 released will be far less than the xenon-135 because its mass number is further from the peak on the M-shaped fission product abundance curve (scroll down for this fission product abundance curve), it has a higher boiling point than gases like xenon, and its longer half life (8 days) reduces its specific activity per atom. Other factors being similar, specific activity for a given number of radioactive atoms is inversely proportional to the half life: the longer the half life, the lower the dose rate, because the same amount of radiation is given out more slowly. If it's longer than a human life span, then you are spared all the radiation that is not released during your lifetime! If they had wind turbines or solar cells in Japan with the same power output, they would have caused a serious hazard from flying panel debris, wind blades, etc., during the earthquake and tsunami.
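The back-of-envelope dose integral in the caption above (peak dose rate times the mean life, for pure exponential decay) can be checked in a few lines of Python; the values are the ones quoted in the text, and the function names are mine:

```python
import math

def integrated_dose(peak_dose_rate, half_life_hours):
    """Total dose from an exponentially decaying source, integrated
    from the peak to infinity: dose = peak rate x mean life, where
    mean life = half-life / ln 2 = 1.44 half-lives."""
    mean_life = half_life_hours / math.log(2)
    return peak_dose_rate * mean_life

def fraction_remaining(hours, half_life_hours):
    """Fraction of a nuclide's activity left after a given time."""
    return 0.5 ** (hours / half_life_hours)

# Xenon-135: peak dose rate 10 mR/hour, half-life 9.2 hours.
dose_mR = integrated_dose(10, 9.2)        # ~132-133 mR, as in the text
left_48h = fraction_remaining(48, 9.2)    # under 3% remains at 48 hours
```

This reproduces the ~132 mR total dose and the under-3%-after-48-hours figures used above; atmospheric dilution would reduce the real dose further.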

Since the 1950s, it has been known that iodine-131 exposure is one of the easiest threats of fallout to take countermeasures against:

1. Don't drink fresh milk for a few weeks if the cattle are eating pasture grass contaminated with fresh fallout. (You can still use the milk to make cheese to be eaten after the 8-day half life of the iodine-131 has ensured its decay to insignificance.)

2. Or, continue using the milk so long as you can put the cattle indoors on winter feed until the iodine-131 (which has a mere 8 days half-life) has decayed.

3. A third option - which is not sensible unless the thyroid dose is expected to exceed 25 R - is to administer daily 130-milligram potassium iodide tablets to everyone drinking contaminated milk within a month of detonation; this blocks iodine-131 uptake by saturating the thyroid with stable iodine. But the evidence is that the risk of getting iodine-131 induced thyroid cancer from long-range fallout is so low that, in general, the low risk of side effects from potassium iodide is similar to or greater than the radiation risk. The U.S. Food and Drug Administration evaluated the risks of administering potassium iodide for thyroid blocking under emergency conditions:

'FDA guidance states that risks of side effects, such as allergic reactions, from the short-term use of relatively low doses of potassium iodide for thyroid blocking in a radiation emergency, are outweighed by the risks of radioiodine-induced thyroid nodules or cancer, if the projected dose to the thyroid gland is 25 rems or greater.'
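The milk countermeasures above all exploit the same fact: iodine-131's 8-day half-life. A minimal sketch, assuming only that half-life (the 40-day holding period is an illustrative choice, roughly the cheese-maturing delay suggested above):

```python
def i131_fraction(days, half_life_days=8.0):
    """Fraction of iodine-131 activity remaining after a delay,
    e.g. holding milk as cheese, powder, or frozen before use."""
    return 0.5 ** (days / half_life_days)

# Holding contaminated milk products for 5 half-lives (40 days)
# leaves about 3% of the original iodine-131 activity:
after_40_days = i131_fraction(40)
```

So any processing route that delays consumption by a few weeks makes the iodine-131 problem essentially vanish.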




Above: the importance of the thyroid gland in concentrating iodine isotopes after inhalation by mice of 2-day old uranium fission product debris, in laboratory experiments done by Dr Stanton H. Cohn of the U.S. Naval Radiological Defense Laboratory, who was a member of the team that decontaminated the Marshallese islanders subjected to heavy fallout at Rongelap Atoll after the 1 March 1954 nuclear weapon test. In 1955, 1956 and 1959, he returned and determined the continuing nuclide contamination in plants and animals in the Marshall Islands, and he measured human body burdens with a whole body scintillation counter. Reference: U.S. Congressional hearings on Biological and Environmental Effects of Nuclear War, June 1959, page 482. Observing a dose threshold for the effects of plutonium dioxide inhalation on mice, at page 488 Dr Cohn states: “The smallest dose to the lung which produced malignant tumours in mice was reported as 115 rad [cGy], following administration of 0.003 microcuries of Pu-239 dioxide (PuO2), and 300 rads [cGy] after administration of 0.15 microcurie of Ru-106 dioxide (RuO2).” So the main problem is iodine, not plutonium!

It’s worth adding that plutonium in soil is strongly discriminated against by land plants and animals. If you have 100 Bq/gram of plutonium measured in dried topsoil samples, the plutonium uptake in plants ranges from a maximum of 0.016 Bq/gram in dried broccoli (concentration factor 1.6 x 10^-4) down to just 0.0029 Bq/gram in dried corn (concentration factor 2.9 x 10^-5). So the plant discrimination against plutonium gives a protection factor of 6,300-34,000. In addition, when you eat plants containing plutonium, only 1 part in 10,000 is taken up from the gut and the rest is eliminated. So the combined plant and human discrimination against plutonium means the plutonium concentration in your body (Bq/gram) is between 63,000,000 and 340,000,000 times less than that in the soil!
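The combined discrimination arithmetic above multiplies out as follows (all figures are the ones quoted in this paragraph; the function name is my own):

```python
def body_to_soil_ratio(plant_cf, gut_fraction=1e-4):
    """Ratio of plutonium concentration (Bq/gram dry weight) in the
    human body to that in topsoil: the plant concentration factor
    times the gut uptake fraction (1 part in 10,000)."""
    return plant_cf * gut_fraction

# Concentration factors quoted above: 1.6e-4 (dried broccoli),
# 2.9e-5 (dried corn).
broccoli_pf = 1 / body_to_soil_ratio(1.6e-4)   # ~63 million-fold protection
corn_pf = 1 / body_to_soil_ratio(2.9e-5)       # ~340 million-fold protection
```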

The soil is naturally radioactive with alpha emitters anyway: earth's crust is composed of 4 parts per million uranium-238, and 12 parts per million thorium-232. Ingested uranium is more hazardous as a chemical heavy-metal poison to the kidneys than due to its radiation, since 2 micrograms uranium per gram of kidney is chemically toxic (the LD50). Americium-241 in household ionization smoke detectors (0.9 microcurie per smoke detector) emits 5.6 MeV alpha particles, compared to just 5.2 MeV alpha particles from plutonium-239. So household smoke detectors contain something emitting "deadlier" higher-energy alpha radiation than plutonium-239! In fact, it is very easy to stop alpha radiation, since it can't penetrate unbroken skin. Inhaled particles are removed from the lungs unless they are just the right size (1-5 microns) to get into the alveoli. Even then, they must be insoluble if they are to remain there for any time, or else they will be dissolved and eliminated from the body naturally.

Plutonium can be concentrated in the marine ecosystem, but not by a large enough factor to overcome the dispersion in the oceans and the insolubility of plutonium, and in fish it concentrates in the inedible parts not the muscle. This was proved in 1973 by a major radiological survey of Eniwetok Atoll, where 43 nuclear weapons were tested in the atmosphere, with a total yield of 30 megatons of TNT equivalent. Measurements of plutonium in over 800 fish from Eniwetok Lagoon proved that a fish diet for 30 years will produce a human liver and bone radiation dose of just 0.1 mSv from plutonium, insignificant compared to over 1 mSv/year from natural radiation!

Iodine-131 reached 2,050 pCi/litre in milk in Salt Lake City, Utah, on 25 July 1962, following surface burst nuclear weapon tests in the Nevada desert, and on 1 August 1962, the Utah State Health Department recommended (too late!) feeding dairy cattle stored winter feed indoors, to prevent them from ingesting fresh fallout deposited in grass. The total average iodine-131 intake for people consuming 1 litre of milk a day in Salt Lake City was 31,240 pCi in 1962. Page 38 of the article “Fallout and Countermeasures” by L. D. Hamilton in the September 1963 issue of the Bulletin of Atomic Scientists which reported these data, argued that because of the short 8 days half-life of iodine-131, milk should be used for making dried milk powder and cheese, or simply frozen for a few weeks for the radiation to decay:

“In any event, milk collected during a period of high activity could be safely used for processed milk products and need not be thrown away. ... An alternative but more expensive procedure would be to feed cows stored fodder until the iodine-131 activity in the pasturage declined to safe levels.”


What about caesium-137 (30 years half-life) and strontium-90 (29 years half life)?

First, caesium-137 does not have a 30-year half-life when inside people: half of it is eliminated from the human body after only about 70 days! Caesium is chemically similar to potassium, and so is eliminated naturally from the body quite quickly, instead of being concentrated and building up in a cumulative manner.

As proved recently at Bikini Atoll, adding potassium fertiliser to lime-rich soil (like coral sand, basically calcium carbonate) effectively blocks most of the uptake of caesium-137. Adding potassium chloride to the coral soil of Bikini Atoll, scene of 23 atmospheric nuclear weapons tests totalling 77 megatons of TNT equivalent in the 1950s, reduced the caesium-137 in coconuts by a factor of 20, from 3,700 Bq/kg to just 185 Bq/kg. Extensive detailed research on such brilliant “Fallout and Radiological Countermeasures” was instituted at the U.S. Naval Radiological Defense Laboratory in the 1950s by Dr Carl F. Miller, including simple, quick, cheap and highly efficient decontamination procedures for cities and agricultural areas!


In the 1986 Chernobyl nuclear accident, caesium-137 was the major long-lived contaminant, but the concentration decreased rapidly with downwind distance: the deposition of Cs-137 was 250,000/D^1.67 GBq/km^2 at D kilometres downwind (for D beyond 10 km) (source: A. Aarkrog, "The radiological impact of Chernobyl compared with that from nuclear weapons fallout", Journal of Environmental Radioactivity, vol. 6, 1988, pp. 151-62).
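Aarkrog's distance scaling can be sketched numerically (the formula and its 10 km validity limit are as quoted above; the example distances are illustrative):

```python
def cs137_deposition(D_km):
    """Chernobyl Cs-137 deposition in GBq/km^2 at D km downwind,
    per Aarkrog (1988) as quoted above: 250,000 / D^1.67.
    The fit is quoted only for distances beyond 10 km."""
    if D_km <= 10:
        raise ValueError("fit is quoted only for D beyond 10 km")
    return 250_000 / D_km ** 1.67

# Deposition falls steeply with distance:
at_30km = cs137_deposition(30)     # roughly 850 GBq/km^2
at_300km = cs137_deposition(300)   # roughly 18 GBq/km^2
```

A tenfold increase in distance cuts the deposition by a factor of about 47 (10^1.67).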

Strontium-90 was hyped widely as a danger in the 1950s, on the false assumption that, unlike short-lived iodine-131 and quickly-eliminated caesium-137, it would build up in the bones like calcium. However, as Dr Teller and Latter explained to the public in their 1958 book Our Nuclear Future, early fears of long-lived strontium-90 from nuclear fission poisoning all life were soon debunked, because the human body discriminates against strontium-90 uptake in favour of calcium. It had been believed by scare-mongering anti-nuclear chemists like Linus “failed-to-decipher-the-structure-of-DNA” Pauling that strontium would be taken up like calcium, because both are in the same group (column) of the Periodic Table. Big mistake! The discrimination is as follows: 1 unit of soluble (biologically available) Sr-90 per kg of calcium in the top soil becomes 0.7 units of Sr-90 per kg of calcium in plants, which becomes 0.1 unit of Sr-90 per kg of calcium in milk, and finally just 0.07 units of Sr-90 per kg of calcium in humans. At each step, the concentration of Sr-90 relative to natural calcium falls, so that it is 14 times less in a human than in the soil. So the Periodic Table can be very misleading!
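The food-chain discrimination steps above multiply out as follows (figures from the text; the variable names are mine):

```python
# Sr-90 per kg of calcium at each food-chain step, relative to
# 1 unit in the topsoil (figures quoted in the paragraph above):
in_soil = 1.0
in_plants = 0.7
in_milk = 0.1
in_humans = 0.07

# Each step discriminates against Sr-90 in favour of calcium:
step_factors = [in_plants / in_soil, in_milk / in_plants, in_humans / in_milk]

human_relative_to_soil = 1.0
for f in step_factors:
    human_relative_to_soil *= f   # ends at 0.07

overall_discrimination = 1 / human_relative_to_soil   # ~14x less than soil
```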

Robert U. Ayres summarises the results of extensive Sr-90 fallout research from the 1945-62 atmospheric nuclear weapons tests in his Hudson Institute report Environmental Effects of Nuclear Weapons, HI-518-RR, AD632279 (1965), page 1-44. There are two ways that deposited Sr-90 gets into plants and animals. First, there is direct contamination of foliage by fresh fallout particles. Most of this can be washed off, or removed by peeling off the outer layers of lettuce or cabbage, the pods of peas, or the outer husks of grain crops like wheat. There is no direct contamination of root crops. Ayres also shows in table 1-7 that, on average, milling of cereals reduced the direct-contamination Sr-90 dose to only 35% of the dose from consuming unmilled cereal. Secondly, there is the very small chemical root uptake of soluble Sr-90 that is washed into the soil. Because of discrimination against Sr-90 and in favour of calcium at every step of the food chain, the root uptake doses from Sr-90 were negligible in comparison to direct foliage contamination Sr-90 doses after the nuclear weapons tests of the 1950s and 1960s.

Ayres reports in table 1-7 on page 1-44 that 1 mCi of soluble Sr-90 deposited per square statute mile produced a final root-uptake equilibrium peak of 4.1 Becquerels (Bq) of Sr-90 per kg of human bone calcium, while for either green vegetables or root crops the same deposition produced 15 Bq/kg of human bone calcium, and for cereals it was 7.4 Bq/kg. The root uptake of Sr-90 from soil was reduced in areas with low soil calcium by simply adding lime to the soil; the calcium crowded out much of the Sr-90, which is already discriminated against by plants and animals. Deep ploughing put the contaminated topsoil below the average root depth of the crops, quickly cutting the Sr-90 uptake, although the success of this approach depends upon the water table and the soil cohesion. Growing crops with low calcium content also reduces the uptake of Sr-90 (potatoes contain only 1 mg of calcium per 10 calories). (Ayres reports his figures in “SU”, the old politically-incorrect 1953 RAND Corporation “sunshine unit” or “strontium unit”, defined as 1 micro-micro-Curie of Sr-90 per gram of bone calcium; we have converted to Becquerels per kg for simplicity. 1 Curie = 1 Ci = 3.7 x 10^10 Becquerels.)
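The unit conversion mentioned in the parenthesis is simple to encode (the 4.1 Bq/kg example value is from Ayres's table as quoted above; the function name is mine):

```python
def strontium_units_to_Bq_per_kg(su):
    """Convert the old 'sunshine unit' (1 pCi of Sr-90 per gram of
    bone calcium) to Bq per kg of calcium. Since 1 Ci = 3.7e10 Bq,
    1 SU = 1e-12 Ci/g = 0.037 Bq/g = 37 Bq/kg of calcium."""
    return su * 37.0

# Ayres's table 1-7 value of 4.1 Bq/kg corresponds to about 0.11 SU:
su_equivalent = 4.1 / 37.0
```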


The mainstream groupthink deception over radiation hormesis

Robin Clarke ignorantly asserts on page 9 of his 1975 book Notes for the Future: "the lethal 'side-effects' of radiation from a nuclear reactor are not so different from those of the bomb itself, except in scale."

So let's explore the history of censorship of the dependence of radiation effects on dose rate, called hormesis. The effects at Hiroshima and Rongelap were due to extremely high doses received at extremely high dose rates, which prevented DNA repair enzymes from repairing the double-strand breaks as they occurred (repair which does occur at lower dose rates), while Chernobyl's widespread effects were provably due to radiophobia - the false reporting of natural cancer and natural genetic effects as due to radiation, often "justified" by non-existent, inappropriate, or poorly-diagnosed "unexposed control groups". If you diagnose 100% of the natural cancer in an irradiated group but only 50% of the cancer in a "control group", then you will claim that the risk of cancer in the irradiated group is double that in the unexposed "control group", when it's simply a difference in diagnosis rates due to radiophobia-induced hypochondria: the irradiated, scare-mongered group will be more likely to report any possible cancer symptoms than the unirradiated group.







Above: low dose rates of radiation stimulate growth in mice, evidence of radiation hormesis (from Dr T. D. Luckey, Radiation Hormesis Overview, lecture given at ICONE-7, Tokyo, April 1999). We discussed the source of errors in the mainstream linear, no-threshold extrapolations from Hiroshima and Nagasaki data in previous posts: they apply to very high dose rates (initial nuclear radiation received over a period of seconds), and are extrapolated downwards using the linear law of genetic effects in non-mammalian, short-lived insects (Muller's fruit flies), and also plants like maize. Insects and plants like maize don't live for decades before reproduction, so they don't acquire significant doses of natural background radiation, and they have no need to evolve DNA repair enzymes (unlike mammals, which produce relatively few offspring after a period measured in decades). So the present radiation dose standards are based on a false radiation effects model from insects and plants (dating back to anti-nuclear bias by Edward Lewis in the 1957 Congressional Hearings on fallout, as documented in detail in previous posts), which must be revised to take account of mammalian DNA repair enzyme (e.g., protein P53) stimulation as a form of cancer prevention at low dose rates. This stimulation is akin to the overcompensation of muscles subjected to regular exercise: an increased rate of DNA breakage leads the body to devote more metabolism to DNA repair enzymes like protein P53, which overcompensates. You reduce the cancer risk by devoting more energy to DNA repair enzymes than is normally used in that manner, analogous to reducing a fire risk by spending more money on fire sprinkler systems or fire resistant materials, as Dr Jeffrey Moss explains in the video about hormesis below:



Above: hormesis is dose rate dependent, not just dose dependent! Radiation or chemically induced double-strand DNA breaks occurring at a rate faster than they can be repaired by DNA repair enzymes in cell nuclei (such as protein P53) result in a net increase in cancer risk, while lower dose rates can stimulate the whole DNA repair enzyme system to repair breaks more efficiently than it does naturally. This is seen clearly in skin cancer, from high dose rates of ultraviolet radiation. See also the posts here and here.




Above: the loss of naivety in dose-response relationships. The optimum curve, effect probability = e^(-bA) - e^(-cA), represents the stimulation of the DNA repair enzyme system by radiation dose rate A. At high dose rates, the DNA repair enzyme system is itself damaged and unable to function efficiently, but at lower dose rates it is stimulated by radiation into working faster. However, the discovery of DNA repair enzymes only dates from the 1970s, and data on non-linear radiation effects from earlier periods was ruthlessly censored (mainly by anti-nuclear fallout political propaganda and scare-mongering) in deference to the simplest idea, the linear dose-effects law, where effects are supposedly directly proportional to causes. Yet you soon learn in most of medicine that increasing the dose of a vitamin or other drug doesn't improve the effect without limit. Either a saturation point is reached, beyond which subsequent doses are simply wasted, or - worse - an overdose produces smaller benefits than lower doses! E.g., if the side-effects of a massive overdose of aspirin kill most people by internal bleeding, then the overall beneficial effect of increasing the dose drops when the optimum dose rate is exceeded, instead of either increasing or remaining constant! Although the mathematical theory of exponentials which produces these realistic dose-effects curves has been known for a long time (the constants are easy to fix from the linear law for very small doses, and from experimental data on large doses), there is an Orwellian "doublethink" or "crimestop" brainwashing system in place in groupthink science dogma hype, which prefers to endlessly promote false linear laws. To get the facts to "fit" such false laws, the data is fiddled by the simple process of natural selection: disregarding as "suspect" any data that doesn't conform to the reigning mainstream science dogma and bias!
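The optimum curve quoted in the caption is easy to explore numerically. A short sketch, with the constants b and c purely illustrative (they are not fitted to any data); setting the derivative to zero gives the peak dose rate analytically:

```python
import math

def effect_probability(A, b, c):
    """Hormetic dose-response curve quoted above:
    P(A) = exp(-b*A) - exp(-c*A), with c > b so that P(0) = 0,
    P rises to a peak, then falls at high dose rates A."""
    return math.exp(-b * A) - math.exp(-c * A)

def optimum_dose_rate(b, c):
    """Setting dP/dA = -b*exp(-b*A) + c*exp(-c*A) = 0 and solving
    gives the peak at A = ln(c/b) / (c - b)."""
    return math.log(c / b) / (c - b)

# Illustrative constants only (not fitted to any data):
b, c = 0.1, 1.0
A_peak = optimum_dose_rate(b, c)   # peak of the curve, ~2.56 for these b, c
```

The two constants play the roles described in the caption: b sets the slow decline of benefit at high dose rates, while c sets how quickly the repair-stimulation effect switches on at low dose rates.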
Exactly the same mechanism led to a gradual evolution of experimental measurements of fundamental constants like the electronic charge: the first investigators made errors but were revered. Subsequent investigators were awed by the first investigators, and feared publishing any data which diverged too far from theirs. So they deleted as "suspect" most of the correct data, being biased in favour of incorrect figures that confirmed the mainstream prejudice! Only in gradual steps, paper after paper, did the consensus shift towards the correct values.

More pseudo-"environmentalism" Dr Goebbels-type green pension fund-boosting scare lies, this time debunked by Professor Richard Muller (hat tip to Dr Lubos Motl):



“Atomkraft? Nein Danke! ... Nuclear Power? No Thanks! ... saying ‘no to nuclear’ has never been about reasoned argument. It’s about gut politics, primitive superstition and scientific ignorance, as in: nuclear power is associated with atom bombs and Hiroshima and Cold War terror, and it’s made by scary scientists ... after the atom bombing of Hiroshima and Nagasaki, thousands died in the immediate blast, and thousands more as a result of burns afterwards. But ... studies found, their life expectancy had actually increased.”

- James Delingpole, How to be right, Headline, London, 2007, pages 113-4.




Above: Roddy Campbell’s post begins with the photo above of the mutant horse found near Chernobyl: “This mutant pony - pictured near Chernobyl - has 11 bodies, 11 heads and no fewer than 44 legs.” It continues:

“About 4000 cases of thyroid cancer, mainly in children and adolescents at the time of the accident, have resulted from the accident’s contamination and at least nine children died of thyroid cancer; however the survival rate among such cancer victims, judging from experience in Belarus, has been almost 99%.”

- Roddy Campbell, “Nuclear power – some perspective”, guest post on James Delingpole’s Telegraph blog, 14 March 2011.


But 1% of 4,000 equals 40 deaths downwind (off-site) from Chernobyl. For iodine-131 (half life 8 days) there are simple countermeasures, like not consuming contaminated food and water, or taking potassium iodide or iodate tablets (130 mg per day). These flood the thyroid gland with stable iodine, preventing uptake of 99% of the iodine-131. The death figure then goes down to 1% of 40, i.e. effectively no expected casualties.

But the data quoted is wrong. The observed rise in thyroid cancers is an artefact of diagnosis rates, and doubts have been expressed even over the 40 deaths at Chernobyl, by Dr Zbigniew Jaworowski, "Radiation Risk and Ethics: Health Hazards, Prevention Costs, and Radiophobia", Physics Today, April 2000, pp. 89-90:

"... it is important to note that, given the effects of a few seconds of irradiation at Hiroshima and Nagasaki in 1945, a threshold near 200 mSv may be expected for leukemia and some solid tumors. For a protracted lifetime natural exposure, a threshold may be set at a level of several thousand millisieverts for malignancies, of 10 grays for radium-226 in bones, and probably about 1.5-2.0 Gy for lung cancer after x-ray and gamma irradiation. The hormetic effects, such as a decreased cancer incidence at low doses and increased longevity, may be used as a guide for estimating practical thresholds and for setting standards. ...

"The highest average thyroid doses in children (177 mGy) were accumulated in the Gomel region of Belarus. The highest incidence of thyroid cancer (17.9 cases per 100,000 children) occurred there in 1995, which means that the rate had increased by a factor of about 25 since 1987.

"This rate increase was probably a result of improved screening [not radiation!]. Even then, the incidence rate for occult thyroid cancers was still a thousand times lower than it was for occult thyroid cancers in nonexposed populations (in the US, for example, the rate is 13,000 per 100,000 persons, and in Finland it is 35,600 per 100,000 persons). Thus, given the prospect of improved diagnostics, there is an enormous potential for detecting yet more [fictitious] "excess" thyroid cancers. In a study in the US that was performed during the period of active screening in 1974-79, it was determined that the incidence rate of malignant and other thyroid nodules was greater by 21-fold than it had been in the pre-1974 period."


The normal thyroid "nodule" incidence is 16% in Americans, and 35.6% in the more carefully screened Finnish population. A large percentage of people have thyroids that don't conform to the medical textbook. What happens after a nuclear accident is that people go looking for these nodules, feeling people's throats, and detecting more of the natural incidence, then mis-reporting this rise in the detection of natural thyroid gland "abnormalities" as radiation-induced nodules.
At Rongelap atoll, where people received a really massive thyroid dose of 2,100 R or 21 Gray from drinking water from an open cistern for two days before evacuation, 115 miles downwind of the 15 megaton Bravo nuclear test on 1 March 1954, some really did get thyroid cancer (source: Dr Edward T. Lessard, et al., Thyroid Absorbed Dose for People at Rongelap, Utirik, and Sifo on March 1, 1954, BNL-5188). But the highest dose in children's thyroids after Chernobyl was only 177 mGy or 0.177 Gray, over a hundred times lower than the 21 Gray thyroid dose at Rongelap! It seems that the claimed Chernobyl thyroid cancers are natural cancers, below the threshold cancer dose, detected due to screening!

The same occurred with genetic effects immediately after Hiroshima and Chernobyl. E.g., the BBC and newspapers had an episode after Chernobyl where they visited clinics filled with special needs children downwind of Chernobyl, and tried to claim that these children were proof of the evil of nuclear power, regardless of the natural incidence. Some clinic directors cooperated, to get funding, which was needed (no problem there!). The problem was the big lie of conflating natural incidences of genetic effects, cancer, and thyroid "malformations" with radiation effects, for deliberate anti-nuclear scaremongering.

Because the scientific community were unable to communicate such facts efficiently against pseudo-scientific propaganda, over 100,000 human lives were lost by abortions after Chernobyl: in 1995, environmentalist Michael Allaby stated on pages 191-7 of his book Facing the Future: the Case for Science (Bloomsbury, London):

"The clear aim of the anti-nuclear movement is to silence all opposition ... theirs are now the only voices heard ... In the Gomel district ... which was one of the most heavily contaminated [after the Chernobyl nuclear disaster of 1986], the death rate per thousand newborn babies was 16.3 in 1985, 13.4 in 1986, and 13.1 in 1987; in Kiev region the figures ... were, respectively, 15.5, 12.2, and 12.1."


The International Atomic Energy Agency has reported that over 100,000 excess abortions were performed throughout Western Europe after the Chernobyl accident (reference: L. E. Ketchum, Lessons of Chernobyl: SNM members try to decontaminate world threatened by fallout, Part I [Newsline], J. Nucl. Med., vol. 28, 1987, pp. 413-22). This is the danger of lying. The newspapers and media generally have a vested interest in hyping anti-nuclear lies to make a big "splash" that sells newspapers.

It's all phoney: extrapolating linearly down from effects at massive doses and massive dose rates, despite the non-linear dose-response, or falsely claiming that improved diagnosis rates are effects of radiation. The whole reason why nuclear power is currently expensive is political fear-mongering over lying radiation “risks”, proven by even more obvious groupthink fakery than the photograph of the “44 legged mutant horse” from Chernobyl which Delingpole gives. This pushes up the costs of reprocessing spent fuel, because it has to be done in laboratory-type glove boxes, with staff restricted to tiny doses. E = mc² tells you that 1 kg converted completely into energy gives 9 × 10¹⁶ joules. Fission converts about 0.1% of the mass of uranium-235 into energy, so fissioning 1 kg of uranium-235 produces 9 × 10¹³ joules.
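The arithmetic in that last step is easy to verify (a sketch; the 0.1% mass-to-energy conversion fraction for fission is the figure used above):

```python
# Sketch of the E = mc^2 arithmetic: energy from complete conversion of
# 1 kg of mass, and from fissioning 1 kg of uranium-235 (~0.1% conversion).
c = 3.0e8                      # speed of light, m/s
m = 1.0                        # mass, kg
e_total = m * c**2             # complete conversion: 9e16 joules
e_fission = 0.001 * e_total    # fission converts ~0.1% of the mass: 9e13 joules
print(f"Complete conversion: {e_total:.1e} J; fission of 1 kg U-235: {e_fission:.1e} J")
```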

Done efficiently with cheap reprocessing and with the surplus neutrons being captured in cheap and abundant uranium-238 to form plutonium-239 (or captured in cheap and abundant thorium-232 to form uranium-233), nuclear power would be the cheapest power on earth. The whole problem is psychological "groupthink" against small doses of radiation, despite the fact we get doses all the time.

The reason why you can't extract dinosaur DNA from a fossil mosquito in amber 65 million years old is that the DNA has been totally broken down by the natural background nuclear radiation dose exceeding 6 million centigray over that period. DNA in living cells has received the same dose while being passed on during all those generations, but because of DNA repair enzymes in mammals (as distinct from natural selection in simple insects like Muller’s notoriously misleading X-rayed fruitfly mutations), the damage has been repaired.
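The cumulative dose figure can be checked in one line (a sketch; the assumed background rate of about 0.1 centigray per year is my round number, not the post's):

```python
# Sketch: cumulative natural background dose over 65 million years,
# assuming a round-number background rate of ~0.1 cGy/year (my assumption).
background_cgy_per_year = 0.1
years = 65e6
total_cgy = background_cgy_per_year * years   # ~6.5 million centigray
print(f"Cumulative background dose: {total_cgy:.1e} cGy")
```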

Why there is media ignorance and prejudice on the safety of nuclear radiation and energy

The problems facing public perception of nuclear power begin with the dismal fact that nuclear physics is a notoriously obfuscated empirical science! For example, the theory of quantum chromodynamics is extremely difficult to solve even for relatively simple interactions, due to the divergence of the path integral's perturbative expansion, caused by the large running coupling of the strong nuclear interaction. Successively more complex terms in the quantum chromodynamic expansion correspond to Feynman diagrams with ever increasing contributions to the path integral, so they cannot be ignored, as is done in quantum electrodynamics (where successive terms make ever smaller contributions, due to the relatively small value of the electromagnetic coupling, alpha).

The Standard Model of particle physics is too complicated for nuclear physics, where you have a large number of nucleons. Therefore, instead of using one theory to calculate everything, nuclear physics is built on various empirical models of the nucleus: a liquid droplet for fission, while the gamma ray line spectra from nuclear radioactivity indicate a definite shell structure, analogous to the shells of electrons in atomic physics! The inability to picture the nucleus clearly has not helped the public at large to understand nuclear physics. Mathematics has turned the subject into an unpopular occult priesthood, while bomb plutonium production for deterrence, using nuclear reactors, introduces secrecy and militarism.

There are different models of the nucleus used for different purposes, reminding you of the original confused response of many physicists to Louis de Broglie’s theory of wave-particle duality (de Broglie, and his friend David Bohm, believed in some kind of space-time fabric – don’t call it aether – which oscillates like a wave as a particle travels through it). Einstein’s E = mc² doesn’t explain nuclear energy, since you can use the same formula for non-nuclear energy, e.g. the “potential energy” of electromagnetic fields (the “binding energy” for chemical reactions) in an ordinary battery is equivalent to a tiny mass increase. When a chemical battery is discharged, the energy it loses causes a tiny fall in mass, exactly as predicted by Einstein’s mass-energy equivalence. Therefore, Einstein’s E = mc² is not unique to nuclear energy, and it is provably an obfuscation to apply it to nuclear reactions but not to chemical reactions! Most people who listen to the “Einstein equation explanation” of nuclear energy know they don’t gain any understanding from it, because it explains nothing in a useful way, even ignoring the fact that the equation also applies to chemical energy. So they feel insulted, patronised, and annoyed by this self-indulgent simplistic obfuscation from physicists.



Above: the misleading curve of nuclear binding energy (credit: Dr David Langford, who points out that although beryllium-8 “should” be stable, it “in practice flies apart to give two helium nuclei”). The “binding energy per nucleon”, peaking at 8.7 MeV/nucleon for nickel-60, is the mean amount of energy needed to free a nucleon (neutron or proton) from its nucleus. Since this is always above 1 MeV on the graph above, you would think that no particle with less than 1 MeV of energy could ever possibly cause a nucleus to break up! However, the average binding energy can be very misleading, since the nucleons in the outer shells of the nucleus are less strongly held, and the fields holding them aren’t classical continuously-operating fields, but particle-mediated, fluctuating fields. Therefore, some nuclei can emit neutrons spontaneously despite the average values of nuclear binding energy shown! In addition, low energy neutrons (below 1 MeV) can induce nuclear reactions like fission in heavy nuclides of odd mass number (233, 235, 237, 239, and 241), but not of even mass number (232, 234, 236, or 238). Odd mass numbers imply incomplete nuclear subshells and therefore higher nuclear instability and reactivity, just as occurs in chemistry with odd atomic (proton) numbers, not mass numbers; even numbers imply fully paired-up particles. The fact that the atomic (proton) number is what matters for chemistry, while the mass (nucleon) number is what matters for nuclear physics, shows that the strong nuclear force determining the nuclear shell structure does not depend on electric charge. This led Heisenberg to argue that each nucleon, whether electrically positive or neutral, carries a similar nuclear “isospin” charge.
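The "average binding energy per nucleon" in the caption can be estimated from the liquid drop model mentioned in this post, via the semi-empirical mass formula. Here is a sketch using one common set of textbook coefficients (the exact values vary between fits, so treat the output as approximate):

```python
import math

# Sketch: semi-empirical (liquid drop) mass formula for binding energy
# per nucleon, with one common set of textbook coefficients (MeV).
def binding_per_nucleon(A, Z):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A                             # volume term
         - aS * A ** (2 / 3)                # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion of protons
         - aA * (A - 2 * Z) ** 2 / A)       # symmetry term
    if Z % 2 == 0 and N % 2 == 0:
        B += aP / math.sqrt(A)              # even-even pairing bonus
    elif Z % 2 == 1 and N % 2 == 1:
        B -= aP / math.sqrt(A)              # odd-odd pairing penalty
    return B / A

print(f"Ni-60: {binding_per_nucleon(60, 28):.2f} MeV/nucleon")   # near the ~8.7 peak
print(f"U-235: {binding_per_nucleon(235, 92):.2f} MeV/nucleon")  # lower, hence fission releases energy
```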

The nucleus is 10,000 times smaller in radius than the entire atom, but contains the nucleons, with their positive or neutral electric charges, all nearby in that tiny volume. Since the electromagnetic force is an inverse-square law force, in the nucleus it is stronger by a factor of about (10,000)² = 100,000,000 than between electrons and the nucleus. Therefore, the electromagnetic forces between protons in the nucleus are on the order of 100 million times stronger than the electromagnetic forces of chemistry, which bind orbiting electrons to nuclei. Because of this immense electrostatic repulsion between protons, the nucleus would explode were it not for the “strong nuclear” attractive force between protons and neutrons, which is due to the exchange of “virtual pions”. The virtual pions are off-shell particles created by pair production, which occurs in strong electric fields (exceeding Schwinger’s threshold of 1.3 × 10¹⁸ volts/metre, for steady electric fields).

This exchange of virtual pions between nucleons causes the attraction that prevents nuclei from exploding, and when you fire a neutron into uranium-235 it upsets the balance between electromagnetic repulsion and virtual pion-mediated attraction, and causes the nucleus to fission. The electromagnetic repulsion between protons is continually trying to explode the nucleus, and being thwarted by the nuclear strong force, mediated at long distances by virtual pion exchange. Therefore, nuclear explosions are really caused by the electromagnetic repulsive energy between protons overcoming the attractive nuclear binding forces! The reason why this electromagnetic repulsion causes nuclei to break up and release so much more energy than is given off in chemical explosions is simply that the nucleus is 10,000 times smaller in radius than the atom, but contains a similar amount of electric charge (the number of protons in the nucleus equals the number of orbital electrons, unless the atom is ionized), so by Coulomb’s inverse-square law the nuclear electromagnetic repulsive forces are (10,000)² = 100,000,000 times stronger than those between orbital electrons and nuclei.
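The inverse-square scaling in that argument is just:

```python
# Sketch of the scaling argument above: shrink the charge separation by a
# factor of 10,000 and the inverse-square Coulomb force rises by (10,000)^2.
atom_to_nucleus_radius_ratio = 10_000
force_ratio = atom_to_nucleus_radius_ratio ** 2
print(f"Nuclear-scale Coulomb forces ~ {force_ratio:,} times atomic-scale forces")
```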

What’s interesting next is the M-shaped distribution curve of fission product abundance. The liquid drop model predicts – wrongly – that two approximately equal droplets are most likely, but in reality the most likely outcome is for one fission product to have a mass considerably larger than the other. This is due to the relative stability of different combinations of nuclear shell structures:



The unnecessary deaths due to the political radiation hormesis cover-up by the anti-nuclear lobby

The whole nuclear industry is in limbo on this: it is mainly conservative, and believes the best way to resolve any crisis is to do nothing and say nothing. The anti-nuclear lobby uses falsified statistics that are complete lies, but it gains ground because hardly anybody defends the facts. One typical ploy is the lying claim that there is no human proof of hormesis or thresholds (ignoring the radium dial painters and Hiroshima), and that animal data is inadmissible.

There’s plenty of evidence using mice that dose rates a few hundred times natural background stimulate the DNA repair enzymes to use more energy and work faster, not only preventing additional risks, but also actually reducing the natural cancer risk from the natural 15 double strand breaks per cell per day.

More recently, there was a fine piece of mice research by Kazuo Sakai, Toshiyasu Iwasaki, Yuko Hoshi, Takaharu Nomura, Takeshi Oda, Kazuko Fujita, Takeshi Yamada, and Hiroshi Tanooka, International Congress Series (2002), vol. 1236 (Radiation and Homoeostasis), pp. 487-490. They found that a dose rate of 1 mGy/hour (100 mR/hour, or 10,000 times natural radiation background) stops cancer, and a further paper by Sakai and collaborators in 2006 gives statistically significant evidence that 0.7 mGy/hour extended the life expectancy of mice by 15% (Sakai has nice colour photos showing the slower aging of the irradiated mice, shown above).

“Today we have a population of 2,383 [radium dial painter] cases for whom we have reliable body content measurements. . . . All 64 bone sarcoma [cancer] cases occurred in the 264 cases with more than 10 Gy [1,000 rads], while no sarcomas appeared in the 2,119 radium cases with less than 10 Gy.”

- Dr Robert Rowland, Director of the Center for Human Radiobiology, Bone Sarcoma in Humans Induced by Radium: A Threshold Response?, Proceedings of the 27th Annual Meeting, European Society for Radiation Biology, Radioprotection colloquies, Vol. 32CI (1997), pp. 331-8.

The higher the dose rate, the lower the threshold dose for effects, just as with aspirin. The radium dial painters had their bones irradiated by deposited radium over typically 30 years. Rowland could measure the radium in the bones after they died to determine the dose rate accurately, so this is reliable data (he even exhumed skeletons to get data). His funding was cut off when it became clear that there was a massive threshold dose needed for bone cancer when the dose was spread out over time. For the Hiroshima nuclear bomb data, the dose rate was much higher, so the threshold dose for cancer was only a few cGy.
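Rowland's figures quoted above make the threshold obvious when turned into incidence rates (a sketch using only the numbers in the quotation):

```python
# Sketch: incidence rates from Rowland's radium dial painter data as
# quoted above (64 bone sarcomas, all in the 264 cases above 10 Gy).
high_cases, high_sarcomas = 264, 64
low_cases, low_sarcomas = 2_119, 0
print(f"Above 10 Gy: {100 * high_sarcomas / high_cases:.0f}% bone sarcoma incidence")
print(f"Below 10 Gy: {100 * low_sarcomas / low_cases:.0f}% incidence in {low_cases} cases")
```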

There is plenty of data proving that it's the dose rate, and not the old 1950s accumulated "dose", that really matters, because DNA repair enzymes like protein P53 are overloaded at high dose rates. Likewise, you can survive a “dose” of 1,000 aspirins if you spread it over 20 years, but you’re killed if you take the same dose all at once. The dose criterion implicitly assumes no repair, so it is clearly wrong.

Muller, who got the Nobel Prize for discovering that X-rays mutate fruit flies, argued in May 1957 at the U.S. Congressional hearings on The Nature of Radioactive Fallout and Its Effects on Man that there is no significant dose rate effect or threshold dose, using his fruit fly data plus some geneticists' data on radiation-induced genetic effects in maize plants. However, short-lived fruit flies and seasonal crops lack the DNA repair enzymes like P53, which were only discovered about 20 years later!



The DNA double helix (two strands of DNA facing each other in a spiral) in every cell nucleus in the human body suffers 200,000 single strand breaks and 15 double strand breaks every day. What's interesting is that only 0.007% of natural breaks are double-strand breaks, while 4% of radiation-induced breaks are double-strand breaks. This debunks the groupthink myth that natural DNA damage is due to natural background radiation. It isn’t! If it were, the ratio of single to double strand breaks would be the same for natural and radiation-induced DNA damage.
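The strand-break arithmetic is worth checking (a sketch using the figures quoted above):

```python
# Sketch: fraction of natural DNA breaks that are double-strand, versus
# the ~4% quoted above for radiation-induced breaks.
single_per_day = 200_000
double_per_day = 15
natural_ds_pct = 100 * double_per_day / (single_per_day + double_per_day)
radiation_ds_pct = 4.0   # percent, as quoted in the text
print(f"Natural double-strand fraction: {natural_ds_pct:.4f}%")
print(f"Radiation-induced double-strand fraction: {radiation_ds_pct}%")
```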

It turns out that the natural damage to DNA is mostly due to thermal instability, i.e. 37 °C body temperature, the mechanism being Brownian motion kinetic energy effects, i.e. water molecule bombardment of DNA molecules, related natural free radicals, etc. The cells have DNA repair proteins to rejoin the broken ends of DNA molecules. Single strand breaks don’t cause much risk, because the double helix as a whole remains unbroken. The one broken strand is easily rejoined by a DNA repair enzyme like P53, and all is well.

The cancer risk occurs with double strand breaks, because then the entire double helix is broken at that point. If two double strand breaks occur in quick succession, before a DNA repair protein has time to rejoin them correctly – as happens at a very high radiation dose rate – then the loose broken-off segment of DNA might move, reverse, or be lost, and the wrong ends can be joined by accident (like trying to repair a vase after it is smashed into lots of similar pieces all at once), causing a mutation that can lead to cancer in some cases.

FURTHER READING: SELECTED USEFUL POST REFERENCES

Factual evidence versus the consensus of ignorant opinion and propaganda during the 1957 U.S. Congressional Hearings on the Effects of Nuclear Fallout

Radiation Effects Research Foundation propaganda deceptions and biases exposed and highlighted for the world to see

Hiroshima and Nagasaki propaganda debunked by factual evidence which politicians cover up

Civil defense facts from Hiroshima and Nagasaki which the politicians suppress with secrecy laws

Disproof of Professor Ernest Sternglass’s low level radiation scare propaganda (why doesn’t he admit he was wrong?)

Herman Kahn’s disproof of the “we’re all going to die from strontium-90” liars in his 1960 book On Thermonuclear War (why didn’t Newman and Piel of the Scientific American admit they were liars, instead of lying about Kahn’s book for inhuman anti-civil defense propaganda purposes?)

WHAT IS NUKEGATE? The Introduction to "Nuclear Weapons Effects Theory" (1990 unpublished book), as updated 2025

R. G. Shreffler and W. S. Bennett, Tactical nuclear warfare, Los Alamos report LA-4467-MS, originally classified SECRET, p. 8 (linked HERE): ...