Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), which examines in detail the evidence for leukemia induced by radiation as known 45 years ago.
Obviously at that time the human data collected from Hiroshima and Nagasaki were far from complete, for although the peak radiation-induced leukemia rate occurred in 1951/52, some 6-7 years after exposure, the latent period was much longer in many cases. In any case, the doses which the survivors received were poorly known. Today, even the shielded doses (the calculation of radiation shielding is vital for those who survived in brick or concrete buildings around ground zero) are known quite well, with a standard deviation of about +/-30%.
These aren't just theoretical computer calculations. With very sensitive instruments and long counting periods, it is possible to measure neutron induced activity in iron from irradiated buildings in Japan, and assess the neutron exposure, while gamma ray energy stored in ceramics like roof tiles can be released as light by heating them. It's possible to do this in a calibrated way (e.g., you can measure the light emitted on heating and then irradiate the same sample with a known dose of radiation, and repeat the process, so that the comparison calibrates the original radiation dose to the material), and after allowing for background radiation you can find out the gamma ray doses from the bomb.
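The thermoluminescence calibration logic described above can be sketched in a few lines. This is a hypothetical illustration only: the function name and the numbers are invented, and real retrospective dosimetry involves many corrections (fading, glow-curve analysis) not shown here.

```python
def estimate_original_dose(light_first_heating, light_after_cal, cal_dose_gray,
                           background_dose_gray=0.0):
    """Estimate the dose a ceramic sample originally received, assuming the
    light released on heating is proportional to the absorbed dose.

    light_first_heating: light signal from the first heating (original dose)
    light_after_cal:     light signal after re-irradiating with cal_dose_gray
    background_dose_gray: estimated natural background dose to subtract
    """
    # Sensitivity (light units per gray) from the known calibration exposure:
    sensitivity = light_after_cal / cal_dose_gray
    total_dose = light_first_heating / sensitivity
    return total_dose - background_dose_gray

# Invented example: 500 light units on first heating, 100 units from a
# 0.1 Gy calibration dose, 0.05 Gy attributed to decades of background.
dose = estimate_original_dose(500.0, 100.0, 0.1, background_dose_gray=0.05)
print(round(dose, 3))  # ~0.45 Gy attributed to the bomb
```

The key design point is that irradiating the *same* sample with a known dose calibrates out the sample's individual sensitivity, so no absolute light-to-dose conversion factor is needed.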
Because in 1962 there was little useful human data, extensive experiments were made on animals, in particular mice. Hence the title of Dr Loutit's book, Irradiation of Mice and Men.
What caught my eye was the section on pages 61-82 on factors relating to leukemia. On page 61 he states:
"... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls."
So for a fixed dose, 1,000 R spread over a month (which is far less lethal in short term effects than the same dose spread over a few seconds, as occurs with initial radiation in a nuclear explosion, or over a few days when most of the fallout dose is delivered), the leukemia rate can vary from 5-40% as the dose rate varies from 1.3-81 r/hour.
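The figures quoted from Mole's experiment can be laid out numerically to show how sharply a dose-only model fails here. The data values are from the quotation above; the arrangement is my own.

```python
# Mole's mouse experiment: total dose fixed at 1,000 r in every group,
# only the dose rate varied (figures from the quotation above).
incidence_by_dose_rate = {
    81.0: 0.40,   # 81 r/hour, intermittent: 40% leukemia incidence
    1.3: 0.05,    # 1.3 r/hour, continuous:   5% leukemia incidence
}
control_incidence = 0.02  # 2% incidence in the control group

# A dose-only (LNT-style) model predicts identical excess incidence for
# identical total dose. The observed excess differs by over a factor of 12:
excess = {rate: inc - control_incidence
          for rate, inc in incidence_by_dose_rate.items()}
ratio = excess[81.0] / excess[1.3]
print(ratio)  # roughly 12.7x more excess leukemia at the high dose rate
```

Since the total dose is identical in both groups, any model in which risk is a function of dose alone predicts a ratio of exactly 1 here.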
This does illustrate that the effects, even long-term effects, don't depend merely on the dose, but also upon the dose rate. There is a kind of religion that Health Physics is based upon, and has been based upon since about 1956, which states that the long-term effects of radiation are linearly dependent upon the total dose.
The linear non-threshold (LNT) anti-civil defence dogma results from ignoring the vitally important effects of the dose rate on cancer induction, which have been known and published in papers by Mole and a book by Loutit for about 50 years; the current dogma is falsely based on merely the total dose, thus ignoring the time-dependent ability of protein P53 and other cancer-prevention mechanisms to repair broken DNA segments. This is particularly the case for double strand breaks, where the whole double helix gets broken; the repair of single strand breaks is less time-dependent because there is no risk of the broken single strand being joined to the wrong end of a broken DNA segment. Repair is only successful in preventing cancer if the broken ends are repaired correctly before too many unrepaired breaks have accumulated in a short time; if too many double strand breaks occur quickly, segments can be incorrectly 'repaired' with double strand breaks being mismatched to the wrong segment ends, possibly inducing cancer if the resulting somatic cell can then undergo division successfully without apoptosis.
The only modifications officially included for other factors are a set of radiosensitivity factors for different organs (those with fast-dividing cells are of course the most sensitive to radiation, since their DNA is highly vulnerable - while dividing - more of the time than in other cells), and a quality factor for the type and energy of radiation (the amount of energy deposited in tissue per unit length of the track of a particle is the "linear energy transfer" or LET, and the higher the LET, the more ionisation-type disruption a single particle causes to tissue while passing). In general, alpha radiation (helium nuclei, massive and highly charged) inside the body is high-LET radiation, so even a single alpha particle carries a cancer risk regardless of the dose rate, and the cancer risk is simply proportional to dose, as orthodoxy says.
But for gamma and X-rays, which are low-LET radiations, the amount of ionization per unit length of the particle path is very small. Beta particle radiation is intermediate between alpha and gamma, and its effects are very dependent on the exact energy of the beta particles (strontium-90 emits high energy beta particles which can penetrate thin metal foils, for example, while the very low energy beta particles from carbon-14 are stopped by a sheet of paper, like alpha particles). In general, gamma and X-ray radiation probably require multiple hits on a cell to overwhelm the DNA repair mechanism, so the dose rate is important.
There is no inclusion in radiation dose effects assessments of the effect of dose rate! But it has been known for about twenty years that protein P53 repairs most breaks in DNA due to radiation if the dose rate is low, but can be saturated and overwhelmed at high dose rates.
Hence, there is a known mechanism by which low dose rates are less likely to cause cancer than high dose rates.
Now the entire controversy on radiation effects hinges on the effects of low-LET gamma and X-rays received at different dose rates, which are ignored in favour of just quoting total doses and trying to correlate those total doses to the statistical effects observed. Why? Well, it is administratively convenient just to record one numerical value - dose. Having to work out both the dose and the dose rate, and to set limits according to some combination, would have been difficult in the past (it probably is not difficult today, because modern electronic dosimeters measure dose rates and doses, and could easily be programmed to record the mean dose rate at which the dose was received).
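The point that modern electronic dosimeters could easily log both quantities can be sketched as follows. This is a minimal invented interface, not any real dosimeter firmware, and the units (rads and rad/hour) simply follow the document's usage.

```python
class Dosimeter:
    """Toy sketch of an electronic dosimeter that records both the
    accumulated dose and the mean dose rate at which it was received."""

    def __init__(self):
        self.total_dose = 0.0      # rads
        self.exposure_time = 0.0   # hours during which dose accumulated

    def record(self, dose_rate, hours):
        """Log an exposure interval at a roughly constant dose rate (rad/hour)."""
        self.total_dose += dose_rate * hours
        self.exposure_time += hours

    @property
    def mean_dose_rate(self):
        """Mean dose rate (rad/hour) over the logged exposure time."""
        if self.exposure_time == 0.0:
            return 0.0
        return self.total_dose / self.exposure_time

d = Dosimeter()
d.record(dose_rate=81.0, hours=2.0)   # brief high-rate exposure
d.record(dose_rate=1.3, hours=100.0)  # long low-rate exposure
print(d.total_dose, d.mean_dose_rate)
```

Two exposures with very different biological implications can yield similar total doses; recording the mean rate alongside the total is the administrative change the paragraph above argues for.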
The data from Hiroshima and Nagasaki, and also for most X-ray and gamma ray medical treatment of people, applies to relatively high dose rates. You would expect this data to show more serious effects than similar doses received at lower dose rates.
There is also some effect of age at exposure. Dr Alice Stewart and associates made the discovery (Brit. Med. J., v1, p1495, 1958, and v1, p452, 1961), after analysing the pre-natal X-ray exposures of all children who died from cancers, particularly leukemia, before age 10 in England and Wales between 1953-5, that those who had been X-rayed in utero had an overall risk of dying from cancer before age 10 of 1/600, compared to 1/1,200 for non-exposed children. For children who had been exposed to X-rays while in utero, the peak leukemia and other cancer risk occurred at the age of 3, a shorter latent interval than for survivors of Hiroshima and Nagasaki. (The actual average X-ray dose received by each child in utero was estimated as 1-5 rads on page 82 of Loutit's Irradiation of Mice and Men.)
In Irradiation of Mice and Men, Loutit discusses a serious public relations problem caused by Professor E. B. Lewis, author of Leukemia and Ionizing Radiation, Science 17 May 1957, v125, No. 3255, pp. 965-72. Dr Loutit describes Lewis on page 78 as "... a geneticist of great renown...".
The problem is that Professor Lewis was largely responsible for ignoring dose rate effects. In his 1957 paper, Lewis calculated the probability of inducing leukemia per individual per rad of dose per year, obtaining 0.000,002 both for radiologists (where doses were somewhat uncertain, due to inaccurate dosimetry until the 1950s) and for whole body exposure of atomic bomb survivors (a very uncertain figure in 1957, due partly to the fact that most cancers had not yet appeared, and partly to the massive uncertainty in the dosimetry for atomic bomb survivors at that time), and 0.000,001 for patient groups whose spines were irradiated for ankylosing spondylitis and whose chests were irradiated for thymic enlargement.
Dr Loutit points out that all this data of Lewis' is for high dose rates. The problem is, Loutit writes on page 78 of Irradiation of Mice and Men:
'What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 x 10^{-6} like the others in the table. But the value of 2 x 10^{-6} was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year.'
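The arithmetic Loutit describes can be reproduced directly from the figures in the quotation, which shows just how mechanical Lewis's deduction was:

```python
# All figures are taken from the quotation above.
risk_per_rad = 2e-6       # Lewis's adopted leukemia risk per person per rad
mean_age_years = 33.7     # average age of Brooklyn residents
observed_rate = 65.0      # observed leukemia cases per million per year

# Background of 0.1-0.2 rad/year gives the quoted "7 to 13 cases per
# million per year", i.e. roughly 10-20% of the observed rate:
deduced = [risk_per_rad * mean_age_years * dose_rate * 1e6
           for dose_rate in (0.1, 0.2)]
shares = [rate / observed_rate for rate in deduced]
print(deduced)  # [6.74, 13.48] cases per million per year
print(shares)   # about 0.10 to 0.21 of the observed rate
```

The calculation is internally consistent; the objection in the text is that the input value 2 x 10^{-6}, taken from high-dose-rate groups, was adopted for background dose rates without any supporting evidence.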
So Professor Lewis has no evidence whatsoever that his data from human beings exposed to high dose rates also applied to low dose rates like background radiation. He merely assumed this was the case, without evidence or explanation or mechanism, and used this assumption to make some fanciful and uncheckable calculations, such as the guess that 10-20% of natural leukemias are caused by background radiation.
On page 79, Dr Loutit challenges all of Professor Lewis' assumptions, pointing out for example that the effect of age at exposure and the effects of dose rate are being totally ignored by Lewis:
"All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis's work as others have done, 'It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation' [Leading article in Science, v125, p963, 1957]. This is just wishful journalese.
"The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different."
This conclusion about the importance of dose rate has been totally ignored, and Lewis' fiddles based only on dose have been continued ever since. No wonder the data from groups exposed to similar doses (but at different dose rates) remain in conflict. It's no mystery. There's nothing unknown about radiation. It's just censorship and officialdom enforcing confusion, as shown by that 1957 editorial in Science, mentioned above. It's straightforward to see that induction of cancer depends to some extent on the saturation of the P53 repair mechanism (for damage to DNA) due to radiation dose rate. This factor is completely ignored in Lewis' linear, no-threshold (LNT) model based on the faulty early data available in 1957.
Notice that the dose rate varied with distance in Hiroshima and Nagasaki: since the duration of exposure did not vary as rapidly with distance from ground zero as the dose did, the casualties nearer to ground zero received their doses at higher dose rates.
W. L. Chen, Y. C. Luan, M. C. Shieh, S. T. Chen, H. T. Kung, K. L. Soong, Y. C. Yeh, T. S. Chou, S. H. Mong, J. T. Wu, C. P. Sun, W. P. Deng, M. F. Wu, and M. L. Shen, ‘Is Chronic Radiation an Effective Prophylaxis Against Cancer?’, published in the Journal of American Physicians and Surgeons, Vol. 9, No. 1, Spring 2004, page 6, available in PDF format here:
‘An extraordinary incident occurred 20 years ago in Taiwan. Recycled steel, accidentally contaminated with cobalt-60 ([low dose rate, low-LET gamma radiation emitter] half-life: 5.3 y), was formed into construction steel for more than 180 buildings, which 10,000 persons occupied for 9 to 20 years. They unknowingly received radiation doses that averaged 0.4 Sv, a collective dose of 4,000 person-Sv. Based on the observed seven cancer deaths, the cancer mortality rate for this population was assessed to be 3.5 per 100,000 person-years. Three children were born with congenital heart malformations, indicating a prevalence rate of 1.5 cases per 1,000 children under age 19.
'The average spontaneous cancer death rate in the general population of Taiwan over these 20 years is 116 persons per 100,000 person-years. Based upon partial official statistics and hospital experience, the prevalence rate of congenital malformation is 23 cases per 1,000 children. Assuming the age and income distributions of these persons are the same as for the general population, it appears that significant beneficial health effects may be associated with this chronic radiation exposure. ...’
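The headline figures in the quoted Taiwan study can be checked against each other from the numbers given (taking the upper end, roughly 20 years, of the quoted 9-20 year occupancy range):

```python
# Figures from the quoted paper by Chen et al.
persons = 10_000
occupancy_years = 20          # upper end of the quoted 9-20 year range
cancer_deaths = 7
collective_dose = 4_000.0     # person-sieverts
spontaneous_rate = 116.0      # cancer deaths per 100,000 person-years (Taiwan)

average_dose_sv = collective_dose / persons          # quoted as 0.4 Sv
person_years = persons * occupancy_years
mortality_per_100k = cancer_deaths / person_years * 100_000

print(average_dose_sv)      # 0.4 Sv average dose, as quoted
print(mortality_per_100k)   # 3.5 per 100,000 person-years, as quoted
print(mortality_per_100k / spontaneous_rate)  # about 3% of the spontaneous rate
```

The quoted 3.5 per 100,000 person-years follows only if close to the full 20-year occupancy is assumed for everyone, which is one reason the age-distribution caveat discussed below matters.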
The statistics in the paper by Chen and others have been alleged to apply to a younger age group than the general population, which affects the significance of the data, although in other ways the data are more valid than extrapolations of the Hiroshima and Nagasaki data down to low doses. For instance, the survivors of high doses in Hiroshima and Nagasaki were subjected to radiation cancer scare-mongering, which prejudiced the statistics by making any blinding impossible and inviting an “anti-placebo” effect, e.g. increased fear, psychological stress and worry about the long term effects of radiation, and the behaviour associated with them. The 1958 book about the Hiroshima and Nagasaki survivors, “Formula for Death”, makes the point that highly irradiated survivors often smoked more, in the belief that they were doomed to die from radiation induced cancer anyway. Therefore, the fear culture among the irradiated survivors would statistically be expected to produce deviations from normal behaviour, in some cases increasing the cancer risks above those due purely to radiation exposure.
For up-to-date data and literature discussions on the effects of DNA repair enzymes on preventing cancers from low-dose rate radiation, please see
http://en.wikipedia.org/wiki/Radiation_hormesis
‘What is Science?’ by Richard P. Feynman, presented at the fifteenth annual meeting of the National Science Teachers Association, 1966 in New York City, and published in The Physics Teacher, vol. 7, issue 6, 1968, pp. 313-20:
‘... great religions are dissipated by following form without remembering the direct content of the teaching of the great leaders. In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers.
‘We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on, but these do not thereby become established science, established knowledge. They are merely an imitative form of science analogous to the South Sea Islanders’ airfields--radio towers, etc., made out of wood. The islanders expect a great airplane to arrive. They even build wooden airplanes of the same shape as they see in the foreigners' airfields around them, but strangely enough, their wood planes do not fly. The result of this pseudoscientific imitation is to produce experts, which many of you are. ... you teachers, who are really teaching children at the bottom of the heap, can maybe doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.’
Protein P53, discovered only in 1979, is encoded by gene TP53, which occurs on human chromosome 17. P53 also occurs in other mammals including mice, rats and dogs. P53 is one of the proteins which continually repairs breaks in DNA, which easily breaks at body temperature due to free radicals produced naturally in various ways and also as a result of ionisation caused by radiation hitting water and other molecules in the body. Cancer occurs when several breaks in DNA happen to occur by chance at nearly the same time, giving several loose ends which P53 repairs incorrectly, causing a mutation. This cannot occur when only one break occurs, because only two loose ends are produced, and P53 will reattach them correctly. If low-LET ionising radiation levels are increased to a certain extent, causing more single strand breaks, P53 works faster and is able to deal with breaks as they occur, so that multiple broken strand ends do not arise. This prevents DNA strands being repaired incorrectly, and prevents cancer - a result of mutation caused by faults in DNA - from arising. Too much radiation of course overloads the P53 repair mechanism, and then it cannot repair breaks as they occur, so multiple breaks begin to appear and loose ends of DNA are wrongly connected by P53, causing an increased cancer risk.
1. DNA-damaging free radicals are equivalent to a source of sparks which is always present naturally.
2. Cancer is equivalent to the fire you get if the sparks are allowed to ignite the gasoline, i.e. if the free radicals are allowed to damage DNA without the damage being repaired.
3. Protein P53 is equivalent to a fire suppression system which is constantly damping out the sparks, or repairing the damaged DNA so that cancer doesn't occur.
In this way of thinking, the ‘cause’ of cancer will be down to a failure of a gene like P53 to repair the damage.
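The saturation idea running through the discussion above can be reduced to a deliberately crude toy model. This is an illustration of the *logic* only (fixed repair capacity, excess counted as misrepair), not a biological calculation, and all the numbers are invented:

```python
def misrepaired_breaks(break_rate, repair_capacity, hours):
    """Toy model: `break_rate` DNA breaks arrive per hour; the repair
    machinery correctly fixes at most `repair_capacity` per hour; any
    excess accumulates as simultaneous loose ends and is counted here
    as misrepaired (mutation risk)."""
    excess_per_hour = max(0.0, break_rate - repair_capacity)
    return excess_per_hour * hours

# The same total number of breaks (1,000 in each case), delivered at
# different rates, with an assumed repair capacity of 5 breaks/hour:
fast = misrepaired_breaks(break_rate=50.0, repair_capacity=5.0, hours=20)
slow = misrepaired_breaks(break_rate=1.0, repair_capacity=5.0, hours=1000)
print(fast, slow)  # 900.0 misrepairs at the high rate, 0.0 at the low rate
```

Even this caricature reproduces the qualitative shape of Mole's result: identical total insult, radically different outcome depending on the rate at which it is delivered, with a threshold-like regime below the repair capacity.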
‘Professor Edward Lewis used data from four independent populations exposed to radiation to demonstrate that the incidence of leukemia was linearly related to the accumulated dose of radiation. ... Outspoken scientists, including Linus Pauling, used Lewis’s risk estimate to inform the public about the danger of nuclear fallout by estimating the number of leukemia deaths that would be caused by the test detonations. In May of 1957 Lewis’s analysis of the radiation-induced human leukemia data was published as a lead article in Science magazine. In June he presented it before the Joint Committee on Atomic Energy of the US Congress.’ – Abstract of thesis by Jennifer Caron, Edward Lewis and Radioactive Fallout: the Impact of Caltech Biologists Over Nuclear Weapons Testing in the 1950s and 60s, Caltech, January 2003.
Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), discrediting the pseudo-science from geneticist Edward Lewis on pages 61 and 78-79:
‘... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls. ...
‘What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 x 10^{-6} like the others in the table. But the value of 2 x 10^{-6} was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year. ...
‘All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis’s work as others have done, “It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation” [Leading article in Science, v125, p963, 1957]. This is just wishful journalese.
‘The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different.’
Tragically, the entire health physics industry has conned itself for political reasons into ignoring the vitally important effect of dose rate, and all their current radiation dosimeters just measure the accumulated dose without accurately determining the effective dose rate. It is the dose rate which determines whether DNA repair mechanisms can cope with the rate at which damage occurs (or even be stimulated to greater activity and positive benefit), or whether the rate at which radiation damage occurs is sufficient to saturate the natural repair mechanisms.
All future radiation dosimeters must incorporate some kind of integral to determine the effective dose rate at which doses are accumulated. A Fourier-spectrum type record (showing the proportion of the dose received as a function of dose rate) is needed in electronic dosimeters because, in real life, dose rates are never constant but vary with time as the distance to contamination and shielding vary, and as radioactive decay occurs.
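The proposed record (the fraction of the total dose received in each dose-rate band, which is a histogram over dose rate rather than a literal Fourier transform) can be sketched as follows. The log format and band edges are invented for illustration:

```python
from collections import defaultdict

def dose_rate_spectrum(samples, bands):
    """Fraction of the total dose received within each dose-rate band.

    samples: list of (dose_rate_rad_per_hour, hours) intervals from a
             dosimeter log (invented format).
    bands:   list of (low, high) dose-rate bin edges, low inclusive.
    """
    dose_in_band = defaultdict(float)
    total = 0.0
    for rate, hours in samples:
        dose = rate * hours
        total += dose
        for lo, hi in bands:
            if lo <= rate < hi:
                dose_in_band[(lo, hi)] += dose
                break
    return {band: dose / total for band, dose in dose_in_band.items()}

# Example log: mostly slow background, a short medium burst, a brief spike.
log = [(0.5, 100.0), (2.0, 10.0), (80.0, 0.5)]       # rad/hour, hours
bands = [(0.0, 1.0), (1.0, 10.0), (10.0, 100.0)]
spectrum = dose_rate_spectrum(log, bands)
print(spectrum)
```

Such a record preserves exactly the information a dose-only dosimeter throws away: in the example, over a third of the total dose arrives in a half-hour spike at 80 rad/hour, which a single accumulated-dose figure would conceal.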
All radioactive materials decay exponentially unless they are being formed from the decay of something else in a decay chain, such as many fission products and the decay chains of actinides like uranium and plutonium.
But for the case of simple exponential decay, the mathematical exponential decay law predicts that the dose rate never reaches zero, so effective dose rate for exposure to an exponentially decaying source needs clarification: taking an infinite exposure time will obviously underestimate the dose rate regardless of the total dose, because any dose divided into an infinite exposure time will give a false dose rate of zero.
Part of the problem here is that the exponential decay curve is false: it is based on calculus for continuous variations, and doesn't apply to radioactive decay which isn't continuous but is a discrete phenomenon. This mathematical failure undermines the interpretation of real events in quantum mechanics and quantum field theory, because discrete quantized fields are being falsely approximated by the use of the calculus, which ignores the discontinuous (lumpy) changes which actually occur in quantum field phenomena, e.g., as Dr Thomas Love of California State University points out, the 'wavefunction collapse' in quantum mechanics when a radioactive decay occurs is a mathematical discontinuity due to the use of continuously varying differential field equations to represent a discrete (discontinuous) transition!
Alpha radioactive decay occurs when an alpha particle undergoes quantum tunnelling to escape from the nucleus through a 'field barrier' which should confine it perfectly, according to classical physics. But as Professor Bridgman explains, the classical field law falsely predicts a definite sharp limit on the distance of approach of charged particles, which is not observed in reality (in the real world, there is a more gradual decrease). The explanation for alpha decay and 'quantum tunnelling' is not that the mathematical laws are perfect and nature is 'magical and beyond understanding', but simply that the differential field law is just a statistical approximation and wrong at the fundamental level: electromagnetic forces are not continuous and steady on small scales, but are due to chaotic, random exchange radiation, which only averages out and approaches the mathematical 'law' over long distances or long times. Forces are actually produced by lots of little particles, quanta, being exchanged between charges.
On large scales, the effect of all these little particles averages out to appear like Coulomb's simple law, just as on large scales, air pressure can appear steady, when in fact on small scales it is a random bombardment of air molecules which causes Brownian motion. On small scales, such as the distance between an alpha particle and other particles in the nucleus, the forces are not steady but fluctuate as the field quanta are randomly and chaotically exchanged between the nucleons. Sometimes the field is stronger and sometimes weaker than the potential predicted by the mathematical law. When the field confining the alpha particle is weaker, the alpha particle may be able to escape, so there is no magic to 'quantum tunnelling'. Therefore, radioactive decay only obeys the smooth exponential decay law as a statistical approximation for large numbers of decaying atoms. In general the exponential decay 'law' is an approximation: for a nuclide of short half-life, all the radioactive atoms in a finite sample decay after a finite time, and the prediction of that 'law' that radioactivity continues forever is false. Richard P. Feynman explains in his book, QED, Penguin, 1990, pp. 55-6, and 84:
'I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas ... But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, "Your old-fashioned ideas are no damn good when ...". If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no need for an uncertainty principle! ... on a small scale, such as inside an atom, the space is so small that there is no main path, no "orbit"; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important ...'
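The contrast drawn above between the continuous exponential law and the discrete reality can be illustrated with a toy Monte Carlo: simulate each atom as decaying with a fixed probability per time step, and a finite sample always empties in a finite number of steps, whereas exp(-kt) never reaches zero. The parameters are arbitrary:

```python
import random

def monte_carlo_decay(n_atoms, decay_prob_per_step, seed=2):
    """Simulate discrete radioactive decay: each surviving atom decays
    with probability `decay_prob_per_step` in each time step. Returns
    the (finite) number of steps until no atoms remain."""
    rng = random.Random(seed)
    remaining = n_atoms
    steps = 0
    while remaining > 0:
        decayed = sum(1 for _ in range(remaining)
                      if rng.random() < decay_prob_per_step)
        remaining -= decayed
        steps += 1
    return steps

steps = monte_carlo_decay(1000, 0.1)
print(steps)  # a finite number; the continuous law predicts activity forever
```

For large samples the simulated survival curve tracks the exponential closely, which is precisely the sense in which the exponential law is a statistical approximation rather than an exact description.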
There is a stunning lesson from human 'groupthink' arrogance today that Feynman's fact-based physics is still censored out by mainstream string theory, despite the success of path integrals based on this field quanta interference mechanism! The entire mainstream modern physics waggon has ignored Feynman's case for simplicity and understanding what is known for sure, and has gone off in the other direction (magical unexplainable religion) and built up a 10 dimensional superstring model whose conveniently 'explained' Calabi-Yau compactification of the unseen 6 dimensions can take 10^500 different forms (conveniently explained away as a 'landscape' of unobservable parallel universes, from which ours is picked out using the anthropic principle that because we exist, the values of fundamental parameters we observe must be such that they allow our existence) to combine a non-experimentally justifiable speculation about forces unifying at the Planck scale, with another non-experimentally justifiable speculation that gravity is mediated by spin-2 particles which only exchange between the two masses in your calculation, and somehow avoid exchanging with the way bigger masses in the surrounding universe (when you include in your path integral the fact that exchange gravitons coming from distant masses will be converging inwards towards an apple and the earth, it turns out that this exchange radiation with distant masses actually predominates over the local exchange and pushes the apple to the earth, so gravitons can be deduced to be spin-1 not spin-2; this makes checkable predictions and tells us exactly how quantum gravity fits into the electroweak symmetry of the Standard Model, altering the usual interpretation, and radically changing the nature of electroweak symmetry breaking from the usual poorly predictive mainstream Higgs field).
IMPORTANT NOTICE ON LINEAR ENERGY TRANSFER (LET) AND ITS CONSEQUENCES FOR DIFFERING EFFECTS FROM ALPHA, BETA AND GAMMA RADIATIONS:
Just in case anyone ignorant of the basics of radiation reads this post, it should be explained that the data and conclusions given in this post apply to gamma and also neutron radiation, both of which are ELECTRICALLY NEUTRAL PARTICLES, which is the major threat from nuclear explosions. Because gamma rays and neutrons are UNCHARGED, they tend to be weakly ionizing (i.e., they penetrate easily, and deposit energy only in discrete events such as occasional collisions with orbital electrons and nuclei of atoms, e.g., the Compton effect for gamma rays striking electrons). Neutrons however can emulate charged radiations when they hit protons (hydrogen nuclei): the protons are electrically charged and can carry off much of the energy, behaving almost like alpha particles and causing 20 times as much damage as gamma rays for every unit of dose (Joule per kilogram). This correction factor of 20 for neutrons is termed the relative biological effectiveness (RBE). However, low-energy (well scattered or 'thermalized') neutrons are unlikely to be scattered by protons, and instead are captured by protons to form heavy hydrogen (deuterium); in this 'radiative capture' process the surplus energy is released in the form of gamma rays.
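The RBE weighting described above amounts to a simple multiplication of absorbed dose by a quality factor. A minimal sketch, using only the factor of 20 for fast neutrons quoted in the paragraph and the implied factor of 1 for gamma rays:

```python
# Relative biological effectiveness (RBE) factors: gamma = 1 (reference),
# fast neutrons = 20, as quoted in the text above.
RBE = {"gamma": 1.0, "fast_neutron": 20.0}

def equivalent_dose(absorbed_dose_gray, radiation):
    """Weight an absorbed dose (gray, i.e. J/kg) by the RBE of the
    radiation type to estimate biological damage."""
    return absorbed_dose_gray * RBE[radiation]

# The same 0.01 Gy (same joules per kilogram deposited) is weighted
# 20 times more heavily when delivered by fast neutrons:
print(equivalent_dose(0.01, "gamma"))         # 0.01
print(equivalent_dose(0.01, "fast_neutron"))  # 0.2
```

Note that this quality factor is, as the earlier part of the post argues, the *only* modifier the orthodox system applies beyond total dose; no analogous factor exists for dose rate.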
The weakly ionizing nature of gamma rays means that they deposit relatively little energy per unit length of their path through living matter, so they are LOW-LET (linear energy transfer) radiations. This is not the case with the ELECTRICALLY CHARGED alpha and beta particles from internal emitters, such as iodine-131 in the thyroid glands of people drinking, without any precautions, milk from cattle eating contaminated pasture grass. This occurred in Russia, which did not issue potassium iodide tablets to civilians after the Chernobyl disaster, but not in Poland, where the tablets were issued. (Ion exchange removes iodine-131 from milk, as does turning it into cheese and storing it, because of the short 8-day half-life of iodine-131; and if the predicted dose is above 25 cGy, the excess risk is negated by taking 130-milligram potassium iodide tablets to prevent the uptake of iodine-131.) Electrically charged alpha and beta particles in the body are stopped over a very small path length in tissue, and so they deposit all of their energy in that small amount of tissue. This means that a single alpha or beta particle can potentially saturate the P53 DNA repair mechanism in a single cell nucleus and cause a cancer risk: hence alpha and beta particles, because of their electrical charge, are HIGH-LET radiations, which probably have no threshold for effects and are dangerous at all doses. On the positive side, this high-LET nature of alpha and beta particles means that they are not very penetrating, so they cause relatively little risk as long as you don't ingest or inhale contamination. Protective clothing and respirators can totally negate the risks from alpha and beta radiations during decontamination work after fallout. A report on beta skin burns from fallout on the Marshallese in 1954 is here, and calculations of the length of time over which fallout can cause beta burns when deposited on skin are reported here in the post about Dr Carl F. Miller's excellent fallout research. For proof of U.S. Department of Defense fallout exaggerations and the continuing cover-up of the limited extent of nuclear test fallout during the Cold War, see this post. For underwater burst contamination see this post. For data on the efficiency of decontamination, see this post, as well of course as the post about Dr Miller's work.