Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), which examines in detail the evidence, as it stood 45 years ago, for radiation-induced leukemia.
Obviously at that time the human data collected from Hiroshima and Nagasaki were far from complete, for although the peak radiation-induced leukemia rate occurred in 1951/52, some 6-7 years after exposure, the latent period was much longer in many cases. In any case, the doses which the survivors received were poorly known. Today, even the shielded doses (the calculation of radiation shielding is vital for those who survived in brick or concrete buildings around ground zero) are known quite well, with a standard deviation of about +/-30%.
These aren't just theoretical computer calculations. With very sensitive instruments and long counting periods, it is possible to measure neutron induced activity in iron from irradiated buildings in Japan, and assess the neutron exposure, while gamma ray energy stored in ceramics like roof tiles can be released as light by heating them. It's possible to do this in a calibrated way (e.g., you can measure the light emitted on heating and then irradiate the same sample with a known dose of radiation, and repeat the process, so that the comparison calibrates the original radiation dose to the material), and after allowing for background radiation you can find out the gamma ray doses from the bomb.
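To make that calibration arithmetic concrete, here is a minimal sketch with invented numbers (not real Hiroshima tile measurements; the background rate and tile age are assumptions for illustration):

```python
# Illustrative only: invented numbers, not measured values.
light_original = 4.2e4      # light signal (arbitrary units) from first heating
light_calibration = 1.0e4   # signal after re-irradiating the sample with a known dose
dose_calibration = 10.0     # known laboratory calibration dose (rads)

# The stored signal is proportional to dose, so the ratio of the two signals
# scales the known calibration dose up to the total accumulated dose.
dose_total = dose_calibration * (light_original / light_calibration)  # 42 rads

# Subtract the natural background accumulated since the tile was fired.
background_rate = 0.1       # rads/year from natural sources (assumed)
years_since_firing = 40.0   # assumed age of the tile
dose_background = background_rate * years_since_firing               # 4 rads

dose_bomb = dose_total - dose_background
print(f"Reconstructed bomb gamma dose: {dose_bomb:.0f} rads")        # 38 rads
```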
Because in 1962 there was little useful human data, extensive experiments were made on animals, in particular mice. Hence the title of Dr Loutit's book, Irradiation of Mice and Men.
What caught my eye was the section on pages 61-82 on factors relating to leukemia. On page 61 he states:
"... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls."
So for a fixed dose, 1,000 R spread over a month (which is far less lethal in short-term effects than the same dose spread over a few seconds, as occurs with initial radiation in a nuclear explosion, or over a few days, when most of the fallout dose is delivered), the leukemia rate can vary from 5-40% as the dose rate varies from 1.3-81 r/hour.
This does illustrate that the effects, even long-term effects, don't depend merely on the dose, but also upon the dose rate. There is a kind of religion upon which Health Physics has been based since about 1956, which states that the long-term effects of radiation depend linearly upon the total dose.
The linear non-threshold (LNT) anti-civil defence dogma results from ignoring the vitally important effects of the dose rate on cancer induction, which have been known and published in papers by Mole and a book by Loutit for about 50 years; the current dogma is falsely based on the total dose alone, thus ignoring the time-dependent ability of protein P53 and other cancer-prevention mechanisms to repair broken DNA segments. This is particularly the case for double strand breaks, where the whole double helix gets broken; the repair of single strand breaks is less time-dependent because there is no risk of the broken single strand being joined to the wrong end of a broken DNA segment. Repair is only successful in preventing cancer if the broken ends are repaired correctly before too many unrepaired breaks have accumulated in a short time; if too many double strand breaks occur quickly, segments can be incorrectly 'repaired' with double strand breaks being mismatched to the wrong segment ends, possibly inducing cancer if the resulting somatic cell can then undergo division successfully without apoptosis.
The only modifications officially included for other factors are a set of radio-sensitivity factors for different organs (those with fast-dividing cells are of course the most sensitive to radiation, since their DNA is highly vulnerable - while dividing - more of the time than that of other cells), and a quality factor for the type and energy of radiation (the amount of energy deposited in tissue per unit length of the track of a particle is the "linear energy transfer" or LET, and the higher the LET, the more ionisation-type disruption a single particle causes to tissue while passing). In general, alpha radiation (helium nuclei, massive and highly charged) inside the body is high-LET radiation, so even a single alpha particle carries a cancer risk regardless of the dose rate, and the cancer risk is simply proportional to dose as orthodoxy says.
But for gamma and X-rays, which are low-LET radiation, the amount of ionization per unit length of the particle path is very small. Beta particle radiation is intermediate between alpha and gamma, and its effects are very dependent on the exact energy of the beta particles (strontium-90 emits high energy beta particles which can penetrate thin metal foils, for example, while the very low energy beta particles from carbon-14 are stopped by a sheet of paper, like alpha particles). In general, gamma and X-ray radiation probably require multiple hits on a cell to overwhelm the DNA repair mechanism, so the dose rate is important.
Yet radiation dose-effects assessments include no allowance for the effect of dose rate! It has been known for about twenty years that protein P53 repairs most breaks in DNA due to radiation if the dose rate is low, but can be saturated and overwhelmed at high dose rates.
Hence, there is a known mechanism by which low dose rates are less likely to cause cancer than high dose rates.
Now the entire controversy on radiation effects hinges on the effects of low-LET gamma and X-rays received at different dose rates, which are ignored in favour of just quoting total doses and trying to correlate those total doses to the statistical effects observed. Why? Well, it is administratively convenient just to record one numerical value - dose. Having to work out both the dose and the dose rate, and to set limits according to some combination, would have been difficult in the past (it probably is not difficult today, because modern electronic dosimeters measure dose rates and doses, and could easily be programmed to record the mean dose rate at which the dose was received).
The data from Hiroshima and Nagasaki, and also for most X-ray and gamma ray medical treatment of people, applies to relatively high dose rates. You would expect this data to show more serious effects than similar doses received at lower dose rates.
There is also some effect of age at exposure. Dr Alice Stewart and associates made the discovery (Brit. Med. J., v1, p1495, 1958, and v1, p452, 1961), after analysing the pre-natal X-ray exposures of all children who died from cancers, particularly leukemia, before age 10 in England and Wales between 1953 and 1955, that those who had been X-rayed in utero had an overall risk of dying from cancer before age 10 of 1/600, compared to 1/1,200 for non-exposed children. For children who had been exposed to X-rays while in utero, the peak leukemia and other cancer risk occurred at the age of 3, a shorter latent interval than for survivors of Hiroshima and Nagasaki. (The actual average X-ray dose received by each child in utero was estimated as 1-5 rads on page 82 of Loutit's Irradiation of Mice and Men.)
In Irradiation of Mice and Men, Loutit discusses a serious public relations problem caused by Professor E. B. Lewis, author of Leukemia and Ionizing Radiation, Science 17 May 1957, v125, No. 3255, pp. 965-72. Dr Loutit describes Lewis on page 78 as "... a geneticist of great renown...".
The problem is that Professor Lewis was largely responsible for ignoring dose rate effects. In his 1957 paper Lewis calculated the probability of inducing leukemia per individual per rad per year, getting 0.000,002 both for radiologists (whose doses were somewhat uncertain, due to inaccurate dosimetry until the 1950s) and for whole-body exposure of atomic bomb survivors (a very uncertain figure in 1957, due partly to the fact that most cancers had not yet appeared, and partly to the massive uncertainty in the dosimetry for atomic bomb survivors at that time), and 0.000,001 for patients whose spines were irradiated for ankylosing spondylitis and patients whose chests were irradiated for thymic enlargement.
Dr Loutit points out that all this data of Lewis' is for high dose rates. The problem is, Loutit writes on page 78 of Irradiation of Mice and Men:
'What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 x 10^{-6} like the others in the table. But the value of 2 x 10^{-6} was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year.'
So Professor Lewis has no evidence whatsoever that his data from human beings exposed to high dose rates also applied to low dose rates like background radiation. He merely assumed this was the case, without evidence or explanation or mechanism, and used this assumption to make some fanciful and uncheckable calculations, such as the guess that 10-20% of natural leukemias are caused by background radiation.
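To check the arithmetic in that deduction, here is a minimal sketch (my reconstruction of the multiplication Loutit describes, using only the figures in the quotation):

```python
# Reproducing Lewis's Brooklyn deduction as quoted by Loutit.
risk_per_rad_per_year = 2e-6   # the adopted (not calculated) leukemia coefficient
mean_age_years = 33.7          # average age of Brooklyn residents
background = (0.1, 0.2)        # natural background dose rate, rad/year

for rate in background:
    # annual mortality per person = coefficient x accumulated rads (age x rate)
    mortality = risk_per_rad_per_year * mean_age_years * rate
    print(f"{rate} rad/yr -> {mortality * 1e6:.1f} cases per million per year")

# 0.1 rad/yr -> 6.7 and 0.2 rad/yr -> 13.5: Lewis's "7 to 13 cases per million
# per year", i.e. 10-20% of the observed 65 cases per million per year.
```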
On page 79, Dr Loutit challenges all of Professor Lewis' assumptions, pointing out for example that the effect of age at exposure and the effects of dose rate are being totally ignored by Lewis:
"All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis's work as others have done, 'It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation' [Leading article in Science, v125, p963, 1957]. This is just wishful journalese.
"The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different."
This conclusion about the importance of dose rate has been totally ignored, and Lewis' fiddles based only on dose have been continued ever since. No wonder the data from groups exposed to similar doses (but at different dose rates) remain in conflict. It's no mystery. There's nothing unknown about radiation. It's just censorship and officialdom enforcing confusion, as shown by that 1957 editorial in Science, mentioned above. It's straightforward to see that induction of cancer depends to some extent on the saturation of the P53 repair mechanism (for damage to DNA) due to radiation dose rate. This factor is completely ignored in Lewis' linear, no-threshold (LNT) model based on the faulty early data available in 1957.
Notice that the dose rate varied with distance in Hiroshima and Nagasaki: since the duration of exposure did not vary as rapidly with distance from ground zero as the dose did, the casualties nearer to ground zero received their doses at higher dose rates.
W. L. Chen, Y. C. Luan, M. C. Shieh, S. T. Chen, H. T. Kung, K. L. Soong, Y. C. Yeh, T. S. Chou, S. H. Mong, J. T. Wu, C. P. Sun, W. P. Deng, M. F. Wu, and M. L. Shen, ‘Is Chronic Radiation an Effective Prophylaxis Against Cancer?’, published in the Journal of American Physicians and Surgeons, Vol. 9, No. 1, Spring 2004, page 6, available in PDF format here:
‘An extraordinary incident occurred 20 years ago in Taiwan. Recycled steel, accidentally contaminated with cobalt-60 ([low dose rate, low-LET gamma radiation emitter] half-life: 5.3 y), was formed into construction steel for more than 180 buildings, which 10,000 persons occupied for 9 to 20 years. They unknowingly received radiation doses that averaged 0.4 Sv, a collective dose of 4,000 person-Sv. Based on the observed seven cancer deaths, the cancer mortality rate for this population was assessed to be 3.5 per 100,000 person-years. Three children were born with congenital heart malformations, indicating a prevalence rate of 1.5 cases per 1,000 children under age 19.
'The average spontaneous cancer death rate in the general population of Taiwan over these 20 years is 116 persons per 100,000 person-years. Based upon partial official statistics and hospital experience, the prevalence rate of congenital malformation is 23 cases per 1,000 children. Assuming the age and income distributions of these persons are the same as for the general population, it appears that significant beneficial health effects may be associated with this chronic radiation exposure. ...’
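The quoted rate is easy to check from the figures given (a sketch only; occupancy actually ranged from 9 to 20 years, so the 20-year person-years figure is an upper bound):

```python
persons = 10_000
years = 20                  # upper bound; occupancy was 9-20 years
cancer_deaths = 7
mean_dose_sv = 0.4

person_years = persons * years                   # ~200,000 person-years
rate = cancer_deaths / person_years * 100_000    # per 100,000 person-years
collective_dose = persons * mean_dose_sv         # person-sieverts

print(f"Observed rate: {rate:.1f} per 100,000 person-years")   # 3.5
print(f"Collective dose: {collective_dose:.0f} person-Sv")     # 4,000

# Compare with Taiwan's spontaneous rate of 116 per 100,000 person-years:
expected = 116 / 100_000 * person_years
print(f"Expected spontaneous deaths: {expected:.0f} vs {cancer_deaths} observed")
```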
The statistics in the paper by Chen and others have been alleged to apply to a younger age group than the general population, affecting the significance of the data, although in other ways the data are more valid than extrapolations of Hiroshima and Nagasaki data to low doses. For instance, the survivors of high doses at Hiroshima and Nagasaki knew they had been irradiated, so the radiation cancer scare-mongering prejudiced that data in the sense that no 'blind' was possible to exclude an 'anti-placebo' effect: increased fear, psychological stress and worry about the long-term effects of radiation, and the behaviour associated with that fear. The 1958 book about the Hiroshima and Nagasaki survivors, “Formula for Death”, makes the point that highly irradiated survivors often smoked more, in the belief that they were doomed to die from radiation-induced cancer anyway. Therefore, the fear culture of the irradiated survivors would statistically be expected to result in a deviation from normal behaviour, in some cases increasing the cancer risks above those due purely to radiation exposure.
For up-to-date data and literature discussions on the effects of DNA repair enzymes on preventing cancers from low-dose rate radiation, please see
http://en.wikipedia.org/wiki/Radiation_hormesis
‘What is Science?’ by Richard P. Feynman, presented at the fifteenth annual meeting of the National Science Teachers Association, 1966 in New York City, and published in The Physics Teacher, vol. 7, issue 6, 1968, pp. 313-20:
‘... great religions are dissipated by following form without remembering the direct content of the teaching of the great leaders. In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers.
‘We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on, but these do not thereby become established science, established knowledge. They are merely an imitative form of science analogous to the South Sea Islanders’ airfields--radio towers, etc., made out of wood. The islanders expect a great airplane to arrive. They even build wooden airplanes of the same shape as they see in the foreigners' airfields around them, but strangely enough, their wood planes do not fly. The result of this pseudoscientific imitation is to produce experts, which many of you are. ... you teachers, who are really teaching children at the bottom of the heap, can maybe doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.’
Protein P53, discovered only in 1979, is encoded by gene TP53, which occurs on human chromosome 17. P53 also occurs in other mammals including mice, rats and dogs. P53 is one of the proteins which continually repair breaks in DNA, which easily breaks at body temperature due to free radicals produced naturally in various ways and also as a result of ionisation caused by radiation hitting water and other molecules in the body. Cancer occurs when several breaks in DNA happen to occur by chance at nearly the same time, giving several loose ends which P53 repairs incorrectly, causing a mutation. This cannot occur when only one break occurs, because only two loose ends are produced, and P53 will reattach them correctly. If low-LET ionising radiation levels are increased to a certain extent, causing more single strand breaks, P53 works faster and is able to deal with breaks as they occur, so that multiple broken strand ends do not arise. This prevents DNA strands being repaired incorrectly, and prevents cancer - a result of mutation caused by faults in DNA - from arising. Too much radiation of course overloads the P53 repair mechanism, and then it cannot repair breaks as they occur, so multiple breaks begin to appear and loose ends of DNA are wrongly connected by P53, causing an increased cancer risk.
1. DNA-damaging free radicals are equivalent to a source of sparks which is always present naturally.
2. Cancer is equivalent to the fire you get if the sparks are allowed to ignite the gasoline, i.e. if the free radicals are allowed to damage DNA without the damage being repaired.
3. Protein P53 is equivalent to a fire suppression system which is constantly damping out the sparks, or repairing the damaged DNA so that cancer doesn't occur.
In this way of thinking, the ‘cause’ of cancer will be down to a failure of a gene like P53 to repair the damage.
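The saturation idea behind this analogy can be illustrated with a deliberately crude toy model (my own invention for illustration only: the break rates, the repair capacity of one break per time step, and the misrepair rule are arbitrary assumptions, not biological data). Breaks arrive at random, a fixed repair capacity clears them, and a misrepair opportunity exists whenever two or more loose ends are outstanding at once:

```python
import random

def misrepair_fraction(break_rate, repair_capacity=1, steps=100_000):
    """Toy model: random DNA breaks per time step, fixed repair capacity.
    A 'misrepair' opportunity is counted whenever two or more breaks
    are outstanding simultaneously (loose ends can be rejoined wrongly)."""
    backlog = 0
    misrepair_chances = 0
    for _ in range(steps):
        # new breaks this step (crude binomial stand-in for a Poisson process)
        backlog += sum(1 for _ in range(10) if random.random() < break_rate / 10)
        if backlog >= 2:
            misrepair_chances += 1
        backlog = max(0, backlog - repair_capacity)  # repair up to capacity
    return misrepair_chances / steps

random.seed(1)
for rate in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"break rate {rate:>4}: misrepair opportunity in "
          f"{misrepair_fraction(rate):.1%} of steps")

# At low break rates the backlog almost never reaches 2 (repairs keep pace);
# once the break rate exceeds the repair capacity the backlog grows without
# limit and misrepair opportunities dominate.
```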
‘Professor Edward Lewis used data from four independent populations exposed to radiation to demonstrate that the incidence of leukemia was linearly related to the accumulated dose of radiation. ... Outspoken scientists, including Linus Pauling, used Lewis’s risk estimate to inform the public about the danger of nuclear fallout by estimating the number of leukemia deaths that would be caused by the test detonations. In May of 1957 Lewis’s analysis of the radiation-induced human leukemia data was published as a lead article in Science magazine. In June he presented it before the Joint Committee on Atomic Energy of the US Congress.’ – Abstract of thesis by Jennifer Caron, Edward Lewis and Radioactive Fallout: the Impact of Caltech Biologists Over Nuclear Weapons Testing in the 1950s and 60s, Caltech, January 2003.
Dr John F. Loutit of the Medical Research Council, Harwell, England, in 1962 wrote a book called Irradiation of Mice and Men (University of Chicago Press, Chicago and London), which discredits this pseudo-science from geneticist Edward Lewis on pages 61 and 78-79:
‘... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in unirradiated controls. ...
‘What Lewis did, and which I have not copied, was to include in his table another group - spontaneous incidence of leukemia (Brooklyn, N.Y.) - who are taken to have received only natural background radiation throughout life at the very low dose-rate of 0.1-0.2 rad per year: the best estimate is listed as 2 x 10^{-6} like the others in the table. But the value of 2 x 10^{-6} was not calculated from the data as for the other groups; it was merely adopted. By its adoption and multiplication with the average age in years of Brooklyners - 33.7 years and radiation dose per year of 0.1-0.2 rad - a mortality rate of 7 to 13 cases per million per year due to background radiation was deduced, or some 10-20 per cent of the observed rate of 65 cases per million per year. ...
‘All these points are very much against the basic hypothesis of Lewis of a linear relation of dose to leukemic effect irrespective of time. Unhappily it is not possible to claim for Lewis’s work as others have done, “It is now possible to calculate - within narrow limits - how many deaths from leukemia will result in any population from an increase in fall-out or other source of radiation” [Leading article in Science, v125, p963, 1957]. This is just wishful journalese.
‘The burning questions to me are not what are the numbers of leukemia to be expected from atom bombs or radiotherapy, but what is to be expected from natural background .... Furthermore, to obtain estimates of these, I believe it is wrong to go to [1950s inaccurate, dose rate effect ignoring, data from] atom bombs, where the radiations are qualitatively different [i.e., including effects from neutrons] and, more important, the dose-rate outstandingly different.’
Tragically, the entire health physics industry has conned itself for political reasons into ignoring the vitally important effect of dose rate, and all their current radiation dosimeters just measure the accumulated dose without accurately determining the effective dose rate. It is the dose rate which determines whether DNA repair mechanisms can cope with the rate at which damage occurs (or even be stimulated to greater activity and positive benefit), or whether the rate at which radiation damage occurs is sufficient to saturate the natural repair mechanisms.
All future radiation dosimeters must incorporate some kind of integral to determine the effective dose rate at which doses are accumulated. A Fourier-spectrum type record (showing the proportion of the dose received as a function of dose rate) is needed in electronic dosimeters because, in real life, dose rates are never constant but vary with time as the distance to contamination and shielding vary, and as radioactive decay occurs.
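As a sketch of what such a record might look like in firmware (an illustration of the idea only, not any existing dosimeter's code; the bin edges and sampling interval are assumed design choices): sample the dose rate periodically and accumulate each dose increment into logarithmically spaced dose-rate bins, so the stored 'spectrum' shows what fraction of the total dose arrived at each dose rate.

```python
import math

# Log-spaced dose-rate band edges in r/hour (an assumed design choice).
BIN_EDGES = [10**e for e in range(-5, 4)]   # 1e-5 ... 1e3 r/hour

def record_dose(samples, interval_hours=1/60):
    """samples: sequence of instantaneous dose-rate readings (r/hour),
    taken every interval_hours. Returns the total dose and the dose
    accumulated within each dose-rate band."""
    bins = [0.0] * (len(BIN_EDGES) + 1)
    total = 0.0
    for rate in samples:
        dose = rate * interval_hours
        total += dose
        # index of the dose-rate band this increment belongs to
        i = sum(1 for edge in BIN_EDGES if rate >= edge)
        bins[i] += dose
    return total, bins

# Example: a day of background, then a brief high-dose-rate transit.
samples = [2e-5] * 1440 + [50.0] * 10
total, bins = record_dose(samples)
print(f"Total dose: {total:.3f} r")
for i, d in enumerate(bins):
    if d:
        lo = BIN_EDGES[i - 1] if i else 0
        print(f"  band >= {lo} r/hour: {d / total:.1%} of dose")
# Nearly all of the dose sits in the high dose-rate band, even though
# almost all of the *time* was spent at background rates.
```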
All radioactive materials decay exponentially unless they are being formed from the decay of something else in a decay chain, such as many fission products and the decay chains of actinides like uranium and plutonium.
But for the case of simple exponential decay, the mathematical exponential decay law predicts that the dose rate never reaches zero, so the effective dose rate for exposure to an exponentially decaying source needs clarification: averaging over an infinite exposure time will obviously underestimate the dose rate regardless of the total dose, because any finite dose divided by an infinite exposure time gives a meaningless dose rate of zero.
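One way to clarify it is to weight each increment of dose by the dose rate at which it arrived; for a simple exponentially decaying source this dose-weighted mean dose rate works out to exactly half the initial dose rate, however long the exposure. A numerical sketch (the weighting choice is my suggestion for illustration, not a standard definition from the text):

```python
import math

def dose_weighted_mean_rate(r0, half_life_hours, duration_hours, dt=0.001):
    """Numerically integrate a decaying dose rate R(t) = r0 * exp(-lambda*t),
    weighting each dose increment R*dt by the rate R at which it arrived."""
    lam = math.log(2) / half_life_hours
    dose = weighted = 0.0
    t = 0.0
    while t < duration_hours:
        r = r0 * math.exp(-lam * t)
        dose += r * dt           # increment of dose
        weighted += r * r * dt   # dose increment weighted by its dose rate
        t += dt
    return weighted / dose

# For any long exposure the answer tends to r0/2, not zero:
print(dose_weighted_mean_rate(r0=100.0, half_life_hours=1.0,
                              duration_hours=20.0))   # ~50 r/hour
```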
Part of the problem here is that the exponential decay curve is false: it is based on calculus for continuous variations, and doesn't apply to radioactive decay which isn't continuous but is a discrete phenomenon. This mathematical failure undermines the interpretation of real events in quantum mechanics and quantum field theory, because discrete quantized fields are being falsely approximated by the use of the calculus, which ignores the discontinuous (lumpy) changes which actually occur in quantum field phenomena, e.g., as Dr Thomas Love of California State University points out, the 'wavefunction collapse' in quantum mechanics when a radioactive decay occurs is a mathematical discontinuity due to the use of continuously varying differential field equations to represent a discrete (discontinuous) transition!
Alpha radioactive decay occurs when an alpha particle undergoes quantum tunnelling to escape from the nucleus through a 'field barrier' which should confine it perfectly, according to classical physics. But as Professor Bridgman explains, the classical field law falsely predicts a definite sharp limit on the distance of approach of charged particles, which is not observed in reality (in the real world, there is a more gradual decrease). The explanation for alpha decay and 'quantum tunnelling' is not that the mathematical laws are perfect and nature is 'magical and beyond understanding', but simply that the differential field law is just a statistical approximation and wrong at the fundamental level: electromagnetic forces are not continuous and steady on small scales, but are due to chaotic, random exchange radiation, which only averages out and approaches the mathematical 'law' over long distances or long times. Forces are actually produced by lots of little particles, quanta, being exchanged between charges.
On large scales, the effect of all these little particles averages out to appear like Coulomb's simple law, just as on large scales, air pressure can appear steady, when in fact on small scales it is a random bombardment of air molecules which causes Brownian motion. On small scales, such as the distance between an alpha particle and other particles in the nucleus, the forces are not steady but fluctuate as the field quanta are randomly and chaotically exchanged between the nucleons. Sometimes the field is stronger and sometimes weaker than the potential predicted by the mathematical law. When the field confining the alpha particle is weaker, the alpha particle may be able to escape, so there is no magic to 'quantum tunnelling'. Therefore, radioactive decay only follows the smooth exponential decay law as a statistical approximation, valid while large numbers of atoms remain undecayed. In general the exponential decay law is false: for a nuclide of short half-life, all the radioactive atoms decay after a finite time, so the prediction of that 'law' that radioactivity continues forever is wrong. Richard P. Feynman explains in his book, QED, Penguin, 1990, pp. 55-6, and 84:
'I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas ... But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, "Your old-fashioned ideas are no damn good when ...". If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no need for an uncertainty principle! ... on a small scale, such as inside an atom, the space is so small that there is no main path, no "orbit"; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important ...'
There is a stunning lesson from human 'groupthink' arrogance today in the fact that Feynman's fact-based physics is still censored out by mainstream string theory, despite the success of path integrals based on this field quanta interference mechanism! The entire mainstream modern physics waggon has ignored Feynman's case for simplicity and understanding what is known for sure, and has gone off in the other direction (magical unexplainable religion), building up a 10 dimensional superstring model whose conveniently 'explained' Calabi-Yau compactification of the unseen 6 dimensions can take 10^500 different forms (conveniently explained away as a 'landscape' of unobservable parallel universes, from which ours is picked out using the anthropic principle that, because we exist, the values of fundamental parameters we observe must be such that they allow our existence). This combines a non-experimentally justifiable speculation about forces unifying at the Planck scale with another non-experimentally justifiable speculation that gravity is mediated by spin-2 particles which only exchange between the two masses in your calculation, and somehow avoid exchanging with the way bigger masses in the surrounding universe. (When you include in your path integral the fact that exchange gravitons coming from distant masses will be converging inwards towards an apple and the earth, it turns out that this exchange radiation with distant masses actually predominates over the local exchange and pushes the apple to the earth, so gravitons can be deduced to be spin-1, not spin-2; this makes checkable predictions and tells us exactly how quantum gravity fits into the electroweak symmetry of the Standard Model, altering the usual interpretation and radically changing the nature of electroweak symmetry breaking from the usual poorly predictive mainstream Higgs field.)
IMPORTANT NOTICE ON LINEAR ENERGY TRANSFER (LET) AND ITS CONSEQUENCES FOR DIFFERING EFFECTS FROM ALPHA, BETA AND GAMMA RADIATIONS:
Just in case anyone ignorant of the basics of radiation reads this post, it should be explained that the data and conclusions given in this post apply to gamma and also neutron radiation, both of which are ELECTRICALLY NEUTRAL PARTICLES and constitute the major threat from nuclear explosions. Because gamma rays and neutrons are UNCHARGED, they tend to be weakly ionizing (i.e., they penetrate easily, and deposit energy only in discrete events such as occasional collisions with orbital electrons and nuclei of atoms, e.g., the Compton effect for gamma rays striking electrons). Neutrons, however, can emulate charged radiations when they hit protons (hydrogen nuclei): the protons are electrically charged and can carry off much of the energy, behaving almost like alpha particles and causing 20 times as much damage as gamma rays for every unit of dose (joule per kilogram). This correction factor of 20 for neutrons is termed the relative biological effectiveness (RBE). However, low-energy (well scattered or 'thermalized') neutrons are unlikely to be scattered by protons, and instead are captured by protons to form heavy hydrogen (deuterium); in this 'radiative capture' process the surplus energy is released in the form of gamma rays.
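Stated as arithmetic, the RBE weighting just multiplies each radiation type's absorbed dose by its factor and sums. A minimal sketch using the factor of 20 for fast neutrons quoted above (the value of 1 for gamma rays is the reference point; treat the exact table entries as assumptions to be checked against current recommendations):

```python
# RBE (quality factor) per radiation type; gamma is the reference at 1,
# fast neutrons take the factor of 20 quoted in the text above.
RBE = {"gamma": 1.0, "fast_neutron": 20.0}

def dose_equivalent(absorbed_doses_gy):
    """absorbed_doses_gy: dict of radiation type -> absorbed dose in gray
    (joules per kilogram). Returns biologically weighted dose in sieverts."""
    return sum(dose * RBE[kind] for kind, dose in absorbed_doses_gy.items())

# Example: 0.10 Gy of gamma plus only 0.01 Gy of fast neutrons
# still gives three times the gamma-only dose equivalent:
print(f"{dose_equivalent({'gamma': 0.10, 'fast_neutron': 0.01}):.2f} Sv")  # 0.30
```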
The weakly ionizing nature of gamma rays means that they deposit relatively little energy per unit length of their path through living matter, so they are LOW-LET (Linear Energy Transfer) radiations. This is not the case with the ELECTRICALLY CHARGED alpha and beta particles from internal emitters like iodine-131, which concentrates in the thyroid gland of people who, without taking any precautions, drink milk from cattle that ate contaminated pasture grass. This occurred in Russia, which didn't issue potassium iodide tablets to civilians after the Chernobyl disaster, but not in Poland, where the tablets were issued. (Ion exchange removes iodine-131 from milk, as does turning it into cheese and storing it, because of the short 8-day half-life of iodine-131; and if the predicted dose is above 25 cGy, the excess risk is negated by taking 130-milligram potassium iodide tablets to prevent the uptake of iodine-131.) Electrically charged alpha and beta particles in the body are stopped over a very small path length in tissue, and so they deposit all of their energy in that small amount of tissue, which means that a single alpha or beta particle can potentially saturate the P53 DNA repair mechanism in a single cell nucleus and cause a cancer risk. Hence, alpha and beta particles, because of their electrical charge, are HIGH-LET radiations, which probably have no threshold for effects and are dangerous at all doses. On the positive side, this high-LET nature of alpha and beta particles means that they are not very penetrating, so they cause relatively little risk as long as you don't ingest or inhale contamination. Protective clothing and respirators can totally negate the risks from alpha and beta radiations during decontamination work after fallout. A report on beta skin burns from fallout on the Marshallese in 1954 is here, and calculations of the length of time over which fallout deposited on skin can cause beta burns are reported here in the post about Dr Carl F. Miller's excellent fallout research. For proof of U.S. Department of Defense fallout exaggerations and the continuing cover-up of the limited extent of nuclear test fallout during the Cold War, see this post. For underwater burst contamination see this post. For data on the efficiency of decontamination, see this post, as well of course as the post about Dr Miller's work.
http://motls.blogspot.com/2006/04/twenty-years-after-chernobyl.html
Saturday, April 29, 2006
Twenty years after Chernobyl
On Wednesday morning, it's been 20 years since the Chernobyl disaster... The communist regimes could not pretend that nothing had happened (although in the era before Gorbachev, they could have tried to do so), but they attempted to downplay the impact of the meltdown. At least this is what we used to say for twenty years. You may want to look at what BBC news coverage of the Chernobyl tragedy looked like 20 years ago.
Ukraine remembered the event (see the pictures) and Yushchenko wants to attract tourists to Chernobyl. You may see a photo gallery here. Despite the legacy, Ukraine has plans to expand nuclear energy.
Today I think that the communist authorities did more or less exactly what they should have done - for example try to avoid irrational panic. It seems that only 56 people were killed directly and 4,000 people indirectly. See here. On the other hand, about 300,000 people were evacuated which was a reasonable decision, too. And animals are perhaps the best witnesses for my statements: the exclusion zone - now an official national park - has become a haven for wildlife - as National Geographic also explains:
Reappeared: Lynx, eagle owl, great white egret, nesting swans, and possibly a bear
Introduced: European bison, Przewalski's horse
Booming mammals: Badger, beaver, boar, deer, elk, fox, hare, otter, raccoon dog, wolf
Booming birds: Aquatic warbler, azure tit, black grouse, black stork, crane, white-tailed eagle (the birds especially like the interior of the sarcophagus)
... Greenpeace in particular are very wrong whenever they say that the impact of technology on wildlife must always have a negative sign. ...
In other words, the impact of that event has been exaggerated for many years. Moreover, it is much less likely that a similar tragedy would occur today. Nuclear power has so many advantages that I would argue that even if the probability of a Chernobyl-like disaster in the next 20 years were around 10%, it would still be worth using nuclear energy.
Some children were born with some defects - but even such defects don't imply the end of everything. On the contrary. A girl from the Chernobyl area, born around 1989, was abandoned by her Soviet parents, was adopted by Americans, and she became the world champion in swimming. Her name? Hint: the Soviet president was Gorbachev and this story has something to do with the atomic nucleus. Yes, her name is Mikhaila Rutherford. ;-)
http://motls.blogspot.com/2007/04/chernobyl-21-years-later.html
Thursday, April 26, 2007
Chernobyl: 21 years later
Exactly 21 years ago, the Ukrainian power plant exploded. ...
A new study has found that the long-term health impact of the Chernobyl disaster was negligible. All kinds of mortality rates were at most 1% higher than normal.
ScienceDaily, full study.
Everyday life is riskier.
Yushchenko calls for a revival of the zone. His proposals include a nature preserve - which is more or less a fact now - as well as production of bio-fuels and a science center. The Korean boss of the U.N. calls for aid to the region.
copy of a fast comment there:
Environmental thinking is in perfect harmony with media hype.
Chernobyl wasn't the first case. Hiroshima was. A Manhattan District PhD physicist (Dr Jacobson, from memory?), who didn't actually work at Los Alamos and, because of the compartmentalization of secrets, didn't know anything about nuclear weapons effects, issued a press release about fallout the day after Hiroshima was on the front pages.
He wrote that the radioactivity would turn Hiroshima into a radioactive waste land for 75 years. Not 70 or 80 years, but 75 years, which is a bit weird bearing in mind the fact that radioactivity decays exponentially.
Actually there was no significant fallout or neutron-induced activity beyond a few hours at Hiroshima, due to the burst altitude. Even in a surface burst, the radioactivity drops to within the natural background at ground zero after a few years, and there are people living at Bikini Atoll today, where a 15 megaton surface burst was tested in 1954.
The effects of radiation are serious at high doses, but there is plenty of evidence that they are exaggerated for low doses of gamma and neutron radiation...
copy of another fast comment there:
The full report http://www.biomedcentral.com/1471-2458/7/49/ states: "The ICRP risk estimate assumes a dose and dose-rate effectiveness factor (DDREF) of 2.0 (reducing predicted risk by a factor of 2.0) for extrapolation of the data from the bomb survivors (who were exposed at extremely high dose rate) to lower dose and/or dose-rate exposures."
This is a vital issue, because cancer occurs when the damage to DNA occurs so quickly that protein P53 can't repair it as single strand breaks. As soon as you get double breaks of DNA, there is the risk of the resulting bits of loose DNA being "repaired" the wrong way around in the strand by protein P53, and this can cause radiation-induced cancer.
So at low dose rates of weakly ionizing (low linear energy transfer, or low-LET) radiation like gamma rays, radiation causes single breaks in DNA and protein P53 has time to repair them before further breaks occur.
At high dose rates, the breaks occur so quickly that the P53 repair mechanism is overloaded with work, and repairs go wrong because DNA gets fairly fragmented (not just two loose ends to be reattached, but many bits) and P53 then accidentally puts some of the ends "back" in the wrong places, causing the risk of cancer.
The factor of 2 risk increase for high dose rates as opposed to low dose rates is nonsense; it's a far bigger factor, as Dr Loutit explained in his ignored book "Irradiation of Mice and Men" in 1962. On page 61 he states:
"... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls."
So, for a fixed dose of 1,000 R spread over a month (which is far less lethal in short-term effects than the same dose spread over a few seconds, as occurs with initial radiation in a nuclear explosion, or over a few days, when most of the fallout dose is delivered), the leukemia rate can vary from 5-40% as the dose rate varies from 1.3-81 r/hour.
The cancer rate doesn't just double at high dose rates. It increases by a factor of 8 (i.e., 5% to 40%) as the dose rate rises from 1.3 to 81 r/hour.
In fact, for comparing cancer risks at low level (near background) and at Hiroshima, the dose rates cover a wider range than this experiment, so the correction factor for the effect of dose rate on risk will be bigger than 8.
Background radiation studies are based on average exposure rates of just 0.02 mr/hour, i.e., 0.00002 r/hour, while at Hiroshima and similar instrumented nuclear tests, the initial nuclear radiation lasted a total of 20 seconds until it was terminated by the buoyant rise of the cloud.
Hence for a dose of 1 r spread over 20 seconds at Hiroshima, the dose rate at which it was received was 180 r/hour. (Although according to Glasstone and Dolan's nuclear test data, half of the initial radiation dose would generally be received in about half a second, so the effective dose rate would be even higher than 180 r/hour.)
Hence, the range of dose rates from background to Hiroshima is 0.00002 r/hour to 180 r/hour or more, a factor of 9,000,000 difference or more.
Since in the animal experiments the leukemia rate increased by a factor of 8 due to a 62-fold increase in dose rate (i.e., as the dose rate increased from 1.3 to 81 r/hour), cancer risk is approximately proportional to the square root of the dose rate, so a 9,000,000-fold increase in dose rate should increase the cancer risk by 3,000 times.
Hence, the cancer risks at Hiroshima and Nagasaki by this model exaggerate low level radiation effects by over 3,000 times, not merely by a factor of 2.
Update 28 April 2007: the last comment above contains an error and the exaggeration of radiation effects at low dose rates is even greater as a result.
The calculation should have subtracted the 2% leukemia incidence in the non-irradiated control group from both the 40% and 5% figures. Hence, the radiation-induced leukemia incidence for 1,000 R received at rates of 1.3 to 81 r/hour ranged from 3% to 38%, not 5% to 40%. This means that a 62.3-fold increase in dose rate increased the leukemia rate due to the radiation by a factor of 38/3 = 12.7. Hence, the radiation-induced (not the total) leukemia incidence is proportional to (dose rate)^{0.615}, instead of (dose rate)^{1/2}.
Using this corrected result for a 9 million fold difference between the dose rates of background (low dose rate) and Hiroshima (high dose rate) radiation, the radiation-induced leukemia incidence for a similar total radiation dose will increase by a factor of 18,900, not 3,000.
Hence, radiation-induced leukemia rates currently being extrapolated from Hiroshima and Nagasaki data down to low dose rate radiation will exaggerate by a factor of 18,900 or so, rather than the factor of 2 currently assumed by orthodoxy.
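For anyone who wants to reproduce the corrected extrapolation, here is a minimal sketch of the arithmetic only (the two-point power-law fit and the 9,000,000-fold dose-rate range are exactly as set out in the comments above):

```python
import math

# Mole's mouse data (1,000 r total), control incidence of 2% subtracted:
low = (1.3, 0.05 - 0.02)    # (dose rate r/hour, radiation-induced incidence)
high = (81.0, 0.40 - 0.02)

# Two-point power-law fit: incidence ~ (dose rate)**n
n = math.log(high[1] / low[1]) / math.log(high[0] / low[0])
print(f"exponent n = {n:.3f}")          # ~0.615

# Extrapolating over the background-to-Hiroshima dose-rate range:
dose_rate_ratio = 180.0 / 0.00002       # 9,000,000
factor = dose_rate_ratio ** n
print(f"risk ratio over a {dose_rate_ratio:,.0f}-fold dose-rate range: "
      f"{factor:,.0f}")
# ~18,700-18,900 depending on rounding of the exponent: the claimed
# exaggeration factor for Hiroshima-based risk estimates at low dose rates.
```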
"... gamma ray energy stored in ceramics like roof tiles can be released as light by heating them. It's possible to do this in a calibrated way (e.g., you can measure the light emitted on heating and then irradiate the same sample with a known dose of radiation, and repeat the process, so that the comparison calibrates the original radiation dose to the material), and after allowing for background radiation you can find out the gamma ray doses from the bomb. ..."
I should have used the technical term "thermoluminescence" for this dosimetry. (Thermoluminescence is a standard form of dosimetry now, together with film badges, quartz fibre dosimeters, and battery-dependent electronic counters.)
It is even used in dating old pieces of china and pottery:
"Thermoluminescence (TL) dating is the determination by means of measuring the accumulated radiation dose of the time elapsed since material containing crystalline minerals was either heated (lava, ceramics) or exposed to sunlight (sediments). As the material is heated during measurements, a weak light signal, the thermoluminescence, proportional to the radiation dose is produced.
"Natural crystalline materials contain imperfections: impurity ions, stress dislocations, and other phenomena that disturb the regularity of the electric field that holds the atoms in the crystalline lattice together. This leads to local humps and dips in its electric potential. Where there is a dip (a so called 'electron trap'), a free electron may be attracted and trapped. The flux of ionizing radiation—both from cosmic radiation and from natural radioactivity—excites electrons from atoms in the crystal lattice into the conduction band where they can move freely. Most excited electrons will soon recombine with lattice ions, but some will be trapped, storing part of the energy of the radiation in the form of trapped electric charge. Depending on the depth of the traps (the energy required to free an electron from them) the storage time of trapped electrons will vary- some traps are sufficiently deep to store charge for hundreds of thousands of years.
"In thermoluminescence dating, these long-term traps are used to determine the age of materials: When irradiated crystalline material is again heated or exposed to strong light, the trapped electrons are given sufficient energy to escape. In the process of recombining with a lattice ion, they lose energy and emit photons (light quanta), detectable in the laboratory. The amount of light produced is proportional to the number of trapped electrons that have been freed which is in turn proportional to the radiation dose accumulated. In order to relate the signal (the thermoluminescence—light produced when the material is heated) to the radiation dose that caused it, it is necessary to calibrate the material with known doses of radiation since the density of traps is highly variable.
"Thermoluminescence dating presupposes a "zeroing" event in the history of the material, either heating (in the case of pottery or lava) or exposure to sunlight (in the case of sediments), that removes the pre-existing trapped electrons. Therefore, at that point the thermoluminescence signal is zero. As time goes on, the ionizing radiation field around the material causes the trapped electrons to accumulate. In the laboratory, the accumulated radiation dose can be measured, but this by itself is insufficient to determine the time since the zeroing event. The radiation dose rate - the dose accumulated per year-must be determined first. This is commonly done by measurement of the alpha radioactivity (the uranium and thorium content) and the potassium content (K-40 is a beta and gamma emitter) of the sample material. Often the gamma radiation field at the position of the sample material is measured, or it may be calculated from the alpha radioactivity and potassium content of the sample environment, and the cosmic ray dose is added in. Once all components of the radiation field are determined, the accumulated dose from the thermoluminescence measurements is divided by the dose accumulating each year, to obtain the years since the zeroing event.
"Thermoluminescence dating is used for material where radiocarbon dating is not available, like sediments. Its use is now common in the authentication of old ceramic wares, for which it gives the approximate date of the last firing.
"Optical dating is a related measurement method which replaces heating with exposure to intense light. The sample material is illuminated with a very bright source of infrared light (for feldspars) or green or blue light (for quartz). Ultraviolet light emitted by the sample is detected for measurement."
- Wikipedia
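The dating arithmetic quoted above reduces to a calibration and one division. A sketch with invented numbers (none of these are real measurements):

```python
# Thermoluminescence dating: age = accumulated dose / annual dose rate.
# All numbers below are invented for illustration.

# Step 1: paleodose from the light-signal calibration (as described above).
light_natural = 3.0e4       # TL signal from the as-found sample
light_calibrated = 1.0e4    # TL signal after a known laboratory dose
dose_lab = 2.0              # known laboratory dose, gray
paleodose = dose_lab * light_natural / light_calibrated   # 6.0 Gy

# Step 2: annual dose rate from the sample's U/Th/K-40 content plus cosmic rays.
annual_dose = 3.0e-3        # gray per year (assumed measurement)

age_years = paleodose / annual_dose
print(f"Age since last firing: {age_years:.0f} years")    # 2,000 years
```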
Just to further clarify an earlier comment above:
(1) The allegation that nothing would grow in Hiroshima and Nagasaki for 75 years (total nonsense, because many plants survived the blast, the thermal radiation and the nuclear radiation with just leaf scorching) was actually a false claim due to Dr Harold Jacobsen, who worked on the Manhattan Project (but not on nuclear weapons effects); it was published in the Washington Post on 8 August 1945, two days after the nuclear attack on Hiroshima and one day before the nuclear attack on Nagasaki. It was denounced and repudiated by Los Alamos physicists who actually knew the effects of nuclear weapons from the Trinity nuclear test in New Mexico on 16 July 1945, as extrapolated to the higher burst altitudes of the attacks on Japanese cities. (Dr Jacobsen was totally ignorant of height-of-burst effects on local fallout dose rates, just as all the media still are! There is a desperate need to produce a graph of maximum 1-hour dose rate at ground zero versus height of burst, with all nuclear test data, with different curves plotted for different bomb yields, to clarify this situation once and for all using solid observed facts. By the time the rain occurred, due to the moisture carried up to cold air by the firestorm beginning 20 minutes after the explosion, the mushroom cloud had been blown miles downwind, so there was only a negligible trace of diffused fission product debris washed out of the air by the firestorm precipitation in the soot-filled "black rain". This was a trivial source of radiation compared to the neutron-induced activity in soil and buildings near ground zero, and in turn even that exposure was trivial compared to the initial (flash) exposures from the fireball before it rose to high altitude about 20 seconds after detonation. Hence, the far-and-away predominant doses were from initial nuclear radiation due to gamma rays and neutrons from the rising fireballs, not from residual contamination on the ground. Of course, the media can't ever grasp the distinction between initial radiation lasting 20 seconds and residual contamination which continues to act at an ever-decreasing rate for longer periods.)
(2) Beneficial effects (i.e. radiation hormesis) of sub-lethal radiation on plants were actually reported as early as 1898, two years after the discovery of radioactivity by Henri Becquerel:
G. F. Atkinson, "Report upon some preliminary experiments with Roentgen rays in plants", Science, v7, 1898, p7.
Actually, the article just mentioned in the comment above used X-rays (Roentgen rays), which are similar to gamma rays from radioactivity but are produced by high voltage tubes utilising electron collisions upon a metal target. X-rays were discovered on 8 November 1895 by Wilhelm Conrad Röntgen (Roentgen), whereas radioactivity from uranium was discovered when Becquerel in January 1896 exposed light-proofed photographic plates (wrapped in black paper) to a fluorescent salt of uranium, potassium uranyl sulfate, in the hope of determining whether the fluorescent radiation emitted by the salt in bright sunlight would be able to penetrate the black paper as X-rays can. Roentgen, whose work Becquerel was hoping to extend, had discovered that penetrating, invisible rays capable of causing fluorescent substances to glow were transmitted to some distance from a high-voltage cathode ray (electron collision) tube; Becquerel was thus working on the back of Roentgen's discovery. The genius of Becquerel was that when he accidentally found the photographic plates had been fogged without the fluorescent substance having been exposed to sunlight, he jumped to the hypothesis that the uranium salt was always emitting invisible radiation that can cause film to fog, and then he tested this hypothesis carefully by further experiments to justify his conclusion that uranium is always emitting invisible X-ray-like radiation, regardless of whether it is exposed to sunlight or not. Several other chemists and physicists afterwards published claims that they had noticed photographic film stored near uranium salts and minerals becoming fogged, but they had all dismissed it at the time as a freak accident of some sort, without either formulating the theory that uranium is radioactive, or testing that theory to ensure that it was correct.
Similarly, several physicists working before Roentgen afterwards claimed to have observed things glow at some distance from high-voltage Crookes tubes, but none of them had devised a correct theory of what caused it, let alone tested the theory to confirm it properly and published the result.
The reluctance of many mainstream scientists to deliberately ignore "anomalies" instead of disappearing over the past century, as persisted and there are many discoveries awaiting in physics, just requiring a change of paradigm.
One "problem" with the theory of X-rays and radioactivity was that it showed that things hitherto believed solid were actually not solid: hence the X-rays Roentgen took of people's hands were required to dramatically overcome prejudice which would have lasted perhaps forever if he hadn't illustrated his discovery so vividly and allowe it to be used straight away for immensely important medical purposes (locating fractures in broken bones, for example). If Roentgen had just made claims in words and equations, he wouldn't have been taken so seriously. It was the same with Einstein's relativity, which was ignored by the media until Eddington in 1919 photographed the deflection of starlight by gravity during an eclipse. That was when "relativity" (actually general relativity) was first hyped as front-page news in the media, some 14 years after Einstein's first publications.
The comment above should have been proof-read more carefully. E.g.
ReplyDelete"The reluctance of many mainstream scientists to deliberately ignore "anomalies" instead of disappearing over the past century, as persisted and there are many discoveries awaiting in physics, just requiring a change of paradigm."
Should of course read:
"The reluctance of many mainstream scientists to deliberately investigate "anomalies" instead of disappearing over the past century, has persisted and there are many discoveries awaiting in physics, just requiring a change of paradigm to enable them to be correctly interpreted as implying new science, and experiments to confirm that new discovery (I'm of course thinking of fundamental physics, e.g. quantum gravity in particular)."
Some historical scientific material of great relevance to this post is to be found in the book by physicists Dr. Edward Teller and Dr. Albert L. Latter, Our Nuclear Future ... Facts, Dangers and Opportunities, Criterion Books, New York, 1958:
A very prescient passage from page 119:
"It is possible that radiation of less than a certain intensity does not cause bone cancer or leukemia at all. In the past small doses of radiation have often been regarded as beneficial. This is not supported by any scientific evidence [as of 1958]. Today many well-informed people believe [without any evidence, i.e. on the basis of pure ignorance] that radiation is harmful even in the smallest amounts. This statement has been repeated [by mainstream "professional" cranks, who haven't grasped the subtle difference between fact-based science and authority-based religion/belief, and that no amount of "professional" dogma can overrule the need for fact based evidence in science, unlike subjective fields like politics/education/religion, where the student must give answers in exams which confirm to groupthink ideology, not to the facts if the facts are different to the mainstream consensus behind the examinations board; students who pass such exams by giving the "right" answers to subjective controversies are often instilled with a confusion between what is fact and what is speculative dogma, and as a result they defend crackpot mainstream beliefs as if those beliefs were science, not lies: the only way they have to defend such lies is by personal abuse of those who factual evidence and by lying, since they have no factual evidence, no scientific basis for arguing their case, just vacuous assertions based on ignorance and a refusal to read the facts and act upon them] in an authoritive manner. Actually there can be little doubt that radiation hurts the individual cell. But a living being is a most complex thing. Damage to a small fraction of the cells might be beneficial to the whole organism. Some experiments on mice seem to show that exposure to a little radiation increases the life expectancy of the animals. Scientific truth is firm - when it is complete. The evidence of what a little radiation will do in a complex animal like a human being is in an early and uncertain state."
On pages 121-122, the book points out that Denver in the United States is at an altitude of 5,000 feet above sea level, and so receives 43% more cosmic radiation (because there is less air shielding between it and outer space) than a city at sea level.
Yet the bone cancer and leukemia rates in Denver were significantly lower than those in the sea-level cities of San Francisco and New Orleans in 1947 (before any nuclear test fallout arrived).
For example, there were 10.3 leukemia cases diagnosed per 100,000 of population in San Francisco in 1947, and only 6.4 in Denver.
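As an aside, the 43% figure is consistent with a roughly exponential growth of cosmic ray dose rate with altitude. Here is a minimal sketch back-calculating the implied e-folding height from the quoted numbers; the exponential form and everything derived from it are my own illustrative assumptions, not a model given by Teller and Latter:

    # Back-calculate the e-folding height implied by "43% more cosmic
    # radiation at 5,000 feet", assuming dose rate ~ exp(altitude / L).
    # The exponential form is an illustrative assumption.
    import math

    altitude_m = 5000 * 0.3048        # Denver's altitude: 5,000 ft in metres
    ratio = 1.43                      # 43% more than at sea level

    L = altitude_m / math.log(ratio)  # implied e-folding height
    print(round(L))                   # ~4,261 m

    # What the same toy model would give at 10,000 ft:
    print(round(math.exp(2 * altitude_m / L), 2))  # ~2.04, i.e. +104%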
On page 122, Drs. Teller and Latter analyse the results as follows:
"One possible explanation for the lower incidence of bone cancer and leukemia in Denver is that disruptive processes like radiation are not necessarily harmful in small enough doses. Cell deterioration and regrowth go on all the time in living creatures. A slight acceleration of these processes could conceivably be beneficial to the organism."
Actually, the mechanism is more subtle: protein P53, discovered only in 1979, is encoded by gene TP53, which occurs on human chromosome 17. P53 also occurs in other mammals, including mice, rats and dogs. P53 continually repairs breaks in DNA, which easily breaks at body temperature due to free radicals produced naturally in various ways and also as a result of ionisation caused by radiation hitting water and other molecules in the body. Cancer occurs when several breaks in DNA happen to occur by chance at nearly the same time, giving several loose ends which P53 repairs incorrectly, causing a mutation. This cannot occur when only one break is present, because only two loose ends are produced, and P53 will reattach them correctly. If low-LET ionising radiation levels are increased to a certain extent, causing more single strand breaks, P53 works faster and is able to deal with breaks as they occur, so that multiple broken strand ends do not arise. This prevents DNA strands from being repaired incorrectly, and so prevents cancer - a result of mutation caused by faults in DNA - from arising. Too much radiation, of course, overloads the P53 repair mechanism: it then cannot repair breaks as they occur, so multiple breaks accumulate and loose ends of DNA are wrongly connected by P53, causing an increased cancer risk. Obviously there is a statistical element to the risk: quite a lot of wrongly reassembled broken DNA needs to occur before the result causes cancer, and many wrongly assembled DNA strands simply result in the death of the cell when it tries to divide, instead of allowing endless divisions into defective cells, i.e. cancer cells. Besides P53, there are other proteins involved in DNA repair after damage. Over 50% of all cancers, however, result from mutated forms of P53 which are unable to repair damaged DNA.
So it is clear that most cancers occur as a result of a rapid double break to the TP53 gene on human chromosome 17. The cell then divides normally, but the resulting cell produces its P53 from a mutated TP53 gene and thus produces a flawed P53 protein, which is unable to repair properly any further damage to the DNA of the cell. As a result, such cells are subject to cumulative damage and mutations from free radicals, and are relatively likely to become cancer cells. The stimulation of P53 by low-LET (weakly ionising) radiation can boost its efficiency, preventing multiple strand breaks from having time to accumulate, because breaks get repaired before a backlog can build up. This is a homeostasis effect: an increase in the rate of weak ionisation from low-LET radiation naturally causes the body to slightly over-respond, increasing the rate of P53 repairs non-linearly (similarly, the body over-responds for a long time after an infection by boosting the white cell count to levels higher than those which existed before the infection). This over-compensation boosts the body's ability to cope with other causes of DNA damage, such as natural causes, so the net effect is a reduction in natural cancer rates that far outweighs the trivial radiation damage at low dose rates. Hence the overall cancer risk from low-LET radiation at low dose rates is less than it would be in the absence of the radiation. (A toy simulation of this dose-rate dependence is sketched below.)
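To make the dose-rate argument concrete, here is a minimal toy simulation. It is entirely my own illustrative sketch, not a model from Mole, Loutit or anyone else: the break rate, repair rate, time step and misrepair criterion are all invented for illustration. It delivers the same total number of DNA breaks at three different rates and counts how much time the cell spends with two or more unrepaired breaks coexisting, which is the precondition for misrepair in the mechanism described above:

    # Toy model (illustrative parameters only): a fixed total number of DNA
    # breaks is delivered at different rates against a fixed repair rate.
    # Misrepair - the cancer precursor in the mechanism described above -
    # is only possible while two or more unrepaired breaks coexist.
    import random

    def time_at_risk(total_breaks=1000, break_rate=1.0, repair_rate=5.0, seed=42):
        """Count time steps during which >= 2 unrepaired breaks coexist.

        break_rate  - mean breaks per unit time (the 'dose rate'), assumed
        repair_rate - mean repairs per unit time (P53 capacity), assumed
        """
        rng = random.Random(seed)
        dt = 0.01                    # time step, arbitrary units
        unrepaired = delivered = at_risk = 0
        while delivered < total_breaks or unrepaired > 0:
            if delivered < total_breaks and rng.random() < break_rate * dt:
                unrepaired += 1
                delivered += 1
            if unrepaired > 0 and rng.random() < repair_rate * dt:
                unrepaired -= 1
            if unrepaired >= 2:
                at_risk += 1
        return at_risk

    # Same total 'dose' (1,000 breaks), three different dose rates:
    for rate in (1.0, 10.0, 100.0):
        print(rate, time_at_risk(break_rate=rate))

At the same total "dose", the time spent at risk of misrepair rises steeply with the delivery rate, which is qualitatively the pattern in the Mole mouse experiment quoted elsewhere in this post.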
Teller and Latter then point out that if there is an effect of the enhanced cosmic radiation in Denver on the leukemia and bone cancer rate as compared to lower altitude cities, "the effect is too small to be noticed compared to other effects."
In other words, this factual data as of 1947 set a limit on how bad the radiation-induced leukemia rate could be: if the effect existed at all, it was dwarfed by "noise" in the data. Whenever some signal gets drowned by "noise" in data, the real scientist starts to investigate the "noise", which is then more important than the trivial signal. (This was exactly how the big bang was confirmed, when the microwave background "noise" in the sky was investigated in the mid-1960s and found to be severely red-shifted fireball radiation from the big bang.)
On page 124, it is pointed out that mortality statistics - which show no increase in cancer risk from living in places of high cosmic radiation exposure like Denver, and which therefore show no measurable hazard from low-level radiation - do show correlations with other things. For example, being 10% overweight reduces life expectancy by 1.5 years, while smoking one pack of cigarettes a day reduces life expectancy by 9 years (equivalent to an average reduction of about 15 minutes of life per cigarette smoked).
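The per-cigarette figure is easy to sanity-check; the 40-year smoking span assumed below is my own illustrative assumption, not a figure from the book:

    # Sanity check of the ~15 minutes of life lost per cigarette figure.
    # The 40-year smoking span is an illustrative assumption.
    years_lost = 9
    smoking_years = 40
    cigarettes = smoking_years * 365.25 * 20       # one 20-cigarette pack/day
    minutes_lost = years_lost * 365.25 * 24 * 60   # 9 years, in minutes
    print(round(minutes_lost / cigarettes))        # ~16 minutes per cigarette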
These are things which are real, statistically significant risks. Low-LET radiation at low dose rates isn't that kind of problem (to say the very least of it).
For more about Lewis's non-threshold propaganda campaign "and the debate about nuclear weapons testing", see:
http://etd.caltech.edu/etd/available/etd-03292004-111416/unrestricted/LewisandFallout.pdf
EDWARD LEWIS AND RADIOACTIVE FALLOUT: THE IMPACT OF CALTECH BIOLOGISTS ON THE DEBATE OVER NUCLEAR WEAPONS TESTING IN THE 1950s AND 60s, thesis by Jennifer Caron, in partial fulfillment of the requirements for the degree of Bachelor of Science, Science, Ethics, and Society option, California Institute of Technology, Pasadena, California, 2003 (presented January 8, 2003).
"ACKNOWLEDGEMENTS
Professor Ed Lewis, I am deeply grateful to you for sharing your story and spending
hours talking to me. ...
"ABSTRACT
The work of Caltech biologists, particularly, Edward Lewis, on leukemia and ionizing radiation transformed the public debate over nuclear weapons testing. The United States began testing hydrogen bombs in 1952, sending radioactive fallout around the globe. Earlier more localized fallout was generated starting in 1945 from tests of atomic weapons at Nevada test sites. The Atomic Energy Commission claimed the tests would not harm human health. Geneticists knew from animal and plant experiments that radiation can cause both illness and gene mutations. They spoke out to warn the policymakers and the public. Edward Lewis used data from four independent populations
exposed to radiation to demonstrate that the incidence of leukemia was linearly related to
the accumulated dose of radiation. He argued that this implied that leukemia resulted from a somatic gene mutation. Since there was no evidence for the existence of a
threshold for the induction of gene mutations down to doses as low as 25 r, there was unlikely to be a threshold for the induction of leukemia. This was the first serious challenge to the concept that there would be a threshold for the induction of cancer by
ionizing radiation. Outspoken scientists, including Linus Pauling, used Lewis’s risk
estimate to inform the public about the danger of nuclear fallout by estimating the
number of leukemia deaths that would be caused by the test detonations. In May of 1957
Lewis’s analysis of the radiation-induced human leukemia data was published as a lead article in Science magazine. In June he presented it before the Joint Committee on Atomic Energy of the US Congress." (Emphasis added to key points.)
Page 13:
"The most controversial aspect of his analysis was the linear dose-response curve. This relationship made sense to geneticists who had found a linear relationship between
radiation and mutations in Drosophila down to 25 rad (Stern and Spencer). Additionally, it fit with the hypothesis of Muller that cancer could result from somatic mutations. This was not the accepted idea in other scientific and medical communities. Rather, as the official voice, the AEC medical doctors and scientists promoted the assumption that there
would be a threshold below which radiation would do no harm, just as there is frequently such a threshold in chemical toxicology because the body can process small quantities of toxins like alcohol. The AEC vocally assumed and defended the threshold hypothesis;
furthermore, they seem to have assumed that the amount of radiation received by Americans from fallout would be less than the threshold. Lewis found no evidence for such a threshold, and the AEC scientists were unable to offer any."
(Emphasis added to Lewis's failure to discover the facts about low-level radiation, and to the pseudoscientific misinterpretation of that failure as if it were itself a fact, rather than an expression of science-abusing ignorance and scientific failure. If a scientist fails to find evidence which in fact exists, that is hardly an accomplishment to be hyped or applauded. Lewis failed to find the evidence of a threshold because the dosimetry then available from Hiroshima and Nagasaki was too crude and inaccurate to produce accurate, detailed results. If Lewis had made efforts to obtain the facts, instead of presenting ignorant error as fact and crusading to promote it in journals like Science and in testimony to U.S. Congressional Hearings, then he would have been doing science, not pseudoscience.)
Copy of a comment to http://backreaction.blogspot.com/2008/05/nuclear-power-return-of.html:
"Nuclear's OK, but cars can't run on nuclear, so how can that really be a solution?" - Andrew
Nuclear power doesn't burn fossil fuels, which leaves more of those fuels for powering the internal combustion engine rather than generating electricity.
Cars can eventually (when fossil fuel costs make the price of gasoline too much for most people to afford) be fitted with electric motors run on efficient, low-weight rechargeable lithium-ion batteries, which can be recharged from mains electricity supplied by nuclear reactors.
Obviously, electric trains can run on nuclear generated electricity without any interim battery storage.
The thing about nuclear power is that it is made excessively expensive by over-cautious safety precautions, and it is also a victim of lying propaganda from an environmental lobby which doesn't understand nuclear power in the proper context of natural background radiation levels and natural radon gas hazards, or even the naturally proven storage of intense radioactive waste over billions of years!
Fission products have been proved to be safely confined, with only a few feet of migration over a time span of 1.7 billion years, as a result of the intense natural nuclear reactors in concentrated uranium ore seams at Oklo, in Gabon:
"Once the natural reactors burned themselves out, the highly radioactive waste they generated was held in place deep under Oklo by the granite, sandstone, and clays surrounding the reactors’ areas. Plutonium has moved less than 10 feet from where it was formed almost two billion years ago."
- http://www.ocrwm.doe.gov/factsheets/doeymp0010.shtml
The data from Hiroshima and Nagasaki are strongest (i.e. based on the most evidence) at low doses, where they show a suppression of cancer risk and a threshold for low-LET (linear energy transfer) radiation such as gamma rays. See my post here for a discussion of the extensive evidence.
High-LET radiation like alpha particles deposits a lot of energy per unit length of its path through tissue, and this can overwhelm the natural protein P53 repair mechanism which sticks broken DNA fragments back together. In fact, the main cancer risk arises from multiple DNA strand breaks, where bits of DNA end up being stuck back together in the wrong sequence, either killing the cell when it later tries to divide or, more seriously, causing cancer when the cell divides in a damaged form which is out of control and produces a tumour.
But such high-LET radiation as alpha particles is only a hazard internally, such as when radioactive material is inhaled or ingested. The alpha-emitting plutonium in a nuclear reactor is inside sealed aluminium-clad fuel pellets, and at no time is such waste a serious inhalation or ingestion hazard.
Gamma radiation, from evidence at Hiroshima and Nagasaki as well as the Taiwan incident - where 180 buildings occupied by 10,000 people for up to 20 years were constructed of steel which accidentally included intensely radioactive cobalt-60 from discarded radiotherapy sources - is low-LET radiation which does exhibit a threshold before any excess cancer risk (predominantly leukemia) shows up. There is evidence that the exact threshold for low-LET radiation such as gamma rays depends on the dose rate at which the radiation is received, and not merely on the total dose. If the dose rate produces DNA damage at a rate lower than the maximum rate at which P53 can repair DNA strand breaks, no excess cancer (above the natural cancer rate) occurs. The cancer risk depends on the proportion of the radiation dose which is above this threshold, and is proportional to the dose received at a rate exceeding the repairable DNA damage rate. (A sketch of this threshold rule follows below.)
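Here is a minimal sketch of that threshold rule - my own illustrative formulation of the statement above, not a published dosimetry algorithm; the repair-capacity figure and the exposure histories are invented for illustration:

    # Illustrative threshold model: only dose delivered at a rate above
    # the repair capacity contributes to cancer risk. The repair capacity
    # value and the exposure histories below are invented assumptions.

    def effective_dose(dose_rates, dt=1.0, repair_capacity=0.01):
        """Sum the dose received above the repairable rate.

        dose_rates      - dose rate (e.g. Sv/hour) in each interval
        dt              - length of each interval (hours)
        repair_capacity - maximum repairable dose rate (Sv/hour), assumed
        """
        return sum(max(0.0, r - repair_capacity) * dt for r in dose_rates)

    # Two exposure histories with the SAME total dose of 1 Sv:
    chronic = [0.001] * 1000   # 1 mSv/hour for 1,000 hours
    acute   = [1.0] * 1        # 1 Sv in a single hour

    print(effective_dose(chronic))  # 0.0  -> fully repairable, no excess risk
    print(effective_dose(acute))    # 0.99 -> almost the whole dose counts

On this model, a film badge that integrates to the same 1 Sv total in both cases tells you almost nothing about the risk - which is the point made about radiation monitors below.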
W.L. Chen, Y.C. Luan, M.C. Shieh, S.T. Chen, H.T. Kung, K.L. Soong, Y.C. Yeh, T.S. Chou, S.H. Mong, J.T. Wu, C.P. Sun, W.P. Deng, M.F. Wu, and M.L. Shen, Is Chronic Radiation an Effective Prophylaxis Against Cancer?, published in the Journal of American Physicians and Surgeons, Vol. 9, No. 1, Spring 2004, page 6, available in PDF format here:
'An extraordinary incident occurred 20 years ago in Taiwan. Recycled steel, accidentally contaminated with cobalt-60 ([low dose rate, low-LET gamma radiation emitter] half-life: 5.3 y), was formed into construction steel for more than 180 buildings, which 10,000 persons occupied for 9 to 20 years. They unknowingly received radiation doses that averaged 0.4 Sv, a collective dose of 4,000 person-Sv. Based on the observed seven cancer deaths, the cancer mortality rate for this population was assessed to be 3.5 per 100,000 person-years. Three children were born with congenital heart malformations, indicating a prevalence rate of 1.5 cases per 1,000 children under age 19.
'The average spontaneous cancer death rate in the general population of Taiwan over these 20 years is 116 persons per 100,000 person-years. Based upon partial official statistics and hospital experience, the prevalence rate of congenital malformation is 23 cases per 1,000 children. Assuming the age and income distributions of these persons are the same as for the general population, it appears that significant beneficial health effects may be associated with this chronic radiation exposure. ...
'The data on reduced cancer mortality and congenital malformations are compatible with the phenomenon of radiation hormesis, an adaptive response of biological organisms to low levels of radiation stress or damage; a modest overcompensation to a disruption, resulting in improved fitness. Recent assessments of more than a century of data have led to the formulation of a well founded scientific model of this phenomenon.
'The experience of these 10,000 persons suggests that long term exposure to [gamma]radiation, at a dose rate of the order of 50 mSv (5 rem) per year, greatly reduces cancer mortality, which is a major cause of death in North America.'
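The quoted mortality rate follows directly from the paper's own figures; the check below assumes, as the quote implies, roughly 10,000 occupants over about 20 years:

    # Check of the quoted Taiwan cancer mortality rate from the paper's
    # own figures (occupancy period taken as ~20 years).
    occupants = 10_000
    years = 20
    cancer_deaths = 7                # observed
    person_years = occupants * years
    rate_per_100k = cancer_deaths / person_years * 100_000
    print(rate_per_100k)             # 3.5 per 100,000 person-years, vs 116 expected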
The fact that leukemia risk is a sensitive function of dose rate, and not just of total dose, means that most radiation monitors worn by workers in the nuclear industry (which merely record total dose, i.e. integrated dose rate, and don't record the mean rate at which the dose was received) are almost useless for assessing risks.
This has been known and published widely since 1962:
"... Mole [R. H. Mole, Brit. J. Radiol., v32, p497, 1959] gave different groups of mice an integrated total of 1,000 r of X-rays over a period of 4 weeks. But the dose-rate - and therefore the radiation-free time between fractions - was varied from 81 r/hour intermittently to 1.3 r/hour continuously. The incidence of leukemia varied from 40 per cent (within 15 months of the start of irradiation) in the first group to 5 per cent in the last compared with 2 per cent incidence in irradiated controls."
All of this evidence is ignored or censored out of mainstream discussions by bigoted politicians, journalists, editors and environmental quangos. So "Health Physics" (the name under which radiation safety currently goes) isn't really healthy physics any more; it is instead becoming a pseudoscientific exercise in political expediency and the ignoring of evidence.
Fusion power doesn't look very realistic or safe either, because the high-energy neutrons given off in tritium-deuterium fusion will quite quickly make the structural materials of the entire fusion reactor radioactive, since those neutrons have a much greater range than the moderated (thermalized) neutrons in a nuclear fission reactor. So neutron-induced activity is a problem for fusion reactors. You also have to compress the plasma to enormous pressures to achieve fusion, using electrically controlled magnetic fields, which in a commercial fusion reactor producing gigawatts of power would not exactly have the "fail-safe" safety features of a fission reactor. Any slight upset to the carefully aligned and balanced magnetic fields compressing the fusion plasma would potentially turn the fusion reactor into the scene of an H-bomb explosion, complete with radioactive fallout from the neutron-induced activity in the structural materials. This aspect of fusion power isn't hyped very much in the popular media, either. Could it be that the people working in such areas simply don't want their funding to dry up?