
Friday, June 30, 2006

Cancer suppression by radiation, a censored physical mechanism



(Inspired by email correspondence with a survivor of 100 nuclear tests, Jack Reed)

This model is based on L. E. Feinendegen's 2005 review of the mechanism behind the Hiroshima and Nagasaki data at low doses, 'Evidence for beneficial low level radiation effects and radiation hormesis', British Journal of Radiology, vol. 78, pp. 3-7:

'Low doses in the mGy range [1 mGy = 0.1 rad, since 1 Gray = 1 Joule/kg = 100 rads] cause a dual effect on cellular DNA. One is a relatively low probability of DNA damage per energy deposition event and increases in proportion to the dose. At background exposures this damage to DNA is orders of magnitude lower than that from endogenous sources, such as reactive oxygen species. The other effect at comparable doses is adaptive protection against DNA damage from many, mainly endogenous, sources, depending on cell type, species and metabolism. Adaptive protection causes DNA damage prevention and repair and immune stimulation. It develops with a delay of hours, may last for days to months, decreases steadily at doses above about 100 mGy to 200 mGy and is not observed any more after acute exposures of more than about 500 mGy. Radiation-induced apoptosis and terminal cell differentiation also occur at higher doses and add to protection by reducing genomic instability and the number of mutated cells in tissues. At low doses reduction of damage from endogenous sources by adaptive protection may be equal to or outweigh radiogenic damage induction. Thus, the linear-no-threshold (LNT) hypothesis for cancer risk is scientifically unfounded and appears to be invalid in favour of a threshold or hormesis. This is consistent with data both from animal studies and human epidemiological observations on low-dose induced cancer. The LNT hypothesis should be abandoned and be replaced by a hypothesis that is scientifically justified and causes less unreasonable fear and unnecessary expenditure.'

Since about 1979, the role of protein P53 in repairing DNA breaks has been known. At body temperature (37 °C) each strand of DNA naturally breaks a couple of times a minute (200,000 times a day, with 5% being double strand breaks, i.e., the whole double helix snapping), presumably due mainly to Brownian motion of molecules in the warm water-based cell nucleus. Broken strands are then reattached by DNA repair mechanisms such as protein P53.

Many cancers are associated with P53 gluing back together the wrong strand ends of DNA. This occurs when several breaks happen at once and pieces of DNA drift around before being repaired the wrong way round, which either kills the cell, causes no obvious damage (if the section of DNA is junk), or sets off cancer by causing a proliferating defect. Other cancers obviously result when the P53 molecules themselves become defective, an example of how cancer risk can have a hereditary component.

The linear no-threshold (LNT) anti-civil defence dogma results from ignoring the vitally important effect of dose rate on cancer induction, which has been known and published for about 50 years in papers by Mole and a book by Loutit. The current dogma is falsely based merely on the total dose, thus ignoring the time-dependent ability of protein P53 and other cancer-prevention mechanisms to repair broken DNA segments. This is particularly the case for double strand breaks, where the whole double helix gets broken; the repair of single strand breaks is less time-dependent because there is no risk of the broken single strand being joined to the wrong end of a broken DNA segment. Repair is only successful in preventing cancer if the broken ends are repaired correctly before too many unrepaired breaks have accumulated in a short time; if too many double strand breaks occur quickly, segments can be incorrectly 'repaired' with breaks mismatched to the wrong segment ends, possibly inducing cancer if the resulting somatic cell can then undergo division successfully without apoptosis.

Official radiation dose-risk models are not well respected by everyone: the mathematical models used are not theoretically supported, mechanism-based equations, just assumed empirical laws (with little empirical support over the whole range of dose-risk data), such as the linear dose response model, threshold + linear, quadratic at low doses (switching to a linear response at high doses), and so on.

What there should be is a proper theory which itself produces the correct mathematical formula for dose response. This will allow the massive amount of evidence to be properly fitted to the law, and for anomalies to be resolved by theory and by further studies precisely where needed.

There is plenty of data for populations living at different radiation levels (cities at different altitudes, with different cosmic background exposures, etc.) showing that at, say, 0.1-1 cGy/year (0.1-1 rad/year), the effect of external radiation is to suppress the cancer risk.

Several papers by experts on this suggest that small amounts of external radiation just stimulate the P53 DNA repair mechanism to work faster, which over-compensates for the radiation and reduces the overall cancer risk.

This is also clear for low doses at Hiroshima and Nagasaki, although you have the problem of what 'relative biological effectiveness (RBE)' you use for neutrons (the neutron RBE factor is 20-25), as compared to gamma radiation. Traditionally, the dose equivalent was expressed in rem or cSv, where: 1 rem = 1 cSv = {RBE factor} * {dose in rads or cGy}. However, if we just talk about gamma effects doses in cGy, as the RERF does, we can keep using dose units of cGy (1 Gy = 1 Joule/kg absorbed energy, hence 1 cGy = 0.01 J/kg).
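
For example, a 10 cGy (10 rad) neutron dose with an RBE of 20 corresponds to a dose equivalent of 10 x 20 = 200 rem (200 cSv), whereas 10 cGy of gamma rays (RBE = 1) is simply 10 rem.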

Beyond a prompt dose (high dose rate, received over about 1 minute as initial nuclear radiation from a bomb) of about 20 cGy (20 rads), or a chronic dose rate of, say, 2 cGy (2 rads)/year, there is definitely an increased risk of cancer. The risks from intakes of alpha and high-energy beta emitters are probably dangerous at all doses because they are 'high' LET (linear energy transfer) radiations, depositing a lot of energy in the small amount of tissue that stops them. Gamma rays are officially 'low' LET radiation. Alpha and beta (internal emitter) doses follow the linear model more closely, since there is both evidence and a mechanism by which they can cause net damage even as individual particles.

However, the evidence and mechanism for gamma rays suggest that there is a reduction of cancer risks at low doses. The problem is how to reduce the data to a formula which has a mechanism to support it.

It would have to take account of the beneficial effect of low doses, while still supporting the net increased cancer risks from very high doses for people within 500 m of ground zero in Hiroshima and Nagasaki.

The linear dose response law is risk R = cD, where D is dose and c is a constant, but you can immediately see a problem with this equation: it isn't natural, because at any dose above D = 1/c the risk R becomes greater than 100%, which is nonsense.

So putting such a formula into a computer code will produce garbage: if you expose 1 person to a dose of 2/c rads, the number of people expected to die is 2!

The Health Physics community avoids this kind of thing by a methodology of calculating cancer risks from the 'collective dose' measured in person-rads, i.e., 100 person-rads is a dose of 1 rad each to a group of 100 people, 10 rads each to 10 people, or 100 rads to 1 person. This relies entirely on the linear response law. It is bogus when only a few people in a community receive high radiation doses, because it creates a fictitious average and so will obviously exaggerate the number of cancers.

What they should do is to correct the linear law R = cD to avoid overkill of a single person. The naturally occurring correct law is something like R = 1 - exp(-cD).

When cD is small, exp(-cD) = 1 - cD approximately, so this reduces to R = 1 - (1 - cD) = cD, which is the simple linear response. At high doses, it tends to the correct limit R = 1.
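
As a quick numerical check (a minimal sketch in Python; the value of c here is an arbitrary assumption for illustration, not a fitted constant), compare the two laws for the single-person overkill case and for the collective dose bookkeeping criticised above:

import math

c = 0.01  # illustrative constant (per rad); an assumption, not a fitted value

def risk_linear(D):
    # Naive linear law R = cD: exceeds 1 (i.e. 100%) for doses above 1/c.
    return c * D

def risk_saturating(D):
    # Corrected law R = 1 - exp(-cD): rises as cD at first, tends to 1.
    return 1.0 - math.exp(-c * D)

# One person exposed to 2/c rads (here 200 rads):
print(risk_linear(200))      # 2.0, i.e. '2 expected deaths' from 1 person: nonsense
print(risk_saturating(200))  # about 0.86, a sensible probability

# A collective dose of 100 person-rads, distributed two different ways:
print(100 * risk_linear(1))      # 1.0 expected cancer (1 rad each to 100 people)
print(1 * risk_saturating(100))  # about 0.63 (100 rads to 1 person): the linear
                                 # bookkeeping exaggerates this second case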

This still leaves the problem of how to take account of the health benefit (cancer risk suppression) from stimulation of P53 DNA repair at low doses.

The full correct law must not go to zero risk at zero dose, but to the natural cancer risk for the type of cancer in question when there is no radiation (which will be cancer caused mainly by DNA breaks due to random thermal (Brownian motion) agitation at body temperature, which is quite hot).

So the risk at zero dose should be R = a, where a is the natural risk of cancer for the relevant time interval. As the radiation dose increases slightly, P53 is stimulated to check and repair breaks more rapidly than the low dose rate of radiation can cause them, so R falls. It can't fall in a completely linear way, because the cancer risk can only fall from the natural level (at zero dose) towards zero; it can't become negative. So this constraint tells you it is an exponential fall in cancer risk with increasing dose:

R = a*exp(-bD).

Here a is an empirical constant equal to the natural cancer risk (from thermal damage to DNA at body temperature) over the time interval in which the radiation is received, assuming zero radiation exposure. When D = 0, the model gives R = a, and as D tends to infinity, R tends towards zero. So this natural model is the correct formula for the suppression of cancer risk due to radiation at low doses!

However, at much larger doses in a unit time interval (i.e., higher dose rates), the P53 repair mechanism becomes overloaded because the DNA is breaking faster than P53 can repair it, so the cancer risk then starts increasing again. We can include this by adding the regular non-threshold law (which is linear for low doses) to the beneficial exponential law:

R = a*exp(-bD) + 1 - exp(-cD)

This is the correct theoretical law for cancer risk due to radiation dose D received in a fixed time interval. For different types of radiation, the constant a is always the natural cancer risk (for zero radiation exposure) over the same interval of time that the radiation is received in, but the other constants, b and c, may take different values depending on the radiation type (internal high-LET alpha radiation, beta particles, or low-LET gamma rays). It is important to investigate the values of these constants so that the model can be used for both doses and dose rates.
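
As a check on the limiting behaviour, here is a minimal sketch in Python (the constants a, b and c below are arbitrary illustrative assumptions, not fitted values):

import math

def abcd_risk(D, a, b, c):
    # Hormetic suppression term a*exp(-bD) plus the saturating
    # induction term 1 - exp(-cD).
    return a * math.exp(-b * D) + 1.0 - math.exp(-c * D)

# Assumed constants: natural risk a = 20%, with b >> c so that repair
# stimulation dominates at low doses and overload dominates at high doses.
a, b, c = 0.20, 0.5, 0.01

print(abcd_risk(0, a, b, c))    # 0.2: the natural risk at zero dose
print(abcd_risk(5, a, b, c))    # about 0.065: below natural risk (hormesis)
print(abcd_risk(500, a, b, c))  # about 0.99: tends towards 1 at very high doses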

Experimental data can be readily used to determine all the constants, if we can get the full facts out of the Radiation Effects Research Foundation (RERF).
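
Given a table of dose against observed cancer incidence, the constants could be extracted by an ordinary least-squares fit. A sketch of how that might look (the dose and risk arrays below are synthetic placeholders shaped like the model, not real RERF figures):

import numpy as np
from scipy.optimize import curve_fit

def abcd_risk(D, a, b, c):
    return a * np.exp(-b * D) + 1.0 - np.exp(-c * D)

# Synthetic placeholder data, standing in for a real dose/incidence table:
dose = np.array([0.0, 1.0, 5.0, 20.0, 100.0, 300.0])   # dose, cGy
risk = np.array([0.20, 0.13, 0.07, 0.18, 0.65, 0.95])  # observed incidence

(a, b, c), cov = curve_fit(abcd_risk, dose, risk, p0=[0.2, 0.5, 0.01])
print("a = %.3f, b = %.3f per cGy, c = %.4f per cGy" % (a, b, c))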

I think that this simple mathematical model should replace the linear radiation-risk response law. It certainly looks more complicated than the linear law, but it contains the actual physical dynamics.

This law is general enough to be useful to analyse all of the radiation effects data in existence. I'm planning to do the analysis and publish graphical fits of the radiation response curves to all the data from Japan, the nuclear industry, populations exposed to different amounts of background, etc.



Above: this ABCD model looks good.

3 comments:

  1. From: Jack Reed
    To: Nigel B. Cook
    Sent: Monday, June 26, 2006 6:08 PM
    Subject: Blogs and memories


    Hi Nigel, I was amazed last night when I got around to visiting your blog site. Obviously, you're no beginner at studying nuclear weapons and technology. Much of the stuff there I was not particularly familiar with, but several items brought back old memories and faces. Thanks. Is your intent to put together a book?

    Your collection of info on the Japanese and Rongelap radiation survivors was interesting to me, having had some accidental exposures totalling around 64 R, with only 4 R on an official film badge. A few years ago United States support for the Japanese survivor studies was cut off, to my chagrin. Later I inferred the reason. One of their reports claimed that of 47,000 being tracked after 40-100 R exposures, they were now dying - at one-quarter the rate of their unexposed controls. My conclusion: politically unacceptable to the American anti-nukes. Now, later figures show 3/4 of my veteran colleagues of WW-II are dead, so we (40-100 R) have been "vaccinated". Neat!

    The other strong recollection was about that Teak cloud. Some twenty minutes after the shot, I spotted a white "smoke ring" in the NW sky, from the deck of the carrier USS Boxer. Back at my home office, I did the calculation of the closest point where it could be in the sun (01:00 Johnston Island time). I haven't easy access to my old files and forget the exact numbers, but that required a SE wind of around 1500 knots. My colleague, Hugh Church, then did the calculations for the hydrostatic winds around the hot, low pressure area under the direct sun, and came up with nearly the same wind direction and speed. We submitted these two reports to the J. Geophysical Research, and received prompt rejection because "everyone knew that the circulation at those ionospheric altitudes was dictated by the electric field." Some twenty years later, your Sir James Lighthill published a report based on neutral particle flow at such altitudes, because only a tiny fraction of the air was ionized and of no consequence. Hugh and I never got around to re-submitting our articles, but I did get to meet Lighthill at an Oxford meeting in 1985, and told him that chuckler. And finally, in 2003, I got to face a 93-year-old James Peoples, later-long-time-editor of Science Magazine and previously editor of the JGR. His response was "I am a chemist, and I had to rely on my advisors for such problems". More chuckles.

    Straight Ahead, Jack W. Reed
    And reed that boweth down to euery blaste. Chaucer, 1385

    I believe in getting into hot water. I think it keeps you clean. G.K.Chesterton


    ----- Original Message -----
    From: Nigel B. Cook
    To: Jack Reed
    Sent: Sunday, June 25, 2006 3:29 AM
    Subject: Blast yield estimates controversy QQQQ


    Hi Jack,

    Many thanks for these comments on using the EMP to give the time of detonation in 1955 Teapot tests. I've been reading the sanitized report ITR-1660-(SAN), "Operation Hardtack: Preliminary Report, Technical Summary of Military Effects Programs 1-9", DASA, Sandia Base, Albuquerque, New Mexico, 23 September 1959, sanitized version 23 February 1999.

    It contains a lot of blast data including a plot of all the Pacific data for very low overpressure blast, in Fig 6.13 on page 287, and also seems to refer to a report of yours concerning thermal radiation effects on page 453:

    J. W. Reed and others, "Thermal Radiation from Low-Yield Bursts", Project 8.8, Operation Hardtack, ITR-1675, January 1959, Air Force Cambridge Research Center, Laurence G. Hanscom Field, Bedford, Massachusetts, Secret Restricted Data.

    One issue with thermal radiation is that some reports say it varies with yield. Did you find any evidence of this? Glasstone 1962/64 suggests that the thermal yield fraction for a surface burst varies from 1/7 or 14% for Nevada tests to 1/4 or 25% for Pacific tests. Brode's 1968 RAND Corp report on computer simulations indicates that the thermal yield fraction theoretically increases with bomb yield.

    The fireball radius at the second thermal brightness maximum, when most radiation is emitted, scales up more rapidly with yield (radius proportional to W^0.4) than the blast shock that creates the nitrogen dioxide which shields some of the thermal radiation (radius proportional to W^(1/3)). Therefore, you would expect nitrogen dioxide in the shock wave to filter out more of the thermal radiation in a low yield burst than in a high yield burst. This accounts for the rise in predicted thermal yield with yield in air bursts.
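
    To put numbers on that: the ratio of the two radii scales as W^(0.4 - 1/3) = W^(1/15), so raising the yield from 1 kt to 10 Mt (a factor of 10^4) increases the ratio of fireball radius to shock radius by a factor of (10^4)^(1/15), i.e. about 1.9, so it nearly doubles.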

    In a surface burst, the cratering action does much the same thing, throwing up dirt over a radius that scales as the cube root of yield, so it should have a greater shielding effect on the fireball radiation at maximum brightness at low yields than at high ones. So the thermal fraction again should increase with increasing yield.

    It is very hard to see how energy is really used in a nuclear explosion. Glasstone 1950 claimed that only about 1% is used in cratering and ground shock, but that was based on a theory that air blast caused both effects. Brode's 1960 RAND computer simulation showed the figure was 15% because the case shock of the bomb is denser than the air shock and couples far better to the ground. C. E. Adams of USNRDL in a 1958 report calculated that 3% of the energy in the Redwing-Inca tower shot was used for melting the observed mass of fallout. Report DNA 5159F-1 (1979) on simulations of mushroom cloud rise (for the main U.S. Department of Defense DELFIC fallout computer) states on page 12:

    "On the basis of considerable experience with the cloud rise model we take the fraction of explosion energy used to heat air, soil and water to their initial temperatures to be 45% of the joule equivalent of the total yield, W."

    This is a massive amount of energy, essentially the entire blast yield. Hence about 15% of the energy of the surface burst nuclear explosion is used in ground shock and cratering, 45% in mushroom cloud rise, and over 3% in melting fallout. This accounts for 63% of the explosion energy without mentioning nuclear radiation, blast or thermal!

    But if you look at chapter 1 of Glasstone and Dolan, it says 50% is blast, 35% is thermal, 15% nuclear.

    So there is really a total lack of accounting for the energy, and I think Glasstone and Dolan is severely misleading. The blast wave at high overpressures is losing energy very rapidly by heating up the air it engulfs. Therefore, it is a complete fiction to quote 50% blast yield. They get it by subtracting the thermal and nuclear yield from 100%, but clearly the blast wave is continuously leaving warm air behind and thus losing wave energy.

    My calculations at http://glasstone.blogspot.com/2006/03/physical-understanding-of-blast-wave.html indicate that the blast wave begins with all the available fireball energy, 85% of the explosion energy, and then loses energy as the overpressure falls until it has only about 1.09% of the explosion energy when it has become basically a sonic wave at great distances. Therefore, the idea that the blast contains 50% of the energy is entirely misleading. It starts with 85% or so, and then loses energy by thermal radiation from the surface and from leaving behind hot air which rises to form the cloud, etc., until the energy is down to 1.09% of the total.

    According to page 347 of ITR-1660-(SAN), the first American measurement of high-altitude EMP was at the 2 kt Yucca test in 1958. The Teak shot EMP measurements failed because the shot went off directly overhead instead of 20 miles downrange due to a missile guidance error. They only measured the beta ionisation which affects radio/radar transmissions for hours, but it is the brief high frequency EMP which causes physical damage to equipment: "Shot Yucca ... [EMP] field strength at Kusaie indicated that deflection at Wotho would have been some five times the scope limits... The wave form was radically different from that expected. The initial pulse was positive, instead of the usual negative. The signal consisted mostly of high frequencies of the order of 4 Mc, instead of the primary lower-frequency component normally received ..."

    Best wishes,
    Nigel


    ----- Original Message -----
    From: Jack Reed
    To: Nigel Cook
    Sent: Sunday, June 25, 2006 4:51 AM
    Subject: Re: Electrical effects at the Nevada Control Point? QQQQ



    Hi Nigel,

    >I cannot understand how the EMP effect was kept secret until 1961.<
    I didn't ever realize that it was, particularly after the high altitude Teak shot in 1958 disrupted communications over most of the Pacific. I was somewhat familiar with it and used the pulse in 1955 to give our distant (200 km) microbarograph operators a shot time on their paper recordings. We simply hung a hundred meters or so of copper wire to catch it, for the vast Nevada areas didn't have very reliable telephone contacts and we hadn't gotten an off-site radio system yet. Before that, in 1953 when I read in Scientific American about ham operators getting radio whistlers from lightning, I suggested in a report (that was classified at the time) that USSR nuclear tests could be easily detected at the southern conjugate in the Indian Ocean. I never got a response from the report, but 3-4 years later a navy man told me they were doing such monitoring with a pair of ships on site rotation from Western Australia. Again, I should have applied for a patent. That leads to another story.

    Also, ca 1953-4, a Scientific American article described extremely bright flashes in laboratory shock tubes filled with argon. As a long-time member of the 188th Fighter Squadron, New Mexico Air National Guard, I published another classified report, "An Aircraft Psychological Defense Device", proposing flare-sized clear plastic shock tubes filled with argon to pop out when an enemy fighter got on your tail. The flash should stun him long enough to make an escape maneuver. The Air Force had different ideas, however, as such dog-fights became obsolete, and used argon shock devices to light up Viet Nam in night-time surveillance.

    Anyway, I wasn't involved in general instrument maintenance in the immediate test areas and didn't pay much attention to their problems nor read all the various reports that came out. My concern was weather dependences - distant airblast and fallout - and I never was much interested in electricity.

    Straight Ahead, Jack W. Reed
    And reed that boweth down to euery blaste. Chaucer, 1385

    I believe in getting into hot water. I think it keeps you clean. G.K.Chesterton


    ----- Original Message -----
    From: Nigel Cook
    To: Jack Reed
    Sent: Wednesday, June 21, 2006 4:04 AM
    Subject: Electrical effects at the Nevada Control Point? QQQQ


    Hi Jack,

    Thank you very much for setting the facts straight on the sound being associated with the first peak, taking a few milliseconds. All of the film tracks of blast waves sound more like hurricane winds than bangs.

    Since you were at the control point, I wonder if - assuming it is not secret - you recollect any EMP effects. I cannot understand how the EMP effect was kept secret until 1961. ...

  2. Anonymous, 2:06 pm

    On the naturalness of nuclear radiation and nuclear fission energy see comment at http://riofriospacetime.blogspot.com/2006/09/faint-young-sun.html
    (9 Oct 2006):


    nigel said...
    Hi Louise,

    Thanks for this interesting paradox. Maybe the answer is that life evolved more recently and the claims for evidence of it at 3.4 to 4 thousand million years ago are suspect. "Fossils" claimed to come from that era have been exposed as inorganic minerals. See

    http://www.earth.ox.ac.uk/research/geobiology/geobiology.htm

    In addition, the earth's core was hotter in the past because when you think about it, it has been cooling. A lot of the heat of the core is from radioactivity. Although U238 has only undergone one half-life in the earth's life, many supernova nuclides with short half-lives which are now extinct were around billions of years ago. U235 has a half-life of 704,000,000 years and a present abundance of 0.71% in natural uranium. So 4.5 billion years ago, its abundance would have been 0.71% x 2^(4.5/0.704) = 60%.
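
    (Checking that arithmetic: 2^(4.5/0.704) = 2^6.4, which is about 84, and 0.71% x 84 is about 60%. Strictly this neglects the decay of U238 itself, whose abundance was about 2^(4.5/4.47), i.e. roughly twice, as high then; allowing for that, U235 would have made up about 60/(60 + 2 x 99.3) = 23% of all uranium, still vastly more fissile than today's 0.71%.)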

    This is approaching nuclear weapons grade, so uranium ore seams would have been STEAMING HOT for many millions of years, regardless of the sun's output.

    The Oklo nuclear reactors are proof that this effect is real, see http://www.ocrwm.doe.gov/factsheets/doeymp0010.shtml

    "Fifteen natural fission reactors have been found in three different ore deposits at the Oklo mine in Gabon, West Africa. ... Calculating back to 1.7 billion years ago—the age of the deposits in Gabon—scientists realized that the U-235 there comprised about three percent of the total uranium. ... Once the natural reactors burned themselves out, the highly radioactive waste they generated was held in place deep under Oklo by the granite, sandstone, and clays surrounding the reactors’ areas. Plutonium has moved less than 10 feet from where it was formed almost two billion years ago."

    http://geology.about.com/od/geophysics/a/aaoklo.htm estimates the power

    "About 1.7 billion years ago, to be more precise, a natural deposit of uranium ore was radioactive enough to generate about 100 kilowatts of heat, off and on, for more than a million years. ...

    "Why was uranium so much more radioactive then? That is a deep question that points to the very origin of the solar system. The formation of the planets (and the Sun) from an original cloud of dust and gas apparently was triggered by the explosion of a nearby supernova. Only a supernova can manufacture elements heavier than iron, including uranium. With a half-life of 700 million years, U-235 started out making up nearly half of all uranium when the solar system began some 4560 million years ago. Many shorter-lived radioisotopes that existed in the beginning, like aluminum-26, have become extinct. We know of their former existence by the presence of their decay products in ancient meteorites—nuclear fossils."

    So early evolution could have easily occurred at a uranium deposit with the heat from nuclear fission. This would be what Darwin referred to as a "warm little pond" in his 1871 letter to Joseph Hooker:

    "It is often said that all the conditions for the first production of a living organism are present, which could ever have been present. But if (and Oh! what a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, etc., present, that a protein compound was chemically formed ready to undergo still more complex changes, at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."

    So I don't think that even if the sun was dim all that time in the past, it matters very much for evolution, since there would be hot spots from uranium. Obviously there would also be volcanic eruptions and hot geysers, but generally these would be too sporadic and not constant enough to allow evolution to begin.

    Utilising heat from fission would be more likely to start life than solar energy, which is of course interrupted by the earth's spin. Chemical reactions starting due to solar energy in daylight would "come undone" at night, and so sunlight would be far less likely to trigger evolution than heat from a more constant, reliable source such as nuclear fission power.

    Ice glaciers do flow slowly and produce some effects which in the poor geological record from billions of years ago would not be too dissimilar from effects of water.

    Just some ideas!

    Kind regards,
    Nigel

    9:55 PM

  3. Copy of a comment to

    http://riofriospacetime.blogspot.com/2006/11/direct-route.html

    Hi Kea,

    Lubos Motl is right about the climate change manure because what the doom-sayers of climate change forget is that we're running out of fossil fuels anyhow!

    Before the world is wrecked completely by global warming, we'll have run out of oil, coal, gas (North Sea oil is far more expensive than Arabic supplies because of the costs of oil rigs in the North Sea, and gas - "vapour" I suppose to USA readers to avoid confusion with gas(oline) - is seriously more expensive now that the market is opened up to Europe by a new gas pipeline, and if I buy a new home I'm getting all-electric heating, not the traditional piped gas heating currently used in most of the UK).

    To combat global warming, go nuclear. Nuclear is clean, safe, and it is EVEN ECONOMICAL if you lower radioactive pollution horseshit propaganda and shoot crackpots who claim radiation is lethal.

    Those people don't understand background radiation, or the effect of higher altitudes, air travel, etc on radiation exposure.

    They think "natural" radiation is safe and "artificial" radiation from nuclear power is totally different.

    The Health Physicists who work in the nuclear industry are a load of gormless, feeble, patronising fools who couldn't explain anything to anybody without making it sound like condescending pro-nuclear propaganda, which is why the situation continues.

    They have no idea that physics and maths are well known, and that people can by and large understand radiation. They perpetuate the myths.

    The FIRST thing physicists in the f***ing nuclear industry should put on their posters is the fact that on the Moon the natural radiation is 50 times higher than on earth, 1 mR/hr on the Moon compared to 0.02 mR/hr on Earth (the earth's atmosphere shields most of the background radiation, which is 99% protons and alpha particles).

    Then they should give the ACTUAL (not relative!) natural dose rates in different cities with different bedrocks and altitudes above sea level! The thorium rich beach sands of Brazil and India are more radioactive than 90% of the "nuclear waste" from the nuclear industry!

    It is a wide range!!! Then, finally, they should show the lies about low level radiation by plotting the mortality in Hiroshima and Nagasaki and other long-term reliable studies as a function of dose.

    It is true that massive doses severely increase leukemia rates, but it is a lie that small doses do so in proportion, or that other cancers are increased in the same way. Leukemia is a special problem, because the bone marrow is very susceptible to ionising radiation.

    Particularly, high-LET (linear energy transfer) radiations like alpha and beta and also soft x-rays (which cause ionisation by the photoelectric effect) inside the body increase cancer risks, NOT low-LET radiation like gamma rays (unless the dose is really massive).

    People should be aware that the more penetrating the radiation is, the less of it gets stopped by soft tissue in the body, so the LOWER the absorbed dose per unit of fluence!

    The real dangers from low level radiation are from ingesting or inhaling soluble alpha and beta emitters, like radium and strontium-90 respectively, which get deposited in bone and can cause leukemia. Radon gas from the decay of radium is also a massive natural hazard, killing far more people than all the hundreds of megatons of 1950s nuclear tests or Chernobyl.

    Chernobyl showed that iodine-131 causes a short term problem (half-life 8 days) after an explosion, because it gets concentrated in milk and then in kids' thyroid glands and can cause lumps (mostly benign). The answer is simple: for the few weeks while milk is contaminated (after five half-lives, about 40 days, only some 3% of the iodine-131 remains), either put it through an ion-exchanger to remove the iodine-131, or switch to using powdered milk, or simply put the cattle in winter barns eating winter feed like hay so that they don't eat contaminated grass! Problem sorted!

    See http://glasstone.blogspot.com/2006/04/fallout-prediction-and-common-sense-in.html for more info on this!

    Regarding the Hiroshima and Nagasaki long-term radiation effects cover-up, see links on http://www.rerf.or.jp/top/introe.htm:

    Fewer than 1% of the survivors studied died of cancer caused by radiation!!!!!

    The maximum leukemia rate occurred in 1952 and ever since has been declining. There were no genetic effects above the normal rate in offspring of even highly irradiated survivors and cancer risks were carefully studied:

    'The Life Span Study (LSS) population consists of about 120,000 persons who were selected on the basis of data from the 1950 Japanese National Census. This population includes ... atomic-bomb survivors living in Hiroshima or Nagasaki and nonexposed controls. ... all persons in the Master Sample who were located less than 2,500 meters from the hypocenter ATB were included in the LSS sample, with about 28,000 persons exposed at less than 2,000 meters serving as the core. Equal numbers of persons who had been located 2,500-9,999 meters from hypocenter ... were selected to match the core group by age and sex. ... As of 1995, more than 50% of LSS cohort members are still alive. As of the end of 1990, almost 38,000 deaths have occurred in this group, including about 8,000 cancer deaths among the 87,000 survivors. Approximately 430 of these cancer deaths are estimated to be attributable to radiation.'


    Sorry to go on, but all this environmental crackpottery just drives me nuts.

    Best,
    nc
