Friday, June 30, 2006

Cancer suppression by radiation, a censored physical mechanism



(Inspired by email correspondence with a survivor of 100 nuclear tests, Jack Reed)

This model is based on L. E. Feinendegen's 2005 review of the mechanism behind the Hiroshima and Nagasaki data at low doses, 'Evidence for beneficial low level radiation effects and radiation hormesis', British Journal of Radiology, vol. 78, pp. 3-7:

'Low doses in the mGy range [1 mGy = 0.1 rad, since 1 Gray = 1 Joule/kg = 100 rads] cause a dual effect on cellular DNA. One is a relatively low probability of DNA damage per energy deposition event and increases in proportion to the dose. At background exposures this damage to DNA is orders of magnitude lower than that from endogenous sources, such as reactive oxygen species. The other effect at comparable doses is adaptive protection against DNA damage from many, mainly endogenous, sources, depending on cell type, species and metabolism. Adaptive protection causes DNA damage prevention and repair and immune stimulation. It develops with a delay of hours, may last for days to months, decreases steadily at doses above about 100 mGy to 200 mGy and is not observed any more after acute exposures of more than about 500 mGy. Radiation-induced apoptosis and terminal cell differentiation also occur at higher doses and add to protection by reducing genomic instability and the number of mutated cells in tissues. At low doses reduction of damage from endogenous sources by adaptive protection may be equal to or outweigh radiogenic damage induction. Thus, the linear-no-threshold (LNT) hypothesis for cancer risk is scientifically unfounded and appears to be invalid in favour of a threshold or hormesis. This is consistent with data both from animal studies and human epidemiological observations on low-dose induced cancer. The LNT hypothesis should be abandoned and be replaced by a hypothesis that is scientifically justified and causes less unreasonable fear and unnecessary expenditure.'

Since about 1979, the role of the protein P53 in repairing DNA breaks has been known. At body temperature (37 °C) each strand of DNA naturally breaks a couple of times a minute (200,000 times a day, with 5% being double-strand breaks, i.e., the whole double helix snapping), presumably due mainly to Brownian motion of molecules in the warm water-based cell nucleus. Broken strands are then reattached by DNA repair mechanisms such as protein P53.

Many cancers are associated with P53 gluing 'back together' the wrong strand ends of DNA. This occurs when several breaks happen at once, and pieces of DNA drift around before being repaired the wrong way around, which either kills the cell, causes no obvious damage (if the section of DNA is junk), or sets off cancer by causing a proliferating defect. Other cancers result when the P53 molecules themselves become defective, which is one way cancer risk can have a hereditary component.

The linear no-threshold (LNT) anti-civil defence dogma results from ignoring the vitally important effect of dose rate on cancer induction, which has been known and published in papers by Mole and a book by Loutit for about 50 years. The current dogma is falsely based merely on the total dose, thus ignoring the time-dependent ability of protein P53 and other cancer-prevention mechanisms to repair broken DNA segments. This is particularly the case for double strand breaks, where the whole double helix gets broken; the repair of single strand breaks is less time-dependent because there is no risk of the broken single strand being joined to the wrong end of a broken DNA segment. Repair only prevents cancer if the broken ends are repaired correctly before too many unrepaired breaks accumulate in a short time; if too many double strand breaks occur quickly, segments can be incorrectly 'repaired' with double strand breaks being mismatched to the wrong segment ends, possibly inducing cancer if the resulting somatic cell can then undergo division successfully without apoptosis.

Official radiation dose-risk models are not universally respected: the mathematical models used are not mechanism-based, theoretically supported equations, just assumed empirical laws (with little empirical support over the whole range of dose-risk data), such as the linear dose response model, threshold + linear, quadratic at low doses (switching to a linear response at high doses), and so on.

What there should be is a proper theory which itself produces the correct mathematical formula for dose response. This will allow the massive amount of evidence to be properly fitted to the law, and for anomalies to be resolved by theory and by further studies precisely where needed.

There is plenty of data for populations living at different radiation levels (cities at different altitudes, with different cosmic background exposures, etc.) showing that at, say, 0.1-1 cGy/year (0.1-1 rad/year) the effect of external radiation is to suppress the cancer risk.

Several papers by experts on this suggest that small amounts of external radiation just stimulate the P53 DNA repair mechanism to work faster, which over-compensates for the radiation and reduces the overall cancer risk.

This is also clear for low doses at Hiroshima and Nagasaki, although you have the problem of what 'relative biological effectiveness (RBE)' you use for neutrons (the neutron RBE factor is 20-25), as compared to gamma radiation. Traditionally, the dose equivalent was expressed in rem or cSv, where: 1 rem = 1 cSv = {RBE factor} * {dose in rads or cGy}. However, if we just talk about gamma effects doses in cGy, as the RERF does, we can keep using dose units of cGy (1 Gy = 1 Joule/kg absorbed energy, hence 1 cGy = 0.01 J/kg).
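The dose-equivalent arithmetic above can be sketched in a couple of lines; the neutron RBE of 20 used here is just an illustrative value from the 20-25 range quoted:

```python
# Dose equivalent in rem (= cSv) from absorbed dose in rads (= cGy):
# 1 rem = {RBE factor} * {dose in rads}.
# RBE = 1 for gamma rays; RBE = 20 assumed here for neutrons (illustrative).
def dose_equivalent_rem(dose_rads, rbe):
    return rbe * dose_rads

print(dose_equivalent_rem(0.5, 20))  # 0.5 rad of neutrons -> 10.0 rem
print(dose_equivalent_rem(0.5, 1))   # 0.5 rad of gamma rays -> 0.5 rem
```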

Beyond a prompt dose (high dose rate, received over about 1 minute as initial nuclear radiation from a bomb) of about 20 cGy (20 rads), or a chronic dose rate of perhaps 2 cGy/year (2 rads/year), there is definitely an increased risk of cancer. The risks from intake of alpha and high energy beta emitters are probably dangerous at all doses because they are 'high' LET (linear energy transfer) radiation, depositing a lot of energy in the small amount of tissue that stops them. Gamma rays are officially 'low' LET radiation. Alpha and beta (internal emitter) doses follow the linear model more closely, since there is both evidence and a mechanism by which they can cause net damage even as individual particles.

However, the evidence and mechanism for gamma rays suggest that there is a reduction of cancer risks at low doses. The problem is how to reduce the data to a formula which has a mechanism to support it.

It would have to take account of the beneficial effect of low doses, while still supporting the net increased cancer risks from very high doses for people within 500 m of ground zero in Hiroshima and Nagasaki.

The linear dose response law is risk R = cD, where D is dose and c is a constant, but you can immediately see a problem in this equation: it isn't natural, because at any dose above 1/c the risk R becomes greater than 1, i.e. greater than 100%, which is nonsense.

So putting such a formula into a computer code will produce garbage: if you expose 1 person to a dose of 2/c rads, the number of people expected to die is 2 persons!

The Health Physics community avoids this kind of thing by calculating cancer risks from the 'collective dose' measured in person-rads, i.e., 100 person-rads is a dose of 1 rad each to a group of 100 people, or 10 rads each to 10 people, or 100 rads to 1 person. This relies entirely on the linear response law. It is bogus when only a few people out of a community receive high radiation doses, because it creates a fictitious average and so will obviously exaggerate the number of cancers.

What they should do is correct the linear law R = cD to avoid 'overkill' of a single person. The natural correction is a law of the form R = 1 - exp(-cD).

When cD is small, exp(-cD) ≈ 1 - cD, so this reduces to R = 1 - (1 - cD) = cD, which is the simple linear response. At high doses, it tends to the correct limit R = 1.
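Both limits, and the way the corrected law avoids 'overkilling' a single person in the person-rads example above, can be checked numerically; the coefficient c used here is a hypothetical value for illustration only:

```python
import math

c = 0.01  # hypothetical linear risk coefficient per rad, illustration only

def risk_linear(D):
    return c * D                  # simple linear law R = cD

def risk_saturating(D):
    return 1 - math.exp(-c * D)   # corrected law R = 1 - exp(-cD)

# Low-dose limit: the two laws agree to within 1% when cD is small.
D = 0.5
assert abs(risk_saturating(D) - risk_linear(D)) / risk_linear(D) < 0.01

# High-dose behaviour: the linear law 'overkills' one person,
# the corrected law can never exceed a risk of 1.
print(risk_linear(300))      # 3.0 -> a 300% 'risk' for one person: nonsense
print(risk_saturating(300))  # about 0.95, correctly below 1
```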

This still leaves the problem of how to take account of the health benefit (cancer risk suppression) from stimulation of P53 DNA repair at low doses.

The full correct law must not go to zero risk at zero dose, but to the natural cancer risk for the type of cancer in question when there is no radiation (which will be cancer caused mainly by DNA breaks due to random Brownian motion thermal agitation at body temperature, which is quite hot).

So the risk at zero dose should be R = a, where a is the natural risk of cancer for the relevant time interval. As the radiation dose increases slightly, P53 is stimulated to check and repair breaks more rapidly than the low dose rate of radiation can cause them, so the risk R falls. It can't fall in a completely linear way, because the cancer risk can only fall from the natural level (at zero dose rate) toward zero; it can't become negative. So this constraint tells you it is an exponential fall in cancer risk with increasing dose:

R = a*exp(-bD).

where a is an empirical constant equal to the natural cancer risk (due to body temperature acting on DNA) for zero radiation exposure over the time interval in which the radiation is received. When D = 0, it follows from the model that R = a, and as D tends to infinity, R tends towards zero. So this natural model is the correct formula for the suppression of cancer risk by radiation at low doses!
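This hormesis term can be sketched directly, with a and b as hypothetical placeholder values (the real values must be fitted to data):

```python
import math

a = 0.25  # assumed natural cancer risk over the interval (hypothetical)
b = 0.05  # assumed suppression constant per rad (hypothetical)

def risk_hormesis(D):
    return a * math.exp(-b * D)   # R = a*exp(-bD)

assert risk_hormesis(0) == a      # zero dose: natural cancer risk
assert risk_hormesis(10) < a      # low dose: risk suppressed below natural level
```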

However, at much larger doses in a unit time interval (i.e., higher dose rates), the P53 repair mechanism becomes overloaded because the DNA is breaking faster than P53 can repair it, so the cancer risk then starts increasing again. We can include this by adding the regular non-threshold law (which is linear for low doses) to the beneficial exponential law:

R = a*exp(-bD) + c*[1 - exp(-dD)],

with constants a, b, c and d.
This is the correct theoretical law for the cancer risk due to a radiation dose D received in a fixed time interval. For different types of radiation, the constant a is always the natural cancer risk (for zero radiation exposure) over the same interval of time that the radiation is received in, but the other constants may take different values depending on the radiation type (internal alpha high-LET radiation, beta, or low-LET gamma rays). It is important to investigate the values of these constants so that the model can be used for both doses and dose rates.
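A minimal sketch of the combined model, taking the combined law to be the hormetic term plus a saturating induction term, R = a*exp(-bD) + c*[1 - exp(-dD)]; all four constants here are hypothetical placeholders pending fits to real data:

```python
import math

# Hypothetical constants for illustration only; real values must be fitted.
a, b, c, d = 0.25, 0.05, 1.0, 0.01

def risk(D):
    """Hormetic suppression plus saturating induction of cancer risk."""
    return a * math.exp(-b * D) + c * (1 - math.exp(-d * D))

assert abs(risk(0) - a) < 1e-12   # zero dose: natural cancer risk a
assert risk(5) < risk(0)          # low dose: net suppression (hormesis)
assert risk(500) > risk(0)        # high dose: net increased risk
```

With these placeholder values the curve dips below the natural risk at low doses and rises toward a + c at very high doses, which is the qualitative shape the text describes.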

Experimental data can be readily used to determine all the constants, if we can get the full facts out of the Radiation Effects Research Foundation (RERF).

I think that this simple mathematical model should replace the linear radiation-risk response law. It certainly looks more complicated, but it contains the factual physical dynamics.

This law is general enough to be useful to analyse all of the radiation effects data in existence. I'm planning to do the analysis and publish graphical fits of the radiation response curves to all the data from Japan, the nuclear industry, populations exposed to different amounts of background, etc.



Above: this ABCD model looks good.
