### Secret/limited distribution data on survival in Hiroshima and Nagasaki is finally published

*ABOVE:* notice the important words: "RESTRICTED: This document contains information affecting the national defense of the United States within the meaning of the Espionage Act ...". This document was never published. The cut-down "summary" by Oughterson and Warren, published as a book (*Medical Effects of the Atomic Bomb in Japan*, McGraw-Hill, 1956), obfuscated the results by omitting the vital survival data as a function of building type, and the catalogue of photos of buildings in which people survived. That catalogue was *vitally needed* to counter-blast the outright evil anti-civil defense lies by CND that Hiroshima was entirely vaporized by thermal flash radiation, as allegedly "proved" by the effects on gasoline-burned civilian accident victims, and by the selective use of photos of the cinders of wooden houses burned down in the firestorm, which developed 30 minutes to 3 hours later, long *after* most of the survivors had *evacuated the buildings*. The book also omitted the other vital data needed for civil defense. The few vague comments about the data later made on page 631, in the final chapter of the 1962/64 *Effects of Nuclear Weapons* (after White sent Glasstone a summary), *lacked credible impact, since they contained no references or detailed supporting data: no graphs, no data plots, and no photos of buildings in the firestorm area with their human survival data and overpressures.* So even when secret documents like this are declassified (a pertinent example being the SECRET May 1947 U.S. Strategic Bombing Survey FULL report on survivor surveys concerning the cause of the Hiroshima firestorm, which was the blast simply overturning charcoal breakfast-cooking braziers inside overcrowded wooden houses filled with paper screens and bamboo furnishings, in a hot August city which had seen no rain for three weeks, NOT thermal radiation ignition), they have limited distribution, and end up being easily ignored by the same old ranting anti-civil defense delusional biased politicians masquerading as educated scientists who permitted Hitler to get away with murder in the 1930s: exaggerating high explosive and gas weapon effects into "end of the world" scare-mongering, and ignoring the efficiency of civil defense countermeasures, thereby destroying the credibility of deterring coercive threats from dictators and thugs.

The important survival data was collected from August 1945 to 1951. Apart from leukemia risks, which were double the natural risk in the people surviving the largest survivable radiation doses, long-term effects proved statistically insignificant compared to background cancer rates: "the number of cancers (from 1950 to 2000 for leukemia deaths and from 1958 to 1998 for solid cancer occurrence) in the Life Span Study (LSS) A-bomb survivors in relation to radiation dose ... Overall, nearly half of leukemia deaths and about 10% of solid cancers are attributable to radiation exposure." Out of 49,204 survivors monitored for leukemia, there were 204 leukemia deaths, of which 94 were above the natural leukemia rate; and out of 44,635 survivors monitored for non-leukemia or "solid" cancer occurrence, there were 7,851 cancers, of which only 848 were "excess" cases attributed to radiation. So the bottom line is that long-term cancer risks were small or, in the case of leukemia, roughly equal to natural cancer risks (Table below).
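The attributable fractions quoted above follow directly from the raw counts given in the text. A minimal arithmetic check (all numbers come from the paragraph above; nothing here is an external dataset):

```python
# Excess-cancer arithmetic from the Life Span Study figures quoted above.

# Leukemia deaths among 49,204 monitored survivors:
leukemia_total = 204
leukemia_excess = 94          # above the natural leukemia rate

# Solid-cancer occurrences among 44,635 monitored survivors:
solid_total = 7851
solid_excess = 848            # attributed to radiation exposure

leukemia_fraction = leukemia_excess / leukemia_total   # ~0.46, "nearly half"
solid_fraction = solid_excess / solid_total            # ~0.11, "about 10%"

print(f"Leukemia deaths attributable to radiation: {leukemia_fraction:.0%}")
print(f"Solid cancers attributable to radiation:   {solid_fraction:.0%}")
```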

The short-term survival data appeared in the Joint Commission report, volume 6, document NP-3041: Ashley W. Oughterson, et al., *Medical Effects of Atomic Bombs*, U.S. Army Institute of Pathology.

The Report of the Joint Commission for the Investigation of the Effects of the Atomic Bomb in Japan; Volume VI.

by Ashley W. Oughterson, Henry L. Barnett, George V. LeRoy, Jack D. Rosenbaum, Averill A. Liebow, B. Aubrey Schneider, and E. Cuyler Hammond.

NP-3041

Office of Air Surgeon

July 6, 1951

*Above:* the Hiroshima and Nagasaki data-set formed the nucleus of a database that was expanded to include 35,099 case histories (24,044 at Hiroshima and 11,055 at Nagasaki), leading to the Dikewood Corporation's correlation of peak overpressure with mortality from all causes, shown above (source: L. Wayne Davis, *Prediction of Urban Casualties and the Medical Load from a High-Yield Nuclear Burst*, Dikewood paper DC-P-1060, secret or "limited distribution"). Contrary to the pro-terrorist propaganda falsehoods circulated by Moscow and its evil Pugwash anti-civil defense propaganda of "consensus of expert scientific opinion (no facts needed)", supported by the hypocritical anti-civil-defense-in-the-West Nobel Peace Laureate Joseph Rotblat, the time-to-death was investigated: 99% of the deaths in Hiroshima occurred within about 85 days. (The white blood cell count is most depressed at 30 days after exposure to radiation, which is the time of maximum radiation mortality; blast and thermal effects, and the very high radiation doses causing the gastro-intestinal or cerebral radiation syndromes, cause death more rapidly.)

(Note: the pages above need proof-reading and correcting, including rewrites. I will do this ASAP.)

*Off-topic.* An interesting and funny (maybe) quotation about the criticisms innovators receive, and the relatively poor backing from "independent referees" when they are contracted to "sort out" a dispute, from a 1987 interview with magnetic dipole EMP discoverer Dr Conrad Longmire, who died in 2010:

Longmire:

Nothing that I was involved in. They were, DNA did hire them way back in the 1970s, early seventies, there was a fellow at the RAND Corporation, this is after the RDA physics group left. His name was Cullen Crane, who, I don't know if you've ever heard of him—well, anyway, this fellow was saying that EMP is a hoax. These guys are either crazy or they're doing it to, you know, perpetuate their salaries. And so the Jason group got tasked by DNA to look into this. Now, in this case, in my opinion, the Jason group didn't do a very good job, because instead of reading the reports and trying to settle the argument, they started out from scratch and first did their own version of EMP, and at least, I didn't think that was necessary at the time. But I don't know, it might have been useful to DNA.

Aaserud:

Of course it's more interesting to do one's own work.

Longmire:

Yes, right. Also, I might say, if they have any faults at all, one of them is that they're not very good as historians. They do not, you know, when they begin to look into something, they don't go back and make sure that they've read all the earlier references and stuff like that. But you don't expect physicists to be your formally good historians.

Longmire was spot on. They don't know history because they don't care much about history, thinking physics a separate subject from boring old history. Which is why they keep making the same mistakes as foolish predecessors: using "gut instinct/intuition" to dismiss new ideas which contradict existing interpretations, in place of unbiased analysis of *all* the options. Intuition is useful for objective and constructive work, but is dismally stupid when used to "justify" ignoring a new idea which is having a hard time just because it is new. Intuition is easily confused with herd instinct.

I'm going to include a concluding "crying over spilt milk" section in my paper, on what Newton could and should have done with Fatio's gravity mechanism circa 1690 A.D. Newton could then have predicted the acceleration of the universe by applying his 2nd and 3rd laws of motion, plus other Newtonian physics insights, to improve and rigorously evaluate the gravity mechanism (if he had known *G*, which of course he didn't really know or even name, since he used Euclidean-type geometric analysis to prove everything in the Principia; the symbol came from Laplace long after). Of course, we're still stuck in a historical loop where any mention of the facts is dismissed by saying that Maxwell and Kelvin disproved a gravity mechanism, by proving that *onshell* matter like gas would slow down planets and heat them up, etc. Clearly this is not applicable to experimentally validated Casimir *off-shell* bosonic radiations, for example; and in any case, quantum field theory's well-validated interaction picture version of quantum mechanics (with wavefunctions for paths having amplitudes exp(iS), representing different interaction paths) suggests that fundamental interactions are mediated by off-shell field quanta. The Maxwell/Kelvin and other "disproofs" of graviton exchange are wrong because they implicitly assume gravitons are onshell, an assumption which, if true, would also destroy other theories. It's not true: e.g., the Casimir zero point electromagnetic radiation which pushes metal plates together does not cause the earth to slow down in its orbit or speed up.

The use of a disproved and fatally flawed classical "no-go" theorem to "disprove" a new theory is exactly what holds up physics for centuries. E.g., Rutherford objected at first to Bohr's atom on the basis that the electron orbiting the nucleus would have centripetal acceleration, causing it to radiate continuously and disappear within a fraction of a second. We now know that the electron doesn't have that kind of classical Coulomb-law attraction to the nucleus, because the field isn't classical but quantum, i.e. discrete field quanta interactions occur. This is validated by "quantum tunnelling", where you can statistically get a particle through a classically-forbidden "Coulomb barrier" by chance: instead of a constant "barrier" there is a stream of randomly timed field quanta (like bullets in this respect), and there is always some chance of getting through by fluke. You don't need a fancier explanation than that, because the available mathematics (which gets into trouble with Haag's theorem) doesn't prove a fancier explanation. The simplest theory which fits the experimental facts is adequate, and preferable to everyone sensible.

Path integrals using a real-only amplitude, cos(S), in place of the complex exp(iS), are also a topic of my paper. The exp(iS) factor comes from Schroedinger's time-dependent equation, which contains i, the complex number, because Schroedinger had read the idea in Weyl's paper on a gauge theory of quantum gravity, which had been inspired by Hilbert's and Einstein's Lagrangian for general relativity. London showed that Weyl's complex exponential phase factor can be applied to atoms directly, but Schroedinger had already taken the idea to mind. The "stationary" states of an electron are then the real solutions to an equation that also contains a complex conjugate. E.g., Euler's formula, exp(iS) = cos(S) + i sin(S), gives periodic, real, discrete solutions, exp(i0) = 1 for instance, which is useful for modelling discrete energy levels in the atom.

However, it's just a model. Does the electron exist only in "imaginary space" on an Argand diagram when it jumps between states? I doubt it. The problem is severe because Bell's theorem - used with experiments to "discredit" hidden variables in QFT, and thus to "credit" ESP-fairy entanglement "interpretations" instead - is based on 1st quantization Schroedinger wavefunction analysis as a foundational assumption. If you drop the complex plane, you don't lose an angle on an Argand diagram, because no such angle exists: the real world is the resultant arrow along the path of least action, i.e. S = 0, and exp(i·0) = 1, so the least-action "sum over histories" resultant arrow direction lies in the real plane. The imaginary plane is not just imaginary but unnecessary, because replacing exp(iS) with its real component from Euler's formula, cos(S), does all the work we need it to do in the real physics of the path integral (see Feynman's 1985 book "QED" for this physics done with arrows on graphs, without any equations): all you're calculating from path integrals are scalars for least-action magnitudes (resultant arrow lengths, *not* resultant arrow directions, since, as said, the resultant arrow direction is horizontal, in the real plane; you don't get a cross-section of 10i barns!). As Feynman says, Schroedinger's equation came from the mind of Schroedinger (actually due to Weyl's idea), not from experiment. So why not replace exp(iS) with cos(S) for phase amplitudes? It gets rid of complex Fock and Hilbert spaces and of Haag's interaction picture problem, which is due to renormalization problems in that complex space (it hopefully also gets rid of arrogant deluded "mathematicians" who don't know physics but are good at PR), and it makes path integrals simple and understandable!
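One piece of the bookkeeping above is easy to check numerically: since the real part of a sum of complex numbers is the sum of their real parts, summing the real amplitudes cos(S) over a set of paths gives exactly the real component of the Feynman-style sum of unit arrows exp(iS) over the same paths. A minimal sketch, using a made-up quadratic toy action S_k = a·k² (an illustrative assumption, chosen only so the phases have a stationary, least-action point at k = 0; nothing here comes from the original data):

```python
import cmath
import math

# Toy "path integral": paths labelled k = -N..N, with an assumed
# quadratic action S_k = a * k**2 (least action at k = 0).
N, a = 200, 0.01
actions = [a * k**2 for k in range(-N, N + 1)]

# Conventional sum of complex unit arrows exp(iS):
resultant = sum(cmath.exp(1j * S) for S in actions)

# Real-only amplitudes cos(S), as proposed above:
real_sum = sum(math.cos(S) for S in actions)

# The real part of the complex resultant equals the cos(S) sum
# (identical up to floating-point rounding):
print(resultant.real, real_sum)

# Arrows with S far from the stationary point spin rapidly in phase and
# largely cancel, so paths near least action dominate the resultant:
print(abs(resultant))
```

This only demonstrates the arrow-summing arithmetic; whether dropping the imaginary component is physically justified is the author's argument, not something the code decides.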

**Some additional amplifying comments about the post above:**

When using exp(iS) you're in effect adding a series of unit-length arrows with variable directions on an Argand diagram to form the path integral. This gives, as stated, two apparent resultant arrow properties: direction and length. A mainstream QFT mathematician's way of thinking is therefore that this must be a vector in complex space, with direction and magnitude. But it's not physically a vector, because the path integral must always have DIRECTION on the real plane, due to the physical principle that the path integral follows the *direction of the path of least action*.

The confusion of the mainstream QFT mathematician is to confuse a vector with a scalar here. A "vector" which always has the same direction is physically equivalent to a scalar. You can plot, for example, a "two-dimensional" graph of the money in your bank balance versus time: the line will be a zig-zag as withdrawals and deposits occur discretely, and you can draw a resultant arrow between the starting balance and the final balance, and that arrow will appear to be a vector. However, in practice it is adequate to treat money as a scalar, not a vector. Believing that the universe is intrinsically mathematical in a complicated way is not a good way to learn about nature; it is biased.

Instead of unit arrows of varying direction and unit length due to a complex phase factor exp(iS), we have a real-world phase factor of cos(S), where each contribution (path) in the path integral (sum of paths) has fixed direction but variable length. This makes it a scalar, removing Fock space and Hilbert space, and reducing physics to the simplicity of a real path integral analogous to the random (Monte Carlo) statistical summing of Brownian motion impacts, or better, to the long-wave multipath (sky wave) interference of 1950s and 1960s radio.

For long distance radio prior to satellites, long wavelength (relatively low frequency, i.e. below UHF) was used so that radio waves would be reflected back by "the" ionosphere tens of kilometres up, overcoming the blocking by the earth's curvature and other obstructions like mountain ranges. The problem was that there was no single ionosphere, but a series of conductive layers (formed by different ions at different altitudes) which would vary according to the earth's rotation as the ionization at high altitudes was affected by UV and other solar radiations.

So you got "multipath interference": some of the radio waves from the transmitter antenna were reflected by different layers of the ionosphere, and arrived at the receiver antenna having travelled paths of differing length. E.g., the path of a sky wave reflected by a conducting ion layer 100 km up will be longer than that of one reflected by a layer only 50 km up. The two sky waves received together by the receiver antenna are thus out of phase to some extent, because the velocity of radio waves is effectively constant (there is a slight effect of air density, which slows down light, but this is a trivial variable in comparison to the height of the ionosphere).
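The geometry above is easy to quantify. Treating each ionospheric layer as a flat mirror at height h, a single-hop sky wave between a transmitter and receiver a ground distance d apart travels 2·sqrt((d/2)² + h²), and the difference between two such path lengths, divided by the wavelength, gives the relative phase at the receiver. A minimal sketch (the 1,000 km range and 10 MHz frequency are illustrative assumptions, not figures from the text):

```python
import math

C = 3.0e8  # speed of light, m/s (radio wave velocity, effectively constant)

def skywave_path(d_m, h_m):
    """One-hop mirror-reflection path length for ground range d_m,
    reflecting layer at height h_m (flat-earth approximation)."""
    return 2.0 * math.hypot(d_m / 2.0, h_m)

d = 1_000e3            # 1,000 km transmitter-receiver ground range (assumed)
f = 10e6               # 10 MHz carrier (assumed)
wavelength = C / f     # 30 m

p50 = skywave_path(d, 50e3)    # reflection from a layer 50 km up
p100 = skywave_path(d, 100e3)  # reflection from a layer 100 km up

extra = p100 - p50                             # extra path length, metres
phase = (extra / wavelength) * 360.0 % 360.0   # relative phase, degrees

print(f"path via  50 km layer: {p50 / 1e3:.1f} km")
print(f"path via 100 km layer: {p100 / 1e3:.1f} km")
print(f"extra path: {extra / 1e3:.2f} km -> relative phase {phase:.0f} deg")
```

The extra path here is many thousands of wavelengths, so even small changes in layer height (as ionization varies through the day) swing the two sky waves between constructive and destructive interference.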

So what you have is a "path integral" in which "multipath interference" causes bad reception under some conditions. This is a good starting point for checking what happens in the "double-slit experiment". Suppose, for example, you have two radio waves received out of phase. What happens to the "photon"? Does "energy conservation" cease to hold? No. We know the answer: the field goes from being observable (i.e. onshell) to being offshell and invisible, but still there. It's hidden from view unless you do the Aharonov–Bohm experiment, which proves that Maxwell's equations in their vector calculus form are misleading (Maxwell ignores "cancelled" field energy due to superimposed fields of different direction or sign, which still exists in offshell energy form, a hidden field).

Notice here that a radio wave is a very good analogy, because the "phase vectors" aren't "hidden variables" but measurable electric and magnetic fields. The wavefunction, Psi, is therefore not a "hidden variable" with radio waves, but is, say, the electric field *E* measured in volts/metre, and the energy density of the field (Joules/m^{3}) is proportional to its square, "just as in the Born interpretation for quantum mechanics". Is this just an "analogy", or is it the deep reality of the whole of QFT?

Also, notice that radio waves appear to be "classical", but are they on-shell or off-shell? They are sometimes observable (when *not* cancelled in phase by another radio wave), but they can be "invisible" (yet still exist in the vacuum as energy, and thus gravitational charge) when their fields are superimposed with other out-of-phase fields. In particular, the photon of light is supposed to be onshell, *but the electromagnetic fields "within it" are supposedly (according to QED, where all EM fields are mediated by virtual photons) propagated by off-shell photons*. So the full picture is this: every charge in the universe is exchanging offshell radiations with every other charge, and these offshell photons constitute the basic fields making up "onshell" photons. An "onshell" (observable) photon must then be a discontinuity in the normal exchange of offshell field photons. For example, take a situation where two electrons are initially "static" relative to one another. If one then accelerates, it disrupts the established steady-state equilibrium of exchange of virtual photons, and this disruption is a discontinuity which is conventionally interpreted as a "real" or "onshell" photon.