Augmented and Modified 1-14-05
Several important questions need to be addressed. An initial and speculative attempt to address them is presented below.
If photons have substance, then we can calculate their associated mass by using Planck's formula in combination with Einstein's. For photons in the visible region, with a frequency of about 10^14 cycles per second, this turns out to be about 10^-33 grams. We did not know in 1905 that an electron has a mass of about 10^-27 grams. We know now that this makes the associated mass of such a photon about one millionth that of an electron.
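The estimate can be sketched numerically; a minimal check, using standard reference values for Planck's constant, the speed of light, and the electron mass (the specific constants are not quoted in the text above):

```python
# Associated mass of a photon: E = h x f (Planck) and m = E/c^2 (Einstein),
# so m = h*f/c^2. Constants are standard reference values.
h = 6.626e-34    # Planck's constant, joule-seconds
c = 3.0e8        # speed of light, meters per second
m_e = 9.11e-31   # electron rest mass, kilograms (about 10^-27 grams)

f_visible = 1e14                  # cycles per second, visible region
m_photon = h * f_visible / c**2   # kilograms
print(m_photon * 1000)            # in grams: roughly 10^-33, as stated
print(m_e / m_photon)             # the electron is about a million times heavier
```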
Whether this mass should be considered part of the rest mass of the electron, or part of the mass of the moving electron, is a troubling question. Both alternatives lead to difficulty. If the mass lost is that from the increased mass of moving particles, i.e. the apparent mass, it is not 'real' in that it does not affect the rest mass. For example, with X-rays it is the additional 'mass' acquired by electrons in a strong external field that is given up to the photon. If the electron gives up some of its rest mass, we need a new concept of matter.
Gamma rays, with frequencies over 10^20 cycles per second, arise from within the nucleus and represent a small fraction of the mass of a baryon. The same difficulties apply here. Nuclear reactions may transform, but do not destroy, baryons.
The particles which have been 'observed' and which have a 'mass' somewhere between that of an electron and a proton all have 'lifetimes' measured in microseconds or much less. They are called particles because they leave a trail, footprints, which call attention to something. They are also necessary to make current theories work - just as the aether was necessary for the theory of light propagation in the nineteenth century.
Kinetic energy remains coupled to mass and to motion, and, of course, the momentum p involves both mass and motion. Planck's formula, E = h x f, then implies that even photons have mass, however small, or there could not be any kinetic energy associated with them. (A particle with zero rest mass is not much heavier when in motion.) Similarly, if w is the wavelength, then de Broglie's relation, w = h/p, can be rewritten w = h/(m x v). Using w = v/f for material particles (or w = c/f for photons, if they have some mass), and eliminating h (since from de Broglie's relation h = w x m x v), we get E = h x f = (w x m x v) x f = m x v^2.
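The elimination of h can be checked numerically; a small sketch, using an electron at an arbitrary illustrative speed (the specific numbers are assumptions, chosen only for the check):

```python
# From E = h x f and w = h/(m x v), with w = v/f, eliminating h gives E = m x v^2.
h = 6.626e-34            # Planck's constant, joule-seconds
m, v = 9.11e-31, 1.0e6   # electron mass (kg) and an illustrative speed (m/s)

w = h / (m * v)          # de Broglie wavelength
f = v / w                # frequency, from w = v/f
E = h * f                # Planck's formula
print(E, m * v**2)       # the two expressions agree
```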
So we get to E = mc^2 as an approximation for nuclear fragments approaching the speed of light, without the use of SRT - without the assumption that mass turns into energy. Quantum physics without the Lorentz transformation and without SRT is not only conceivable, but probably the correct approach to understanding the universe.
So let's suppose the mass of a photon in or near the visible region, say at a frequency of 10^14 cycles per second, is about a million times less than that of an electron. An X-ray quantum would be 100 to 1000 times more massive, and a gamma ray, with a frequency of 10^20, could have a mass about equal to that of an electron. That certainly makes it easy for an electron and a positron to join in producing two gamma particles having the same mass but no charge - and no material entity is destroyed! (Parenthetically we should note that neutrinos have about the same frequency as gamma rays, about 10^20, but minimal energy, or mass, in comparison with the 0.5 MeV of gamma rays. They appear to have a speed of c, or better, judging by the results of the 1987 supernova observations, since they were detected on February 23 at about the same time as the visual signal from the supernova explosion was first noticed. Maybe they deserve the title of 'superlight'. They clearly don't fit under Planck's law. Perhaps, if I can risk a wild conjecture, they are the slowest among a class of messengers to which the even faster gravitons could belong.)
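The same estimate, m = h x f/c^2, can be applied across the spectrum; a sketch using the frequencies quoted above, and taking 10^17 cycles per second as a representative X-ray frequency (an assumption, consistent with the "100 to 1000 times" range):

```python
# m = h*f/c^2 at the three frequency regions discussed in the text.
h, c = 6.626e-34, 3.0e8
m_e = 9.11e-31   # electron rest mass, kg

for label, f in [("visible", 1e14), ("X-ray", 1e17), ("gamma", 1e20)]:
    m = h * f / c**2
    print(label, m, m / m_e)   # gamma comes out at roughly one electron mass
```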
With this line of thought, we are giving up the dubious claim that photons have no mass. We are also giving up c as an absolute and universal constant - but that is a small price to pay if SRT is logically inconsistent, and c is experimentally not constant. That c is evidently not the same for all frequencies was discussed in the chapter on the Lorentz Transformation. The last paragraph of that chapter is repeated here:
Another question that arises is whether c is the limit for all frequencies. Here some results from cosmology are useful. In looking at the time that light reaches us from distant supernovae, it is found that the ultraviolet energy peaks months before the X-ray region shows a peak. [See Herbert Friedman, "The Astronomer's Universe", 1990, p. 175.] This indicates that most probably the limit c is reached only for relatively low frequencies and that high frequency radiation has a lower speed limit (radio frequencies may also have a lower speed, but as a result of passing through regions where the vacuum is less than perfect). The velocity difference may only show up in the fifth decimal of c, or even in the sixth or seventh, but that would be enough for some radiation to reach us tens or hundreds of years later than the visual radiation, when the origin of the light is many millions, or billions, of light years away. What is interpreted to be a neutron star could well be the delayed radiation of a supernova explosion that originated at the same time as the visual radiation but traveled at a slightly slower speed.
Finally, a question that Planck could not have asked, but which should have been asked a few decades later: "Why is kinetic energy proportional to the frequency of radiation?" The light quanta, at different frequencies, do not differ a great deal in their velocity, and since kinetic energy is proportional to mass, we can conjecture that mass is proportional to frequency.
This line of thought also leads back to the idea that mass is conserved in all reactions and transformations. A photon absorbed is mass added. It is also kinetic energy added. But it is the fact that velocity squared is proportional to kinetic energy that gives nuclear reactions their real kick.
Accordingly, and as is generally accepted, no protons, neutrons, or electrons are ever destroyed. Fission used as an explosive is so much more powerful than TNT not because mass is converted to energy, but because the square of the velocity of light (which is close to that attained by emitted fragments of a nucleus) is a million million times greater than the square of any velocity attained by matter in an ordinary explosion. That may be the real message in E = mc^2. It is a limiting case of the energy delivered by the disintegration of nuclei. But at the very end a mystery remains. Is there an upper limit to speed, even for light? Only imagination can help at this point, or faith in some greater power that has ordained this without letting us in on the secret. But if I were to guess, it would be that it has something to do with the ratio of the masses of electrons and photons. If I throw a ball, its speed is limited by my strength and the speed of my arm as it releases the ball. If a billiard ball strikes another, the speed of the struck ball is limited by the mass and speed of the striking ball. So it may be that an electron with a mass a million times larger than a photon's, and a speed that is a small fraction of the speed of light, is capable of ejecting a photon at a speed c, but no more. That would give us an upper limit, but would not guarantee that all photons necessarily move at that speed. Perhaps, in that case, c is an average over many photons, and the truth is that it is a statistical rather than a hard limit, and a limit that applies only when there is at least a minute amount of mass. Plenty of work remains for physicists and philosophers.
In an experiment conducted on a mountain, muons were detected at 6000 ft and measured to have a flux of 550 per hour. Simultaneously, they were measured at 2000 ft and found to have a flux of about 420 per hour. The distance is about 1200 meters, and if muons have a half-life of about 2 microseconds (in the experiment the assumption is 1.56 microseconds; in some texts it is 2 microseconds), they would, at the speed of light, travel about 600 meters before half are gone. After 1200 meters only about 1/4, or less, would be left, so no more than about 120 should survive - but we get almost four times that many.
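The survival estimate above can be sketched as follows; the two half-life values are the ones cited, and the figure of "about 120" falls between the two results:

```python
# N = N0 * 2^(-t/T): survivors among 550 muons after 1200 m at roughly c.
c = 3.0e8
t = 1200.0 / c * 1e6            # transit time in microseconds, about 4

for half_life in (1.56, 2.0):   # microseconds, the two cited values
    survivors = 550 * 2 ** (-t / half_life)
    print(half_life, survivors)  # roughly 93 and 138 survivors
```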
Consequently, the argument goes, their half-life must have increased due to their speed. Here is what is wrong with that argument. Muons are created in the upper atmosphere, not in outer space. Assuming a half-life of two microseconds, then, of the 550 muons detected at 6000 feet (about 2000 meters), about half would have originated at a height of 2600 meters or less; about 1/4 would have originated between 2600 and 3200 meters; about 1/8 between 3200 and 3800 meters; etc. The same argument applies at 2000 feet (about 660 meters). Of the 420 that are detected, about 1/2 originated between 660 and 1260 meters, 1/4 between 1260 and 1860 meters, and 1/8 between 1860 and 2460 meters. So perhaps 1/8 of the 550 muons, about 85, can be expected to survive the journey. The rest of the 420 come from muons created, during the hour that the measurements were taking place, in the region between 2000 and 6000 feet altitude. Adding 210, 105, and 85 gives us 400. Considering the crudeness of the estimates, that is not far from the 420 that were observed at the lower elevation. A simplistic application of the exponential decay law leads to an erroneous inference about time dilation. It is not simply a matter of survival, but of the interaction of birth and decay.
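The bookkeeping above can be sketched as follows. This follows the halving scheme of the text; note that the exact 1/8 share of 550 is closer to 69 than to the rounded "about 85", so the computed total lands a little below the quoted 400:

```python
# The halving scheme: each 600 m of travel (one half-life at roughly c,
# with T = 2 microseconds) halves the survivors, and counts at a station
# are split by the altitude band in which the muons originated.
upper_count = 550   # per hour at 6000 ft (about 2000 m)
lower_count = 420   # per hour at 2000 ft (about 660 m)

# muons created between the stations, as shares of the lower count
created_between = lower_count / 2 + lower_count / 4   # 210 + 105

# muons from the upper count surviving the ~1340 m trip, a bit over two
# half-life distances; the text rounds this 1/8 share to "about 85"
survivors = upper_count / 8

print(created_between + survivors)   # close to the 420 observed below
```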
An experiment by Frisch and Smith, done in 1963, tried to filter out muons that did not have a high enough energy. Only those corresponding to an assumed speed between 0.9950c and 0.9954c were to be counted. They had the lower station at about sea level, and observed about 560 per hour at 6000 feet and about 410 per hour at sea level. They believed that by filtering out the slower, less energetic muons at 6000 feet, they would count at the lower altitude only the survivors among those with high energy (and about the same speed). They assumed, of course, that such high energy muons are not formed below 6000 feet. In that case they would not need to consider the interaction between creation and decay at altitudes below 6000 feet. They rely on other experiments for this assumption.
Even though the cosmic rays which give rise to the muons have traveled through 80% of the matter in the atmosphere by the time they reach 6000 feet, this does not mean that 100%, or even 80%, have been eliminated. Fewer get through to the lower altitudes, but that would not preclude high energy muons from being formed below 6000 feet. In that case their inferences are not valid. What is really needed is a count of primary cosmic ray protons at 1000 foot increments from sea level to 8000 feet. The ratios of the fluxes at 2000 feet and at 4000 feet to the flux at 8000 feet would allow us to assess more accurately the interaction between creation and decay of the resulting muons at lower altitudes.
It should also be noted that these experiments are insensitive to the actual mean life of muons. If, for example, the mean life were 10 microseconds instead of 2, it would mean that about half the muons detected at 6000 feet, that is 2000 meters, originated between 2000 and 5000 meters, one quarter between 5000 and 8000 meters, etc. At 2000 feet, or 660 meters, one half originated between 660 and 3660 meters, etc. The ratio of 410 to 550 should then mirror the ratio between muons created in the vicinity of 3660 meters and those created in the vicinity of 5000 meters. With a two microsecond half-life, the ratio of 410 to 550 should correspond to the ratio of cosmic ray fluxes at about 1260 meters and 2600 meters. Unless the two ratios are dramatically different, the experiment will have a difficult time differentiating between a half-life of 2 and a half-life of 10 microseconds. Moreover, once it is clear that the Lorentz Transformation is wrong, and that synchronization of clocks on bodies in relative motion is possible, SRT is invalidated; then the shoe is on the other foot. The claim of time dilation must be defended, not assumed. It is up to 'believers' and reluctant scientists to show why an explanation of these experiments that does not require time dilation is invalid. In any case the experimental evidence, still to be obtained, can probably resolve the matter.
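The origin-altitude bands compared above can be tabulated; a sketch, assuming travel at roughly c, so that one half-life T corresponds to a distance of c x T:

```python
# Altitude bands of origin: half the muons counted at a station originated
# within one half-life-distance d = c*T above it, a quarter within the next
# band, and so on.
c = 3.0e8

for T_us in (2, 10):                  # half-life in microseconds
    d = c * T_us * 1e-6               # distance per half-life: 600 m or 3000 m
    for station in (2000, 660):       # meters: the 6000 ft and 2000 ft stations
        bands = [(station + k * d, station + (k + 1) * d) for k in range(3)]
        print(T_us, station, bands)
```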