Symmetry, Transactions, and the Mechanism of Wave Function Collapse

The Transactional Interpretation of quantum mechanics exploits the intrinsic time-symmetry of wave mechanics to interpret the $\psi$ and $\psi$* wave functions present in all wave mechanics calculations as representing retarded and advanced waves moving in opposite time directions that form a quantum "handshake" or transaction. This handshake is a standing wave that builds up across space-time to transfer the conserved quantities of energy, momentum, and angular momentum in an interaction. Here we derive a two-atom quantum formalism describing a transaction. Using this formalism, we perform calculations employing the standard wave mechanics of Schrödinger to show the transfer of energy from a hydrogen atom in an excited state to a nearby hydrogen atom in its ground state, using the retarded and advanced electromagnetic four-potentials. The initial exchange creates a dynamically unstable situation that avalanches to the completed transaction, demonstrating that wave function collapse, considered mysterious in the literature, can be implemented in the Schrödinger formalism by including advanced potentials, without the introduction of any additional mechanism or formalism.


Introduction
Quantum mechanics (QM) was never properly finished. Instead, it was left in an exceedingly unsatisfactory state by its founders. Many attempts by highly qualified individuals to improve the situation have failed to produce any consensus about either (a) the precise nature of the problem, or (b) what a better form of QM might look like.
At the most basic level, a simple observation illustrates the central conceptual problem: an excited atom somewhere in the universe transfers all of its excitation energy to another single atom, independent of the presence of the vast number of alternative atoms that could have received all or part of the energy. The obvious "photon-as-particle" interpretation of this situation has a one-way symmetry: the excited source atom is depicted as emitting a particle, a photon of electromagnetic energy that is somehow oscillating with angular frequency ω while moving in a particular direction. The photon is depicted as carrying a quantum of energy ℏω, a momentum ℏω/c, and an angular momentum ℏ through space, until it is later absorbed by some unexcited atom. The emission and absorption are treated as independent unitary events without internal structure. It is insisted that the only real and meaningful quantities describing this process are probabilities, since these are measurable. The necessarily abrupt change in the quantum wave function of the system when the photon arrives (and an observer potentially gains information) is called "wave function collapse" and is considered to be a mysterious process that the founders of QM found it necessary to "put in by hand" without providing any mechanism.¹

Further, there is the unresolved problem of entanglement. Over the past few decades, many increasingly exquisite "EPR" experiments have demonstrated that multi-body quantum systems with separated components that are subject to conservation laws exhibit a property called "quantum entanglement" [1]: their component wave functions are inextricably locked together, and they display a nonlocal correlated behavior enforced over an arbitrary interval of space-time, without any hint of an underlying mechanism or any show of respect for our cherished classical "arrow of time." Entanglement is the most mysterious of the many so-called "quantum mysteries."
It has thus become clear that the quantum transfer of energy must have quite a different symmetry from that implied by this simple "photon-as-particle" interpretation. Within the framework of statistical QM, the intrinsic symmetry of the energy transfer and the mechanisms behind wave function collapse and entanglement have been greatly clarified by the Transactional Interpretation of quantum mechanics (TI), developed over several decades by one of us² and recently described in some detail in the book The Quantum Handshake [1], of which this paper is, in part, a review.

Wheeler-Feynman Electrodynamics
The Transactional Interpretation was inspired by classical time-symmetric Wheeler-Feynman electrodynamics [5,6] (WFE), sometimes called "absorber theory". Basically, WFE assumes that electrodynamics must be time-symmetric, with equally valid retarded waves (that arrive after they are emitted) and advanced waves (that arrive before they are emitted). WFE describes a "handshake" process accounting for emission recoil, in which the emission of a retarded wave stimulates a future absorber to produce an advanced wave that arrives back at the emitter at the instant of emission. WFE is based on electrodynamic time-symmetry but has been shown to be completely interchangeable with conventional classical electrodynamics in its predictions.
WFE asserts that the breaking of the intrinsic time-symmetry to produce the electromagnetic arrow of time, i.e., the observed dominance of retarded radiation and absence of advanced radiation in the universe, arises from the presence of more absorption in the future than in the past. In an expanding universe, that assertion is questionable. One of us has suggested an alternative cosmological explanation [7], which employs advanced-wave termination and reflection from the singularity of the Big Bang.

The Transactional Interpretation of Quantum Mechanics
The Transactional Interpretation of quantum mechanics [1] takes the concept of a WFE handshake from the classical regime into the quantum realm of photons and massive particles. The retarded and advanced waves of WFE become the quantum wave functions ψ and ψ*. Note that the complex conjugation of ψ* is in effect the application of the Wigner time-reversal operator, thus representing an advanced wave function that carries negative energy from the present to the past. Some critics of the Transactional Interpretation have questioned why it does not provide a detailed mathematical description of how a transaction forms.³ This question, of course, betrays a fundamental misunderstanding of what an interpretation of quantum mechanics actually is. In our view, the mathematics is (and should be) exclusively contained in the quantum mechanics formalism itself. The function of the interpretation is to interpret that mathematics, not to introduce some additional variant formalism.⁴ In the present work, which is inspired by but is not a part of the Transactional Interpretation, we use the standard Schrödinger wave mechanics formalism, with the inclusion of retarded and advanced electromagnetic four-potentials, to describe and illuminate the processes of transaction formation and the collapse of the wave function.

¹ The missing mechanism behind wave function collapse is sometimes called "the measurement problem", particularly by acolytes of Heisenberg's knowledge interpretation. In our view, measurement requires wave function collapse but does not cause it.
The TI leans heavily on the standard formalism of Schrödinger wave mechanics. However, that formalism is conventionally regarded as not containing any mathematics that explicitly accounts for wave function collapse (which the TI interprets as transaction formation). Here we show that this is incorrect: the Schrödinger formalism, with the inclusion of retarded and advanced 4-potentials, can provide a detailed mathematical description of a "quantum jump" in which what seems to be a photon is emitted by one hydrogen atom in an excited state and excites another hydrogen atom initially in its ground state. Thus, the mysterious process of wave function collapse becomes just a phenomenon involving an exchange of waves that is actually a part of the Schrödinger formalism.
As illustrated schematically in Fig. 1, the process described involves the initial existence in each atom of a very small admixture of the wave function for the opposite state, thereby forming two-component states in both atoms. This causes them to become weak dipole radiators oscillating at the same difference-frequency ω0. The interaction that follows, characterized by a retarded-advanced exchange of 4-vector potentials, leads to an exponential build-up of a transaction, resulting in the complete transfer of one photon-worth of energy ℏω0 from one atom to the other. This process is described in more detail below.

⁴ We note, however, that this principle is violated by the Bohm-de Broglie "interpretation" with its "quantum potentials" and uncertainty-principle-violating trajectories, by the Ghirardi-Rimini-Weber "interpretation" with its nonlinear stochastic term, and by many other so-called interpretations that take the questionable liberty of modifying the standard QM formalism. In that sense, these are alternative variant quantum theories, not interpretations at all.

Figure 1.
Model of transaction formation: An emitter atom 2 in a space-antisymmetric excited state of energy E2 and an absorber atom 1 in a space-symmetric ground state of energy E1 both have slight admixtures of the other state, giving both atoms dipole moments that oscillate with the same difference-frequency ω0. If the relative phase of the initial small offer wave ψ and confirmation wave ψ* is optimal, this condition initiates energy transfer, which avalanches to complete transfer of one photon-worth of energy ℏω0.

Physical Mechanism of the Transfer
The standard formalism of QM consists of a set of arbitrary rules, conventionally viewed as dealing only with probabilities. When illuminated by the TI, that formalism hints at an underlying physical mechanism that might be understood, in the usual sense of the word. The first glimpse of such an understanding, and of the physical nature of the transactional symmetry, was suggested by Gilbert N. Lewis in 1926 [8], the same year he gave the energy transfer the name "photon":

It is generally assumed that a radiating body emits light in every direction, quite regardless of whether there are near or distant objects which may ultimately absorb that light; in other words, that it radiates "into space"... I am going to make the contrary assumption that an atom never emits light except to another atom... I propose to eliminate the idea of mere emission of light and substitute the idea of transmission, or a process of exchange of energy between two definite atoms... Both atoms must play coordinate and symmetrical parts in the process of exchange... In a pure geometry it would surprise us to find that a true theorem becomes false when the page upon which the figure is drawn is turned upside down. A dissymmetry alien to the pure geometry of relativity has been introduced by our notion of causality.
In what follows we demonstrate that the pair of coupled Schrödinger equations describing the two atoms, coupled by a relativistically correct description of the electromagnetic field, exhibits a unique solution. This solution has exactly the symmetry of the TI and thus provides a physically understandable mechanism for the experimentally observed behavior: both atoms, in the words of Lewis, "play coordinate and symmetrical parts in the process of exchange." The solution gives a smooth transition in each of the atomic wave functions, brought to abrupt closure by the highly nonlinear increase in coupling as the transition proceeds. The origin of statistical behavior and "quantum randomness" can be understood in terms of the random distribution of wave-function amplitudes and phases provided by the perturbations of the many other potential recipient atoms; no "hidden variables" are required. These findings might be viewed as a first step toward a physical understanding of the process of quantum energy transfer.
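The nonlinear closure described here can be caricatured with a one-variable toy model (our own sketch, not the paper's full coupled-Schrödinger calculation): if x = b² is the fraction of the excitation that has been transferred to the absorber, each atom's oscillating dipole strength is proportional to ab = √(x(1−x)), so a transfer rate proportional to the product of the two dipole strengths gives dx/dt = κ x(1−x), a logistic "avalanche." The coupling κ and initial admixture x0 below are assumed values for illustration only.

```python
# Toy sketch of the avalanche: NOT the paper's coupled-Schrodinger
# calculation, just the logistic behavior it is claimed to produce.
# x = b^2 is the fraction of the excitation already transferred;
# the transfer rate is proportional to the product of the two atoms'
# dipole strengths, each ~ sqrt(x(1-x)), giving dx/dt = kappa*x*(1-x).
kappa = 1.0          # assumed coupling strength (arbitrary units)
x0 = 1e-6            # tiny initial admixture from a random perturbation
dt, steps = 0.01, 4000

x = x0
history = []
for _ in range(steps):
    x += dt * kappa * x * (1.0 - x)   # forward-Euler step of the logistic ODE
    history.append(x)

# The transfer starts exponentially slowly, avalanches near x ~ 0.5,
# and saturates at complete transfer, x -> 1.
print(f"start {history[0]:.2e}, end {history[-1]:.6f}")
```

The S-curve this produces is the qualitative signature of the transaction: a long, nearly flat incubation set by the size of the initial random admixture, followed by an abrupt, irreversible completion.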
We will close by indicating the deep, fundamental questions that we have not addressed, and that must be understood before anything like a complete physical understanding of QM is in hand.

Quantum States
In 1926, Schrödinger, seeking a wave-equation description of a quantum system with mass, adopted Planck's notion that energy was proportional to frequency, together with de Broglie's idea that momentum was the propagation vector of a wave, and crafted his wave equation for the time evolution of the wave function Ψ [9]:

iℏ ∂Ψ/∂t = −(ℏ²/2m) ∇²Ψ + qVΨ    (1)

Here V is the electrical potential, m is the electron mass, and q is the (negative) charge on the electron. So what is the meaning of the wave function Ψ that is being characterized? In modern treatments Ψ is called a "probability amplitude", which has only a probabilistic interpretation. In what follows, however, we return to Schrödinger's original vision, which provides a detailed physical picture of the wave function and how it interacts with other charges:

The hypothesis that we have to admit is very simple, namely that the square of the absolute value of Ψ is proportional to an electric density, which causes emission of light according to the laws of ordinary electrodynamics.
So we can visualize the electron as having a smooth charge distribution in 3-dimensional space, whose density is given by Ψ*Ψ. There is no need for statistics at any point of the calculation, and none of the quantities have statistical meaning. The probabilistic outcome of quantum experiments has the same origin as it does in all other experiments: random perturbations beyond the control of the experimenter. We return to the topic of probability after we have established the nature of the transaction.
For a local region of positive potential V, for example near a positive proton, the negative electron's wave function has a local potential-energy (qV) minimum in which the electron's wave function can form local bound states. The spatial shape of the wave function amplitude is a tradeoff between getting close to the proton, which lowers its potential energy, and bunching together too much, which increases its ∇² "kinetic energy." Eq. 1 is simply a mathematical expression of this tradeoff. A discrete set of states called eigenstates are standing-wave solutions of Eq. 1 of the form Ψ = R e^(−iωt), where R and V are functions of only the spatial coordinates, and the angular frequency ω is itself not a function of time. For the hydrogen atom, the potential is V = q_p/(4πε0 r̃), where q_p is the positive charge on the nucleus and r̃ is the radial distance. Two of the lowest-energy solutions to Eq. 1 with this potential are:

Ψ100 = (1/√π) e^(−r) e^(−iω1 t)    (2)

Ψ210 = (1/√(32π)) r e^(−r/2) cos θ e^(−iω2 t)    (3)

where the dimensionless radial coordinate r is the radial distance divided by the Bohr radius a0, and θ is the polar angle from the North Pole (+z axis) of the spherical coordinate system. The spatial "shape" of the two lowest-energy eigenstates for the hydrogen atom is shown in Fig. 2. Here we focus on the excited-state wave function Ψ210, which has no angular momentum projection on the z axis. For the moment, we set aside the wave functions Ψ21±1, which have +1 and −1 angular momentum z-axis projections. For any single eigenstate, the electron density Ψ*Ψ = R² is independent of time.

Already in 1926 Schrödinger had derived the energies and wave functions for the stationary solutions of his equation for the hydrogen atom. His physical insight, that the absolute square Ψ*Ψ of the wave function was the electron density, had enabled him to work out the energy shifts of these levels caused by external applied electric and magnetic fields, the expected strengths of the transitions between pairs of energy levels, and the polarization of light from certain transitions. These predictions could be compared directly with experimental data, which had been previously observed but not understood. He reported that these calculations were:

...not at all difficult, but very tedious. In spite of their tediousness, it is rather fascinating to see all the well-known but not understood "rules" come out one after the other as the result of familiar elementary and absolutely cogent analysis, like e.g. the fact that ∫₀^(2π) cos mφ cos nφ dφ vanishes unless n = m. Once the hypothesis about Ψ*Ψ has been made, no accessory hypothesis is needed or is possible; none could help us if the "rules" did not come out correctly. But fortunately they do [9,10].
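As a numerical sanity check (a sketch, using the standard textbook forms of the 1s and 2p, m = 0, hydrogen eigenstates discussed above), the following verifies on a grid that both spatial wave functions are unit-normalized and mutually orthogonal. Lengths are in units of the Bohr radius a0, and only the spatial parts are needed, since |e^(−iωt)| = 1.

```python
import numpy as np

# Spatial parts of the hydrogen 1s and 2p0 eigenstates, r in units of a0.
r = np.linspace(1e-6, 40.0, 2000)      # radial grid
th = np.linspace(0.0, np.pi, 400)      # polar angle
R, TH = np.meshgrid(r, th, indexing="ij")

psi_100 = np.exp(-R) / np.sqrt(np.pi)                          # 1s
psi_210 = R * np.exp(-R / 2) * np.cos(TH) / np.sqrt(32 * np.pi)  # 2p, m=0

# Volume element r^2 sin(theta) dr dtheta dphi; the phi integral gives 2*pi.
vol = R**2 * np.sin(TH) * 2.0 * np.pi
dr, dth = r[1] - r[0], th[1] - th[0]

norm_100 = np.sum(psi_100**2 * vol) * dr * dth   # ~ 1
norm_210 = np.sum(psi_210**2 * vol) * dr * dth   # ~ 1
overlap = np.sum(psi_100 * psi_210 * vol) * dr * dth  # ~ 0 (orthogonal)
print(norm_100, norm_210, overlap)
```

The overlap vanishes because the cos θ factor of the 2p state integrates to zero against the spherically symmetric 1s state, the grid analogue of Eq. 6 below.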
Schrödinger's approach allows us to describe continuous quantum transitions in a simple and intuitively appealing way, by extending the notions of Collective Electrodynamics [11] to the wave function of a single electron. We shall require only the most rudimentary techniques of Schrödinger's original quantum theory.

The Two-State System
Let us first consider a simple two-state system. The system has a single positive charge distribution around which there are two eigenstates, labeled 1 and 2, that an electron can occupy. In State 1, the electron has wave function Ψ1 and energy E1; in State 2, it has wave function Ψ2 and energy E2 > E1:

Ψ1 = R1 e^(−iω1 t)    Ψ2 = R2 e^(−iω2 t)    (4)

where E1 = ℏω1 and E2 = ℏω2. We visualize the wave function as a representation of a totally continuous matter wave, and take R1 and R2 as real, time-independent functions of the space coordinates. We can interpret all of the usual operations involving the wave function as methods for computing properties of this continuous distribution. The only particularly quantal assumption involved is that the wave function obeys a normalization condition:

∫ Ψ*Ψ dvol = 1    (5)

where the integral is taken over the entire three-dimensional volume where Ψ is non-vanishing.⁵ This condition assures that the total charge will be a single electronic charge, and the total mass will be a single electronic mass. By construction, the eigenstates of the Schrödinger equation are real and orthogonal:

∫ Ri Rj dvol = δij    (6)

The first moment ⟨z⟩ of the electron distribution⁶ along the atom's z axis is:

⟨z⟩ = ∫ Ψ* z Ψ dvol    (7)

where, again, the integral is taken over all space where the wave function is non-vanishing. This moment gives the position of the center of negative charge of the electron wave function relative to the positive charge on the nucleus. When multiplied by the electronic charge q, it is called the electric dipole moment q⟨z⟩ of the charge distribution of the atom. From Eq. 7 and Eq. 4, the dipole moment for the ith eigenstate is:

q⟨z⟩ᵢ = q ∫ Rᵢ² z dvol    (8)

Pure eigenstates of the system will have a definite parity, i.e., they will have wave functions with either even symmetry [Ψ(z) = Ψ(−z)] or odd symmetry [Ψ(z) = −Ψ(−z)]. For either symmetry the integral over Rᵢ² z vanishes, and the dipole moment is zero. We note that even if the wave function did not have even or odd symmetry, the dipole moment, and all higher moments as well, would be independent of time. By their very nature, eigenstates are stationary states and can be visualized as standing waves; none of their physical spatial attributes can be functions of time.
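The parity argument can be checked numerically (a sketch, using the standard 1s and 2p0 hydrogen states in units of a0): the first moment ⟨z⟩ of each pure eigenstate vanishes, while the cross moment between the even 1s state and the odd 2p0 state is nonzero, the quantity that will reappear below as the dipole matrix element.

```python
import numpy as np

# Grid in (r, theta); z = r*cos(theta). Lengths in units of a0.
r = np.linspace(1e-6, 40.0, 2000)
th = np.linspace(0.0, np.pi, 400)
R, TH = np.meshgrid(r, th, indexing="ij")
Z = R * np.cos(TH)

R1 = np.exp(-R) / np.sqrt(np.pi)                          # 1s: even in z
R2 = R * np.exp(-R / 2) * np.cos(TH) / np.sqrt(32 * np.pi)  # 2p0: odd in z

vol = R**2 * np.sin(TH) * 2.0 * np.pi   # volume element (phi integrated)
dr, dth = r[1] - r[0], th[1] - th[0]

z_1s = np.sum(R1**2 * Z * vol) * dr * dth    # = 0 by parity
z_2p = np.sum(R2**2 * Z * vol) * dr * dth    # = 0 by parity
cross = np.sum(R1 * R2 * Z * vol) * dr * dth  # nonzero cross moment
print(z_1s, z_2p, cross)
```

The cross moment comes out ≈ 0.745 a0 (the exact value for this pair is 128√2/243 a0), so it is the R1 R2 interference term, not either stationary eigenstate, that carries a dipole moment.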
The notion of stationarity is the quantum answer to the original question about atoms depicted as electrons orbiting a central nucleus: why doesn't the electron orbiting the nucleus radiate its energy away? In his 1917 book, The Electron, R.A. Millikan⁷ anticipates the solution in his comment about the

. . . apparent contradiction involved in the non-radiating electronic orbit--a contradiction which would disappear, however, if the negative electron itself, when inside the atom, were a ring of some sort, capable of expanding to various radii, and capable, only when freed from the atom, of assuming essentially the properties of a point charge.

⁵ Envelope functions like R1 and R2 generally die out exponentially with distance sufficiently far from the "region of interest," such as an atom. Integrals like this one and those that follow in principle extend to infinity, but in practice are taken out far enough that the part being neglected is within the tolerance of the calculation.

⁶ In statistical treatments ⟨z⟩ would be called the "expectation value of z", whereas for our continuous distribution it is called the "average value of z" or the "first moment of z."

⁷ The electron wave function is a wave packet and is subject to all the Fourier properties of one, as treated at some length in Ref. [1]. For example, a packet only 100 oscillation cycles in duration has a spread in frequency (and therefore energy) of order 1%, resulting, for example, in the observed natural line width of radiation from an atom. Similarly, a packet confined within 100 wavelengths in extent has a spread in wave vector (and therefore momentum) of order 1%. Because statistical QM insisted that electrons were "point particles", one was no longer able to visualize how they could exhibit interference or other wave properties, so a set of rules was constructed to make the outcomes of statistical experiments come out right. Among these was the uncertainty principle, which simply restated the Fourier properties of an object described by waves in a statistical context. No statistical attributes are attached to any properties of the wave function in this treatment.
Ten years before the statistical quantum theory was put in place, Millikan had clearly seen that a continuous, symmetric electronic charge distribution would not radiate, and that the real problem was the assumption of a point charge. The continuous wave nature of the electron implies a continuous charge distribution. That smooth charge distribution can propagate around the nucleus and thereby generate a magnetic moment, as observed in many atoms. The smooth propagation around the nucleus does not imply radiation, since the distribution is not changing with time.

Transitions
In order to radiate electromagnetic energy, the charge distribution must change with time. Because the spatial attributes of a system in a pure eigenstate are stationary, the system in such an eigenstate cannot radiate energy. Because the eigenstates of the system form a complete basis set, any behavior of the system can be expressed by forming a linear combination (superposition) of eigenstates. The general form of such a superposition of two eigenstates is:

Ψ = a R1 e^(i(φa − ω1 t)) + b R2 e^(i(φb − ω2 t))    (9)

where a and b are real amplitudes, and φa and φb are real constants that determine the phases of the oscillations at ω1 and ω2.
Using ω0 = ω2 − ω1 and φ = φa − φb, the charge density ρ of the two-component-state wave function is:

ρ = q Ψ*Ψ    (10)
  = q [ a² R1² + b² R2² + 2ab R1 R2 cos(ω0 t + φ) ]    (11)

So the charge density of the two-component wave function is made up of the charge densities of the two separate wave functions, shown in Fig. 4, plus a term proportional to the product of the two wave-function amplitudes. It reduces to the individual charge density of the ground state when a = 1, b = 0, and to that of the excited state when a = 0, b = 1. The product term, shown in green in Fig. 5, is the only non-stationary term; it oscillates at the transition frequency ω0. The integral of the total density shown in the right-hand plot is equal to 1 for any phase of the cosine term, since there is only one electron in this two-component state.
All the Ψ*Ψ plots represent the density of negative charge of the electron. The atom as a whole is neutral because of the equal positive charge on the nucleus. The dipole is formed when the center of charge of the electron wave function is displaced from the central positive charge of the nucleus.

Figure 5. Left: A 1/√2 superposition of the ground state (R1², blue) and first excited state (R2², red) of the hydrogen atom. The green curve is a snapshot of the time-dependent R1 R2 product term, which oscillates at difference frequency ω0. Right: Snapshot of the total charge density, which is the sum of the three curves in the left plot. The magnitudes plotted are the contribution to the total charge in an x-y "slice" of Ψ*Ψ at the indicated z coordinate. All plots are shown for the phase such that cos(ω0 t + φ) = 1. The horizontal axis in each plot is the spatial coordinate along the z axis of the atom, given in units of the Bohr radius a0. Animation here [32].

The two-component wave function must be normalized, since it is the state of one electron:

∫ Ψ*Ψ dvol = a² + b² + 2ab cos(ω0 t + φ) ∫ R1 R2 dvol = 1    (12)

Recognizing from Eq. 5 and Eq. 6 that the individual eigenfunctions are normalized and orthogonal:

∫ R1 R2 dvol = 0    (13)

Eq. 12 becomes

a² + b² = 1    (14)

So a² represents the fraction of the two-component wave function made up of the lower state Ψ1, and b² represents the fraction made up of the upper state Ψ2. The total energy E of a system whose wave function is a superposition of two eigenstates is:

E = a² E1 + b² E2    (15)

Using the normalization condition a² + b² = 1 and solving Eq. 15 for b², we obtain:

b² = (E − E1)/(E2 − E1)    (16)

In other words, b² is just the energy of the wave function, normalized to the transition energy, and using E1 as its reference energy. Taking E1 as our zero of energy and E0 = E2 − E1, Eq. 16 becomes:

b² = E/E0    (17)

Using:

d12 = q ∫ R1 z R2 dvol    (18)

the dipole moment of such a superposition can, from Eq. 11, be written:

q⟨z⟩ = 2ab d12 cos(ω0 t + φ)    (19)

The factor d12 is called the dipole matrix element for the transition; it is a measure of the maximum strength of the oscillating dipole moment. If one R is an even function of z and the other is an odd function of z, as in the case of the 100 and 210 states of the hydrogen atom, then this factor is nonzero, and the transition is said to be electric dipole allowed. If both R1 and R2 are either even or odd functions of z, then this factor is zero, and the transition is said to be electric dipole forbidden. Even in this case, some other moment of the distribution generally will be nonzero, and the transition can proceed by magnetic dipole, magnetic quadrupole, or other higher-order moments. For now, we will concentrate on transitions that are electric dipole allowed.
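The oscillating dipole of a two-component state can be verified directly on a grid (a sketch, using the 1s/2p0 hydrogen pair; the eigenfrequencies w1 and w2 below are assumed arbitrary values, since only their difference matters): the center of charge of the superposition oscillates at the difference frequency, with amplitude 2ab times the cross moment.

```python
import numpy as np

# Build the two-component state psi = a*R1*e^{-i w1 t} + b*R2*e^{-i w2 t}
# (phases set to zero) and check <z>(t) = 2ab * d12 * cos(w0 t),
# with d12 here the bare cross moment (units of a0, charge factored out).
r = np.linspace(1e-6, 40.0, 2000)
th = np.linspace(0.0, np.pi, 400)
R, TH = np.meshgrid(r, th, indexing="ij")
Z = R * np.cos(TH)
vol = R**2 * np.sin(TH) * 2.0 * np.pi
dr, dth = r[1] - r[0], th[1] - th[0]

R1 = np.exp(-R) / np.sqrt(np.pi)
R2 = R * np.exp(-R / 2) * np.cos(TH) / np.sqrt(32 * np.pi)
d12 = np.sum(R1 * R2 * Z * vol) * dr * dth   # cross moment, ~0.745 a0

a, b = 0.8, 0.6              # satisfies a^2 + b^2 = 1
w1, w2 = 1.0, 3.0            # assumed eigenfrequencies (arbitrary units)
w0 = w2 - w1
for t in (0.0, 0.5, 1.0, 2.0):
    psi = a * R1 * np.exp(-1j * w1 * t) + b * R2 * np.exp(-1j * w2 * t)
    z_t = np.sum(np.abs(psi)**2 * Z * vol) * dr * dth
    expected = 2 * a * b * d12 * np.cos(w0 * t)
    print(t, z_t, expected)   # the two columns agree
```

The eigenstate terms a²R1² and b²R2² contribute nothing to ⟨z⟩ by parity; the entire dipole comes from the interference term oscillating at ω0.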
We have the time dependence of the electron dipole moment q⟨z⟩ from Eq. 19, from which we can derive the velocity and acceleration of the charge:

q d⟨z⟩/dt ≈ −2ab d12 ω0 sin(ω0 t + φ)    (20)

q d²⟨z⟩/dt² ≈ −2ab d12 ω0² cos(ω0 t + φ)    (21)

where the approximation arises because we will only consider situations where the coefficients a and b change slowly with time over a large number of cycles of the transition frequency:

|da/dt| ≪ ω0 a    |db/dt| ≪ ω0 b

The motion of the electron mass density endows the electron with a momentum p:

p = m d⟨z⟩/dt

Atom in an Applied Field
Schrödinger had a detailed physical picture of the wave function, and he gave an elegant derivation of the process underlying the change of atomic state mediated by electromagnetic coupling.⁸ Instead of directly tackling the transfer of energy between two atoms, he considered the response of a single atom to a small externally applied vector-potential⁹ field A. He found that the immediate effect of an applied vector potential is to change the momentum p of the electron wave function:

p_z = m v_z + q A_z    (22)

So if the total momentum p_z is undisturbed, the electron velocity must change to compensate for the applied A_z:

m dv_z/dt = −q ∂A_z/∂t    (23)

So the quantity −q ∂A_z/∂t acts as a force, causing an acceleration of the electron wave function. This is the physical reason that −∂A_z/∂t can be treated as an electric field E_z.¹⁰
In the simplest case, the qA z term makes only a tiny perturbation to the momentum over a single cycle of the ω 0 oscillation, so its effects will only be appreciable over many cycles.
We consider an additional simplification, where the frequency of the applied field is exactly equal to the transition frequency ω0 of the atom:

A_z = A cos(ω0 t)    (24)

In such evaluations we need to be very careful to identify exactly which energy we are calculating: the electric field is merely a bookkeeping device to keep track of the energy that an electron in one atom exchanges with an electron in another atom, in such a way that the total energy is conserved. We will evaluate how much energy a given electron gains from or loses to the field, recognizing that the negative of that energy represents work done by the electron on the source electron responsible for the field. The force on the electron is just qE_z. Because E_z = ω0 A sin(ω0 t), for a stationary charge the force is in the +z direction as much as it is in the −z direction, and, averaged over one cycle of the electric field, the work averages to zero. However, if the charge itself oscillates with the electric field, it will gain energy ΔE from the work done by the field on the electron over one cycle:

ΔE = ∫₀^(2π/ω0) q E_z (d⟨z⟩/dt) dt    (25)

where ⟨z⟩ is the z position of the electron center of charge, whose velocity is given by Eq. 20.
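The phase dependence of this one-cycle energy exchange can be checked numerically (a sketch; q, A, ω0, and the oscillation amplitude z0 below are assumed values in arbitrary consistent units): a stationary charge gains nothing, a charge whose velocity is in quadrature with E_z gains nothing net, and a charge whose velocity is in phase with E_z gains the maximum energy, π q A z0 ω0.

```python
import numpy as np

# Work done on the electron by E_z = w0*A*sin(w0 t) over one cycle,
# as a rectangle-rule integral of q * E_z * v for three velocity phases.
q, A, w0, z0 = 1.0, 1.0, 1.0, 1.0     # assumed values, arbitrary units
t = np.linspace(0.0, 2 * np.pi / w0, 100001)
dt = t[1] - t[0]
E_z = w0 * A * np.sin(w0 * t)

def work(v):
    """Numerical integral of q * E_z * v over one field cycle."""
    return np.sum(q * E_z * v) * dt

v_stationary = np.zeros_like(t)
v_in_phase   = z0 * w0 * np.sin(w0 * t)   # z = -z0 cos(w0 t): "coasting downhill"
v_quadrature = z0 * w0 * np.cos(w0 * t)   # z =  z0 sin(w0 t): 90 degrees off

print(work(v_stationary))   # 0: no motion, no work
print(work(v_in_phase))     # ~pi*q*A*z0*w0: maximum energy gain
print(work(v_quadrature))   # ~0: no net energy exchange
```

Only the component of the dipole velocity in phase with the field exchanges energy, which is why the relative phase of offer and confirmation waves decides whether a transaction grows or dies.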
When the electron is "coasting downhill" with the electric field, it gains energy and ΔE is positive. When the electron is moving "uphill" against the electric field, the electron loses energy and ΔE is negative.

⁸ The original derivation [13], p. 137, is not nearly as readable as that in Schrödinger's second and third 1928 lectures [14], where the state transition is described in Section 9, starting at the bottom of page 31, for which the second lecture is preparatory. There he solved the problem more generally, including the effect of a slight detuning of the field frequency from the atom's transition frequency.

⁹ More about the vector potential in the following section.

¹⁰ At a large distance from an overall charge-neutral charge distribution like an atom, the longitudinal gradient of the scalar potential just cancels the longitudinal component of ∂A/∂t, so what is left is E = −∂A⊥/∂t, which is purely transverse.
As long as the energy gained or lost in each cycle is small compared with E0, we can define a continuous power (rate of change of electron energy), which is understood to be an average over many cycles. The time required for one cycle is 2π/ω0, so Eq. 25 becomes:

P = dE/dt = (ω0/2π) ΔE    (26)

Electromagnetic Coupling
Because our use of electromagnetism is conceptually quite different from that in traditional Maxwell treatments, we review here the foundations of that discipline from the present perspective.¹¹ It is shown in Ref. [11] that electromagnetism is of totally quantum origin. We saw in Eq. 22 that it is the vector potential A that appears as part of the momentum of the wave function, signifying the coupling of one wave function to one or more other wave functions. So, to stay in a totally quantum context, we must work with electromagnetic relations based on the vector potential and related quantities. The entire content of electromagnetism is contained in the relativistically correct Riemann-Sommerfeld second-order differential equation:

∇²A − (1/c²) ∂²A/∂t² = −μ0 J    (27)

where A = [A, V/c] is the four-potential and J = [J, cρ] is the four-current, A is the vector potential, V is the scalar potential, J is the physical current density (no displacement current), and ρ is the physical charge density, all expressed in the same inertial frame.
Connection with the usual electric and magnetic field quantities E and B is as follows:

E = −∇V − ∂A/∂t    B = ∇ × A    (28)

So, once we have the four-potential A, we can derive any electromagnetic relations we wish.
Eq. 27 has a completely general Green's-function solution for the four-potential A(t) at a point in space, from four-current density J(r, t) in volume element dvol at distance r = |r| from that point:

A(t) = (μ0/4π) ∫ J(r, t ∓ r/c) / r  dvol    (29)

where r is the distance from element dvol to the point where A is evaluated.
The second-order nature of the derivatives in Eq. 27 does not favor any particular sign of space or time, so the solution contains both retarded (t − r/c) and advanced (t + r/c) contributions. Eq. 29 can be expressed in terms of more familiar E&M quantities:

A(t) = (1/(4πε0 c²)) ∫ J(r, t ∓ r/c) / r  dvol    (30)

If the charge density occurs as a small, unified "cloud", as is the case for the wave function of an atomic electron, and the motion of the wave function is non-relativistic, the J integral just becomes the movement of the center of charge:

∫ J dvol = q (d⟨z⟩/dt) ẑ    (31)

Then, from a distance large compared with the size of the wave function, r can be taken from the time-average center of charge, and, as we have chosen previously, the motion is in the z direction:

A_z(r, t) = (q/(4πε0 c² r)) (d⟨z⟩/dt)|_(t ∓ r/c)    (32)

If, in addition, the time-average center of charge is stationary relative to the point where A_z is measured, r is constant in time:

A_z(t) = (q/(4πε0 c² r)) (d⟨z⟩/dt)(t ∓ r/c)    (33)

It is this form that we shall use for the simple examples presented. An important difference between standard Maxwell E&M practice and our use of the four-potential to couple atomic wave functions is highlighted by Wheeler and Feynman [5]:

There is no such concept as "the" field, an independent entity with degrees of freedom of its own.
The field is simply a convenient bookkeeping device for keeping track of the total effect of an arbitrary number of charges on a particular charge at some position in space. The fact that the field is "radiating into space" does not imply that it is carrying energy with it. Energy is only transferred at the position of another charge. Since all charges are the finite charge densities of wave functions, there are no self-energy infinities in this formulation.¹² The assumption of point charges has created untold conceptual complications, as noted above in our discussion of stationary states.
Although no energy is carried by "the" field, because it has no degrees of freedom of its own, electrical engineers have developed extremely clever methods for using it as a bookkeeping device for keeping track of the energy passing between distant charge distributions by pretending that the energy is carried in the field. One of the most ingenious is the Poynting vector S, signifying the flow of power (energy per unit time) per unit area. We take as an example a z-polarized plane wave of amplitude A propagating in the x direction, where the braces signify {x, y, z} vectors, and the final form recognizes that, in free space, kc = ω.

Footnote 12: Richard Feynman is famously known [18] to have abandoned his own Wheeler-Feynman electrodynamics (WFE) because the published version made the assumption (false and unnecessary [1]) that an electric charge does not interact with the field it produces. That assumption was made to eliminate the self-energy infinities of point-like electrons. However, Feynman later realized that some self-energy was needed to explain the Lamb shift. Note that no such infinities exist in the present approach because the electron wave function cannot contract to a point.
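For the plane wave just introduced, the fields and Poynting vector can be sketched as follows; this is our reconstruction from the standard definitions, assuming the phase convention A_z = A cos(kx − ωt) for the text's example:

```latex
% Fields of the z-polarized plane wave A_z = A cos(kx - wt), and the
% resulting Poynting vector S = (E x B)/mu_0. The time average uses
% k = omega/c in free space.
\begin{align*}
  E_z &= -\frac{\partial A_z}{\partial t} = -A\,\omega\sin(kx-\omega t),
  \qquad
  B_y = -\frac{\partial A_z}{\partial x} = A\,k\sin(kx-\omega t),\\[4pt]
  S_x &= \frac{(\vec E\times\vec B)_x}{\mu_0}
       = \frac{A^2\,\omega k}{\mu_0}\sin^2(kx-\omega t),
  \qquad
  \langle S_x\rangle = \frac{A^2\,\omega^2}{2\mu_0 c}.
\end{align*}
```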
We have ascertained how an atom in a superposed state of two eigenstates can gain or lose energy from/to an external time-varying vector potential. We are naturally led to ask how much energy an atom in such a state would radiate into "free space." Far from an overall charge-neutral charge distribution like an atom, the longitudinal gradient of the scalar potential just cancels the longitudinal component of ∂A/∂t, so what is left of the propagating wave is the purely transverse component of the vector potential. Since the current in the atom is in the z direction, the vector potential will be in the z direction, but the surviving propagating field at large distance will be only the transverse component A⊥. In a spherical coordinate system the transverse component of A_z will be A⊥ = A_z sin(θ). At large distances, all propagating waves approach the nature of a plane wave, so we can use Eq. 35 in a form where S_out represents the outward-directed energy flow, whose units look like watts per m². But we must be careful: because of our plane-wave characterization it is only the power propagating in one direction. So people in the business often give this quantity per unit solid angle, called the radiance, whose units are watts per m² per steradian. In our simple configuration the atom can be treated as effectively a point source, so the total power P radiated will be just the integral of S_out over a spherical surface of radius r. In the specific case of our "radiating atom", from Eq. 20 and Eq. 33, we obtain the vector potential A_z at a distance r from the oscillating charge distribution of our atom, neglecting phase and time delay. From the green curve in Fig. 5 we can estimate the dipole matrix element d₁₂, which is q times the "length" between the positive and negative "charge lumps", say d₁₂ ≈ 3qa₀. So, for the H atom, P_rad ≈ 2 × 10⁻⁹ watt. Then from Eq. 17, the time course of the radiation process is given by the relation of Eq. 39, which has the solution of Eq. 40. In the following two sections we will find that atoms spaced by an arbitrary distance exhibit transactions of exactly the same form as shown in Fig. 6. Although the Schrödinger equation itself is linear, state-evolution equations like Eq. 39 are highly nonlinear and can give rise to abrupt changes in state like Eq. 40. This kind of nonlinear behavior is often concealed by the mathematics of statistical approaches to QM. The frequency for this transition is ω ≈ 1.55 × 10¹⁶/s, so in time τ there are ≈ 2 × 10⁶ cycles. This value justifies our assumption that a and b change slowly on the time scale of ω.
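These numbers can be cross-checked with a short script. The time-averaged dipole-radiation formula P = μ₀ω⁴d₁₂²/(24πc) and the lifetime estimate τ = ℏω/P below are our reconstruction of the calculation sketched above (they reproduce the quoted ≈ 2 × 10⁻⁹ W and ≈ 7.9 × 10⁻¹⁰ s); the value d₁₂ ≈ 3qa₀ is the text's estimate.

```python
import math

# Physical constants (SI)
hbar = 1.054571817e-34    # J s
q    = 1.602176634e-19    # C, elementary charge
a0   = 5.29177210903e-11  # m, Bohr radius
mu0  = 4 * math.pi * 1e-7 # T m / A
c    = 2.99792458e8       # m / s

omega = 1.55e16           # rad/s, 2p -> 1s transition frequency (from the text)
d12 = 3 * q * a0          # dipole matrix element estimate, ~3*q*a0 (from the text)

# Time-averaged power radiated by the oscillating dipole (our reconstruction
# of the formula behind the quoted P_rad):
P_rad = mu0 * omega**4 * d12**2 / (24 * math.pi * c)

# Spontaneous-emission time estimated as the time to radiate one quantum:
tau = hbar * omega / P_rad

print(f"P_rad ≈ {P_rad:.2e} W")  # text quotes ≈ 2e-9 W
print(f"tau   ≈ {tau:.2e} s")    # text quotes ≈ 7.9e-10 s
```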
The transition of an excited atom into "empty space" is called spontaneous emission and was the subject of considerable debate during the early development of QM. It was originally thought that the spontaneous-emission lifetime τ, observed as the sharpness of the spectral line, was a local property of each particular electronic transition. However, in 1985 it was observed [19] that the lifetime was not fixed and could be made much longer if the transition occurred in a waveguide beyond cutoff, thereby showing that the energy from the transition had been propagating outward. This is now widely ascribed to the coupling of the source electron to degrees of freedom of the radiation field. One widely-held viewpoint treats the "quantum vacuum" as being made up of an infinite number of quantum harmonic oscillators. The problem with this view is that each such oscillator would have a zero-point energy that would contribute to the energy density of space in any gravitational treatment of cosmology. Even when the energies of the oscillators are cut off at some high value, the contribution of this "dark energy" is 120 orders of magnitude larger than that needed to agree with astrophysical observations. Such a disagreement between theory and observation (called the "cosmological constant problem"), even after numerous attempts to reduce it, is many orders of magnitude worse than any other theory-vs-observation discrepancy in the history of science.
The TI does not suffer from this serious defect, since its vacuum has no degrees of freedom of its own. Where, then, is the radiated energy going? The obvious candidate is the enormous continuum of states of matter in the early universe, source of the cosmic microwave background, to which atoms here and now are coupled by the quantum handshake. The Poynting vector is just a bookkeeping mechanism for summing up all the full or partial transitions to matter in that continuum. For independent discussions from the two of us, see ref. [11], p. 94, and ref. [7].

Two Coupled Atoms
The central point of this paper is to understand the photon mechanism by which energy is transferred from a single excited atom (atom α) to another single atom (atom β) initially in its ground state. We proceed with the simplest and most idealized case of two identical atoms, where: (1) excited atom α will start in a state where b ≈ 1 and a is very small, but never zero, because of its ever-present random statistical interactions with a vast number of other atoms in the universe; and (2) likewise, atom β will start in a state where a ≈ 1 and b is very small, but never zero, for the same reason.
Thus each atom starts in a two-component state that has an oscillating electrical current described by Eq. 20, where we have taken excited atom α as our reference for the phase of the oscillations (φ_α = 0), and the approximation assumes that a and b are changing slowly on the scale of ω₀.
Although that random starting point will contain small excitations of a wide range of phases, we simplify the problem by assuming the following: (1) all of the electric field E_β at atom α is supplied by atom β; (2) all of the electric field E_α at atom β is supplied by atom α; (3) the dipole moments of both atoms are in the z direction; and (4) the atoms are separated by a distance r in a direction orthogonal to z. The vector potential at distance r from a small charge distribution oscillating in the z direction is then given by Eq. 33. Since all electron motions and fields are in the z direction, we can henceforth omit the z subscript.
When the distance r is small compared with the wavelength (r ≪ 2πc/ω₀), the delay r/c can be neglected (footnote 13). Using Eq. 23 and Eq. 33, we can write the vector potentials and electric fields from the two atoms. When atom α is subject to electric field E_β and atom β is subject to electric field E_α, the energy of both atoms will change with time in such a way that the total energy is conserved. Thus the superposition amplitudes a and b of both atoms change with time, as described by Eq. 17 and Eq. 26, from which Eq. 44 follows. We shall find that the results we arrive at here are directly adaptable to the centrally important case in which the atoms are separated by an arbitrary distance, which will be analyzed in the next section.

Footnote 13: See the footnote for Eq. 23. Since this dimension is of the order of 10⁻¹⁰ m and the wavelength is of the order of 10⁻⁷ m, this case can be accommodated.
From Eq. 44, using the z derivatives from Eq. 20, we obtain Eq. 45. These equations describe energy transfer between the two atoms in either direction, depending on the sign of sin(φ). For transfer from atom α to atom β, ∂E_α/∂t is negative. Since this transaction dominates all competing potential transfers, its amplitude will be maximum, so sin(φ) = −1. If the starting state had been atom β in the excited (b ≈ 1) state, the choice sin(φ) = +1 would have been appropriate.

Using sin(φ) = −1, Eq. 45 becomes Eq. 47. Energy is conserved by the two atoms during the transfer, and the wave functions are normalized; after these substitutions, Eq. 47 becomes Eq. 49, which has the solution Eq. 50. The direction and magnitude of the entire energy-transfer process is governed by the relative phase φ of the electric field and the electron motion in both atoms: When the electron motion of either atom is in phase with the field, the field transfers energy to the electron, and the field is said to excite the atom.
When the electron motion has opposite phase from the field, the electron transfers energy "to the field", and the process is called stimulated emission.
Therefore, for the photon transaction to proceed, the field from atom α must have a phase such that it "excites" atom β, while the field from atom β must have a phase such that it absorbs energy and "de-excites" atom α. In the above example, that unique combination occurs when sin(φ) = −1. Figure 6 shows this photon transfer of energy E₀ = ℏω₀ from atom α to atom β, from Eq. 50. Using the lower-state energy as the zero reference, E₀b² is the energy of the state. The green curve shows the normalized power radiated by atom α and absorbed by atom β, from Eq. 49. The optical oscillations at ω₀ are not shown, as they are normally many orders of magnitude faster than the transfer time scale 1/g = τ. In the next section we will find that atoms spaced by an arbitrary distance exhibit transactions of exactly the same form.
The random starting point for the transaction involving one excited atom will contain small excitations of a wide range of phases. Eq. 49 is a highly nonlinear equation: the amplitude of each of those excitations will grow at a rate proportional to its own phase match. Thus the excitation from a random recipient atom that happens to have sin(φ) ≈ −1 will win the race and become the dominant partner in the coordinated oscillation of both atoms. Thus, we have identified the source of the intrinsic randomness within quantum mechanics, an aspect of the theory that has been considered mysterious since its inception in the 1920s.
Each wave function will thus evolve its motion to follow the applied field to its maximum resonant coupling, and we can take sin(φ) = −1 in these expressions, which we have done in Eq. 47 and Fig. 6. From a TI point of view, both atoms start in stable states, with each having extremely small admixtures of the other state, so that both have very small dipole moments oscillating with angular frequency ω₀ = (E₂ − E₁)/ℏ. We assume that in atom α this admixture creates an initial positive-energy offer wave that interacts with the small dipole moment of absorber atom β to transfer positive energy, and that in atom β this admixture creates an initial negative-energy confirmation wave that interacts with the small dipole moment of the excited emitter atom α to transfer negative energy, as shown schematically in Fig. 1. Because of these admixtures, both atoms develop small time-dependent dipole moments that, because of the mixed-energy superposition of states as shown in Fig. 5, oscillate with the same frequency ω₀ = (E₂ − E₁)/ℏ and act as coupled dipole resonators. The phasing of their resulting waves is such that energy is transferred from emitter to absorber at a rate that initially rises exponentially, as shown in Fig. 6.
Energy transferred from one atom to another causes an increase in the minority state of the superposition, thus increasing the dipole moment of both atoms and increasing the coupling and, hence, the rate of energy transfer. This self-reinforcing nonlinear behavior gives the transition its initial exponential character and drives the transaction to its conclusion.
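The avalanche described above can be illustrated numerically. On our schematic reading of Eq. 49, the excited-state occupation u = b² of the receiving atom obeys a logistic-type equation du/dt = g u(1 − u): growth is exponential while u ≪ 1, then saturates as the transfer completes. The rate g = 1 and the tiny initial admixture u₀ = 10⁻⁸ below are illustrative assumptions, not values from the text.

```python
# Sketch of the logistic avalanche of a two-atom transaction:
# du/dt = g*u*(1-u). Forward-Euler integration is adequate here.
g = 1.0    # coupling rate, arbitrary units (assumed)
u = 1e-8   # initial excited-state occupation of atom beta (assumed tiny)
dt = 1e-3
history = [u]
for _ in range(40000):
    u += g * u * (1.0 - u) * dt
    history.append(u)

print(f"start: {history[0]:.1e}, end: {history[-1]:.6f}")
```

Halfway through the transfer (u = 1/2) the rate is maximal, which corresponds to the steep central region of the transition in Fig. 6.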
Thus, mutual offer/confirmation perturbations of emitter and absorber acting on each other create a frequency-matched pair of dipole resonators as two-component superposed states, and this dynamically unstable system either exponentially avalanches to the formation of a complete transaction, or it disappears when a competing transaction forms elsewhere.
In a universe full of particles, this process does not occur in isolation, and both emitter and absorber are also randomly perturbed by waves from other systems that can randomly drive the exponential instability in either direction. This is the source of the intrinsic randomness in quantum processes. Ruth Kastner [16] describes this intrinsic randomness as "spontaneous symmetry breaking", which perhaps clarifies the process by analogy with quantum field theory.
We note here that the probability of the transition must depend on two things: the strength of the electromagnetic coupling between the two states, and the degree to which the wave functions of the initial states are superposed. The magnitude of the latter must depend on the environment, in which many other atoms are "chattering" and inducing state-mixing perturbations. Thus, we see the emergence of Fermi's "Golden Rule" [17], the assertion that a transition probability in a coupled quantum system depends on the strength of the coupling and the density of states to which the transition can proceed. The emergence of Fermi's Golden Rule is an unexpected gift delivered to us by the logic of the present formalism.
It is certainly not obvious a priori that the Schrödinger recipe for including the vector potential in the momentum (Eq. 22), together with the radiation law for a charge in motion (Eq. 33), would conspire to enable the superposed states of two electromagnetically coupled wave functions to reinforce in such a way that, from the asymmetrical starting state, the energy of one excited atom could explosively and completely transfer to the unexcited atom, as shown in Fig. 6.
If nature had worked a slightly different way, an interaction between those atoms might have resulted in a different phase, and no full transaction would have been possible.The fact that transfer of energy between two atoms has this nonlinear, self-reinforcing character makes possible arrangements like a laser, where many atoms in various states of excitation participate in a glorious dance, all participating at exactly the same frequency and locked in phase.
Why do the signs come out that way? No one has the slightest idea, but the behavior is so remarkable that it has been given a name: photons are classified as bosons, meaning that they behave that way! That remarkable behavior is not due to any "particle" nature of the electromagnetic field, but to the quantum nature of the states of electrons in atoms, and to how the movement of an electron in a superposed state couples to another such electron electromagnetically. The statistical QM formulation needed some mechanism to finalize a transaction and did not recognize the inherent nonlinear positive feedback that nature built into a pair of coupled wave functions. Therefore, the founders had to put in wave-function collapse "by hand", and it has mystified the field ever since.

Two Atoms at a Distance
We saw that for two atoms to exchange energy, the field E z at atom β must come from atom α, the field E z at atom α must come from atom β, and the phases of the oscillations must stay in coherent phase with a particular phase relation during the entire transition.This phase relation must be maintained even when the two atoms are an arbitrary distance apart.This is the problem we now address.
As before, we consider all electron motions and fields in the z direction, and drop the z subscript. To be definite, we consider the case where the two atoms are separated along the x axis, atom α at x = 0 in the excited state and atom β at x = r in its ground state, so their separation r is orthogonal to the z-directed current in the atoms. The "light travel time" from atom α to atom β is thus ∆t = r/c. What is observed is that the energy radiated by atom α at time t is absorbed by atom β at time t + ∆t. This behavior is familiar from the behavior of a "particle", which carries its own degrees of freedom with it: it leaves x = 0 at time t and arrives at x = r at time t + ∆t after traveling at velocity c. Thus, Lewis' "photon" became widely accepted as just another particle, with degrees of freedom of its own. We shall see that this assumption violates a wide range of experimental findings.
For atom β, Eq. 44 becomes a retarded-field equation: the retarded field from atom α interacts with the motion of the electron in atom β. The only difference from our zero-delay solution is that the energy transfer has its time origin shifted by ∆t = r/c. This result has required that we choose a positive sign for the ±r/c in Eq. 33. By doing that, we are building an "arrow of time", a preferred time direction, into the otherwise even-handed formulation. In particular, we are assuming that the retarded solution transfers positive energy. So far everything is familiar and consistent with commonly held notions. However, the standard picture leaves no way for atom α to lose energy to atom β: it does not conserve energy! When energy is transferred between two atoms spaced apart on the x axis, the field amplitude must be "coordinate and symmetrical," as Lewis described. The field E_α(x = r) at the second atom due to the current in the first must be exactly equal in magnitude to the field E_β(x = 0) at the first atom due to the current in the second, but separated in time by ∆t. For atom α, Eq. 44 becomes the corresponding advanced-field equation. So the field E_β from atom β, which arises from the motion of its electron at time t + ∆t, must arrive at atom α at time t, earlier than its motion by ∆t. The only field that fulfills this condition is the advanced field from atom β, signified by choosing a negative sign for the ±r/c in Eq. 33. That choice uniquely satisfies the requirement for conservation of energy. It also builds a complementary "arrow of time" into the formulation: we assume that the advanced solution transfers negative energy to the past. These two assumptions create a new "handshake" symmetry that is not expressed in conventional Maxwell E&M. Once these choices for the ±r/c in Eq. 33 are made, the resulting equations for each of the energy derivatives in Eq. 45 are unchanged when t + ∆t is substituted for t in the expression for ∂E_β/∂t. So each transition proceeds in the local time frame of its atom, for all the world (except for amplitude) as if the other atom were local to it. This "locality on the light cone" is the meaning of Lewis' comment: In a pure geometry it would surprise us to find that a true theorem becomes false when the page upon which the figure is drawn is turned upside down. A dissymmetry alien to the pure geometry of relativity has been introduced by our notion of causality.
The dissymmetry that concerned Lewis has been eliminated. This conclusion is completely consistent with the 1909 formulation of Einstein [22], who was critical of the common practice of simply ignoring the advanced solutions for electromagnetic propagation: I regard the equations containing retarded functions, in contrast to Mr. Ritz, as merely auxiliary mathematical forms. The reason I see myself compelled to take this view is first of all that those forms do not subsume the energy principle, while I believe that we should adhere to the strict validity of the energy principle until we have found important reasons for renouncing this guiding star.
After defining the retarded solution as f₁ and the advanced solution as f₂, he elaborates: Setting f(x, y, z, t) = f₁ amounts to calculating the electromagnetic effect at the point x, y, z from those motions and configurations of the electric quantities that took place prior to the time point t. Setting f(x, y, z, t) = f₂, one determines the electromagnetic effects from the motions and configurations that take place after the time point t. In the first case the electric field is calculated from the totality of the processes producing it, and in the second case from the totality of the processes absorbing it... Both kinds of representation can always be used, regardless of how distant the absorbing bodies are imagined to be.
The choice of advanced or retarded solution cannot be made a priori: It depends upon the boundary conditions of the particular problem at hand.The quantum exchange of energy between two atoms just happens to require one advanced solution carrying negative energy and one retarded solution carrying positive energy to satisfy its boundary conditions at the two atoms, which then guarantees the conservation of energy.
Thus, the even-handed time symmetry of Wheeler-Feynman electrodynamics [5,6] and of the Transactional Interpretation of quantum mechanics [1], as most simply personified in the two-atom photon transaction considered here, arises from the symmetry of the electromagnetic propagation equations, with boundary conditions imposed by the solution of the Schrödinger equation for the electron in each of the two atoms, as foreseen by Schrödinger. We see that the missing ingredients in previous failed attempts, by Schrödinger and others, to derive wave function collapse from the wave mechanics formalism were:
1. Advanced waves were not explicitly used as a part of the process.
2. The nonlinear, self-reinforcing nature of the interaction of two phase-locked wave functions coupled electromagnetically, which drives the transition to completion, was not appreciated.
To keep in touch with experimental reality, we can estimate the "transition time" τ from Eq. 47 and Eq. 49. As we did for Eq. 39, from the green curve in Fig. 5 we can estimate the dipole matrix element, which is q times the "length" between the positive and negative "charge lumps", say d₁₂ ≈ 3qa₀. At the steepest part of the transition, all the a and b terms will be 1/√2. From any treatment of the hydrogen spectrum we obtain, for the 210 → 100 transition, the needed matrix elements, from which the transition time follows. We can compare this with the spontaneous-emission time in Eq. 39, τ = 24πℏc/(d₁₂² μ₀ ω³) ≈ 7.9 × 10⁻¹⁰ sec (Eq. 58). So, to have the transaction to a nearby atom take the same time as a spontaneous transition, the atoms would need to be within ≈ 5 × 10⁻⁹ m, only ∼100 times the radius of the wave function! If the assumed 1/r dependence of the vector potential (r dependence of the transition time) were the whole picture, it would take 1/25 of a second for a transaction to complete if the atoms were one meter apart. That seems much too long. There must be something still missing from our calculation!

Paths of Interaction
In the wonderful little book QED [23], our Caltech colleague the late Richard Feynman gives a synopsis of the method by which light propagating along multiple paths initiates a transaction, which he calls an event: Grand Principle: The probability of an event is equal to the square of the length of an arrow called the "probability amplitude."... General Rule for drawing arrows if an event can happen in alternative ways: Draw an arrow for each way, and then combine the arrows ("add" them) by hooking the head of one to the tail of the next. A "final arrow" is then drawn from the tail of the first arrow to the head of the last one. The final arrow is the one whose square gives the probability of the entire event.
Feynman's "arrow" is familiar to every electrical engineer as a phasor, introduced in 1894 by Steinmetz [24,25] as an easy way to visualize and quantify phase relations in alternating-current systems. In physics the technique is known as the sum over histories and led to Feynman path integrals. His "probability amplitude" is the amplitude of our vector potential, whose square is the intensity of the light.
Feynman then illustrates his Grand Principle with simple examples of how a source of light S at one point in space and time influences a receptor of that light P at another point in space and time, as shown in Fig. 7. It is somewhat unnerving to many people to learn from these examples that the resultant intensity is dependent on every possible path from S to P. We strongly recommend that little book to everyone. That discussion, as well as what follows, details the behavior of highly coherent electromagnetic radiation with a well-defined, highly stable frequency ω and wavelength λ.
Of course we have all been taught that light travels in straight lines which spread out as they radiate from the source, so the resultant intensity decreases as 1/r², where r is the distance from the source S. But, if the light intensity at P depends on all of the paths, how can this 1/r² dependence come about? Well, let's follow Feynman's QED logic¹⁴: We can see from the "seahorse" phasor diagram at the right of Fig. 7 that the vast majority of the length of the resultant arrow is contributed by paths very near the straight line S-M-P. So let's make a rough estimate of how many paths there are near enough to "count." We can see from the diagram that, once the little arrows are plus or minus 90° from the phase of the straight line, additional paths just cause the resultant to wind around in a tighter and tighter circle, making no net progress. So the uppermost and lowermost paths that "count" are about a quarter wavelength longer than the straight line. Let's use r for the straight-line distance S-P, λ for the wavelength, and y for the vertical distance where the path intersects the midline above M.

(Fig. 7 caption: The solid curve on the "TIME" plot shows the propagation time, and hence the accumulated phase, of the corresponding path. Each small arrow on the "TIME" plot is a phasor that shows the magnitude (length) and phase (angle) of the contribution of that path to the resultant total potential at P. The "sea horse" on the far right shows how these contributions are added to form the total amplitude and phase of the resultant potential. From Feynman, QED, Fig. 35.)
Then Pythagoras tells us that the length l/2 of either segment of the path is l/2 = √((r/2)² + y²), so the entire path length is l = 2√((r/2)² + y²) = r√(1 + 4y²/r²). We are particularly interested in atoms at a large distance from each other, and will guess that this means that y is very small compared to r, so all the paths involved are very close to the straight line. We can check that assumption later. Since y²/r² ≪ 1, we can expand the square root: l ≈ r(1 + 2y²/r²) = r + 2y²/r. So the outermost path that contributes is 2y²/r longer than the straight-line path. We already decided that the maximum extra length of a contributing path would be about a quarter wavelength: 2y²/r = λ/4, so y² = λr/8. We can now check our assumption that y²/r² ≪ 1. If r = 1 m and our 210 → 100 transition has λ ≈ 10⁻⁷ m, then y²/r² ≈ 10⁻⁸, so our assumption is already very good, and gets better rapidly as r gets larger.
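The geometry above is easy to check numerically, using the text's values λ ≈ 10⁻⁷ m and r = 1 m:

```python
import math

r = 1.0     # m, straight-line distance S-P (from the text)
lam = 1e-7  # m, wavelength of the 210 -> 100 transition (from the text)

# Outermost contributing path: extra length 2*y^2/r equals a quarter wavelength,
# so y^2 = lambda*r/8.
y2 = lam * r / 8.0
y = math.sqrt(y2)

# Exact path length vs. the small-y expansion l ≈ r + 2*y^2/r:
l_exact = 2.0 * math.sqrt((r / 2.0)**2 + y**2)
l_approx = r + 2.0 * y2 / r

print(f"y^2/r^2         = {y2 / r**2:.2e}")  # text estimates ~1e-8
print(f"expansion error = {abs(l_exact - l_approx):.1e} m")
```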
How do we estimate the number of paths from S to P? Well, no matter how we choose the path spacing radiating out equally in all directions from S, the number of paths that "count" will be proportional to the solid angle subtended by the outermost such paths. Paths outside that "bundle" will have phases that cancel out, as Feynman describes. The angle of the uppermost path is y/r. Paths also radiate out perpendicular to the page to the same extent, so the total number that "count" goes as the solid angle Ω = π(y/r)². Feynman tells us that the resultant amplitude A is proportional to the total number of paths that "count," so we conclude from Eq. 62 that A ∝ π(y/r)² = πλ/(8r). So this is the fundamental origin of the 1/r law for amplitudes. The intensity is proportional to the square of the amplitude, and therefore goes like 1/r², as we all learned in school. So, instead of lines of energy radiating out into space in all directions, Feynman's view of the world encourages us to visualize the source of electromagnetic waves as "connected" to each potential receiver by all the paths that arrive at that receiver in phase. Just to convince us that all this "path" stuff is real, Feynman gives numerous fascinating examples where the 1/r² law doesn't work at all. Our favorite is shown in Fig. 8: The piece of glass does not alter the amplitude of any individual path very much; it might lose a few percent due to reflection at the surfaces. But it slows down the speed of propagation of the light. And the thickness of the glass has been tailored to slow the shorter paths more than the longer paths, so all paths take exactly the same time. The net result is that the oscillating potential propagating along every path reaches P in phase with all the others! Now we are adding up all the little phasor arrows, and they all point in the same direction! We can estimate how the amplitude is affected by this little chunk of glass: For the arrangement shown in Fig. 8, at the size it is printed on a normal-sized page, r ≈ 5 cm. From Eq. 63, the solid angle of the "bundle" of paths without the glass was of the order of 10⁻⁷. The solid angle enclosing the paths through the glass lens is ≈ 1 steradian, so the little piece of glass has increased the potential at P by seven orders of magnitude, and the intensity of the light by fourteen orders of magnitude!
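The orders of magnitude in this estimate can be checked with a few lines of arithmetic. The quarter-wave criterion y² = λr/8 is from the preceding section, and the ≈ 1 steradian figure for the lensed case is the text's estimate; the unrounded geometry gives an amplitude gain of roughly 10⁶ to 10⁷, the same order-of-magnitude story.

```python
import math

lam = 1e-7  # m, wavelength (from the text)
r = 0.05    # m, S-P distance for the printed Fig. 8 arrangement (from the text)

# Solid angle of the "bundle" of contributing paths without the glass:
# Omega = pi*(y/r)^2 with y^2 = lambda*r/8 from the quarter-wave criterion.
omega_no_glass = math.pi * lam / (8.0 * r)

omega_with_lens = 1.0  # steradian, the text's estimate for the lensed case

amplitude_gain = omega_with_lens / omega_no_glass
intensity_gain = amplitude_gain**2

print(f"Omega without glass ≈ {omega_no_glass:.1e} sr")  # ~1e-7 per the text
print(f"amplitude gain      ≈ {amplitude_gain:.1e}")
print(f"intensity gain      ≈ {intensity_gain:.1e}")
```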
We have learned an important lesson from Feynman's characterization of propagation phenomena: changing the configuration of components of the arrangement in what appear to be innocent ways can make drastic differences to the resultant potential at certain locations. The reader will find many other eye-opening examples in QED and FLP I-26. We will find in the next section that two atoms in a "quantum handshake" form a pattern of paths that greatly increases the potential by which the atoms are coupled, and hence shortens the transition time by many orders of magnitude.

Global Field Configuration
We are now in a position to visualize the field configuration for the quantum exchange of energy. From Eq. 43 and Eq. 41, and using sin(φ) = −1, the total field is composed of the sum of the retarded solution E_α from atom α and the advanced solution E_β from atom β. Including both x and y coordinates in the distances r_α and r_β from the two atoms, we can write the vector potentials from the two atoms anywhere in the x-y plane. An example of the total vector potential A_tot = A_α + A_β along the x axis is shown in Fig. 9, which plots the normalized vector potential along the x axis (in units of wavelength/2π) between two atoms in the "quantum handshake" of Eq. 65; the wave propagates smoothly from atom α (left) to atom β (right) (animation in ref. [32]). A "snapshot" of the potential of Eq. 65 at one particular time for the full x-y plane is shown in Fig. 10.
The still image in this figure looks like a typical interference pattern from two sources, a "standing wave." There are high-amplitude regions of constructive interference, which appear light blue and yellow on this plot. These are separated from each other by low-amplitude regions of destructive interference, which appear green. In a standing wave, these maxima would oscillate at the transition frequency, with no net motion. The animation, however, shows a totally different story: instead of oscillating in place as they would in a standing wave, the maxima of the pattern are moving steadily from the source atom (left) to the receiving atom (right). This movement is true not only of the maxima between the two atoms, but also of maxima well above and below the line between the two atoms. These maxima can be thought of as Feynman's paths, each carrying energy along its trajectory from atom α to atom β (Fig. 10 shows the two atoms in the "quantum handshake" of Eq. 65; animation in ref. [32]). For those readers who do not have access to the animations, the same story is illustrated by a stream-plot of the Poynting vector in the x-y plane, shown in Fig. 11. We can get a more precise idea of the phase relations by looking at the zero crossings of the potential at one particular time, as shown in Fig. 12. Paths from atom 1 to atom 2 can be traced through either the high-amplitude regions or the low-amplitude regions. The paths shown in Fig. 12 are traced through high-amplitude regions.
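The "traveling maxima" behavior can be reproduced with a toy one-dimensional version of the handshake potential of Eq. 65: a retarded wave from atom α at x = 0 plus an advanced wave converging on atom β at x = r. The unit normalization, c = 1 units, and the choice of an integer number of wavelengths between the atoms are simplifying assumptions for this sketch.

```python
import math

lam = 1.0
k = 2 * math.pi / lam
omega = k        # units with c = 1
r = 10.0 * lam   # atom separation: integer number of wavelengths (assumed)

def A_tot(x, t):
    """Toy handshake potential on the axis between the atoms."""
    ret = math.cos(omega * t - k * x) / x              # retarded, from alpha
    adv = math.cos(omega * t + k * (r - x)) / (r - x)  # advanced, into beta
    return ret + adv

def zero_near(x0, t, dx=1e-4):
    """Scan upward from x0 for the next sign change, then interpolate."""
    x = x0
    while A_tot(x, t) * A_tot(x + dx, t) > 0:
        x += dx
    a, b = A_tot(x, t), A_tot(x + dx, t)
    return x + dx * a / (a - b)

# Track one zero crossing of the pattern near the midpoint at two times.
t0, dt = 0.0, 0.05
x1 = zero_near(4.6, t0)
x2 = zero_near(4.6, t0 + dt)
print(f"zero crossing moved by {x2 - x1:.4f} (c*dt = {dt})")
```

Between the atoms the two terms share the same phase ωt − kx (up to a multiple of 2π), so their sum is a pure traveling wave: the zero crossings, and hence the maxima, advance from α toward β at speed c, which is the behavior seen in the animation of Fig. 9.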
The central set of paths, delimited by the red lines, is responsible for the conventional 1/r dependence of the potential, as described with respect to Eq. 63. Working outward from there, each high-amplitude path region is separated from the next by a slim low-amplitude region. It is a remarkable property of this interference pattern that each low-amplitude path has π more phase accumulation along it than the prior high-amplitude path and π less than the next high-amplitude path. The low-amplitude paths are the ones that contribute to the "de-phasing" in this arrangement, but they are very slim and of low amplitude, so they do not de-phase the total signal appreciably. And the phases of the paths through the high-amplitude regions are separated by 2πn, where n is an integer: All waves propagating from atom α to atom β along high-amplitude paths arrive in phase!

Figure 12. The zero crossings of the handshake vector potential at t = 0. Paths near the axis (between the two red lines) are responsible for the conventional 1/r dependence of the potential. Paths shown through high-amplitude regions have an even number of zero crossings, and thus the potentials traversing these paths all arrive in phase, adding to the central potential.
In Feynman's example shown in Fig. 7, there are an equal number of paths of any phase, so each one has an opposite to cancel it out. In Fig. 8, the lens makes all paths have equal time delay, which enables them all to arrive with the same phase.
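Feynman's one-way picture in Fig. 7 can be checked numerically. In the standard Fresnel-integral form, each path at normalized transverse coordinate u contributes a unit phasor exp(iπu²/2); summing these shows that the net amplitude is dominated by the central bundle of paths, while an equal-width off-axis bundle nearly cancels itself. The window widths and discretization below are illustrative choices, not values from the paper.

```python
import numpy as np

# Phasor sum over one-way paths in Fresnel-integral form: each normalized
# transverse offset u contributes exp(i*pi*u^2/2).
u = np.linspace(-50.0, 50.0, 400_001)
du = u[1] - u[0]
phasor = np.exp(1j * np.pi * u**2 / 2)

def band_sum(lo, hi):
    """Phasor sum (rectangle rule) over paths with lo <= u <= hi."""
    m = (u >= lo) & (u <= hi)
    return phasor[m].sum() * du

full    = band_sum(-50.0, 50.0)  # all paths: converges to the exact value 1 + 1j
central = band_sum(-1.0, 1.0)    # first Fresnel zone around the direct path
distant = band_sum(10.0, 12.0)   # equal-width off-axis bundle

# The central zone alone carries an amplitude comparable to the whole sum;
# the off-axis bundle's phasors wrap around rapidly and nearly cancel.
print(abs(full), abs(central), abs(distant))
```

This is the quantitative content of Fig. 7's "sea horse" spiral: away from the stationary-phase point the phasors curl up and contribute almost nothing, which is why only the central bundle survives in ordinary one-way propagation.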
The phase coherence of the advanced-retarded handshake creates a pattern of potentials with a unique property: It is like neither of Feynman's examples in Fig. 7 and Fig. 8. Its high-amplitude paths do arrive in phase, but by a completely different mechanism. It all starts with the bundle of paths between the red lines, which has the 1/r amplitude, just as if there were no quantum mechanism. Then, as the handshake begins to form, additional paths are drawn into the process. The process is self-reinforcing on two levels: the exponential increase in dipole moment, and the exponential increase in the number of paths that arrive in phase. Paths that formerly would have cancelled the in-phase ones are "squeezed" into extremely narrow regions, all of low amplitude, as can be seen in Fig. 12. Thus a large fraction of the solid angle around the atoms is available for the in-phase high-amplitude paths.
Returning to our two H atoms spaced 1 meter apart, we found in Eq. 57 that, using the standard 1/r potential, the "transition time" for the quantum handshake was ≈ 0.04 sec. But that calculation only counted the paths within the innermost solid angle, inside the two red lines in Fig. 12. By Eq. 63, those paths account for a solid angle of ≈ 5 × 10⁻⁸. So, if we were able to use the full 4π solid angle because of the "self-focusing" of the quantum handshake, the transition would be shortened by a factor of 4π/(5 × 10⁻⁸) ≈ 2.5 × 10⁸, giving a transition time of τ ≈ 1.6 × 10⁻¹⁰ sec, which is comparable to the spontaneous transition time τspon ≈ 3.3 × 10⁻¹⁰ sec given in Eq. 58. In practice, of course, the self-focusing does not make the full 4π solid angle available, so we expect less than the full shortening of the transition. But after eight orders of magnitude, we can't complain about factors of a few. And as the distance becomes larger, the self-focusing enhancement factor becomes larger as well.
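The arithmetic of this estimate can be checked directly; the numbers below are those quoted above (the solid-angle value comes from the paper's Eq. 63, the 0.04 sec from Eq. 57):

```python
import math

tau_1r  = 0.04         # transition time (sec) from central-bundle paths only (Eq. 57)
omega_c = 5e-8         # solid angle (sr) of the central bundle, from Eq. 63
full    = 4 * math.pi  # full sphere, available if self-focusing recruits all paths

enhancement = full / omega_c        # ~2.5e8 speed-up factor
tau_focused = tau_1r / enhancement  # ~1.6e-10 sec

print(enhancement, tau_focused)
```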
Following Feynman's program has led us to conclude that the vector potentials from all paths sum to make a highly amplified connection between distant atoms. The advanced-retarded potentials form nature's very own phase-locked loop, nature's own Giant Lens in the Sky! The consequences of this fact are staggering: Once an initial handshake interference pattern is formed between two atoms that have their wave functions synchronized, its strength grows explosively, not only because the dipole moment of each atom grows exponentially, but also because a substantial fraction of the possible interaction paths between the two atoms propagate through high-amplitude regions, independent of the distance between them! So we have here the solution to the long-standing mystery of the "collapse of the wave function" of the "photon." The interaction depends critically on the advanced-retarded potential handshake to keep all paths in phase. Ordinary propagation over very long paths becomes "de-phased" by the slightest variations in the propagation properties of the medium. By contrast, the phases of high-amplitude handshake paths are always related by an even multiple of π to the phases of other such paths. Paths separated in phase by odd multiples of π are always of low amplitude and do not cancel the even-π paths as they would in a one-way propagating wave like those Feynman used in his illustrations.
The interaction proceeds in the local time frame of each atom because the atoms are linked by the advanced-retarded potential. The waves carrying positive energy from emitter to absorber are retarded waves with positive transit time, and the waves carrying negative energy from absorber to emitter are advanced waves with negative transit time. Therefore, aside from the time-of-flight propagation of the transferred energy, there is no "round trip" time delay in the quantum-jump process, and it is effectively instantaneous in each atom's frame of reference. In other words, the perturbations induced by the initial offer/confirmation wave exchange trigger the formation of a full-blown transaction in which one photon's worth of energy E₀ = E₂ − E₁ is transferred from emitter to absorber with only a propagation-time delay.

Relevance to the Transactional Interpretation
The calculation that we have presented here, with its even-handed treatment of advanced and retarded four-potentials, was inspired by WFE and the Transactional Interpretation, but it also provides interesting insights that clarify and modify the mechanism by which a transaction forms. Wheeler-Feynman electrodynamics suggests that a retarded wave, arriving at a potential absorber, stimulates that entity to generate an advanced wave. The TI suggests that this advanced wave arrives back at the emitter with amplitude ψψ*, a relation suggesting the Born rule.
However, from our calculation we see that a slightly different process is described. Both emitter and absorber initially have small admixtures of the opposite eigenstate, giving them dipole moments that oscillate at the same difference-frequency ω₀. The two oscillations each have an environment-induced random phase. If the phases have the correct relation [sin(φ) ≈ −1], the system becomes a phase-locked loop and avalanches to a final state of multi-path energy transfer that satisfies all boundary conditions. In that scenario, the initial confirmation wave is likely to have an amplitude much weaker than WFE would suggest, and the quantity ψψ* is used, as by Schrödinger, to provide the electron density function.
How can a linear system generate such nonlinear behavior? While the Schrödinger equation is indeed linear, Eq. 49, which governs the evolution of transaction formation and wave function collapse, is highly nonlinear. Thus the TI's assertion that the offer wave "stimulates the generation of the confirmation wave" must be modified: Rather, advanced and retarded potentials, boundary conditions at both ends, and a fortuitous matching of phase together trigger the nonlinear avalanche and bring about the transaction.
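The qualitative shape of such a nonlinear avalanche can be sketched with a toy model. The sketch below is not the paper's Eq. 49: it simply assumes, as a stand-in, that the emitter's upper-state occupation u = bα² decays at a rate proportional to the product of the two atoms' state admixtures, du/dt = −2g·u·(1−u). A logistic equation of this form reproduces the characteristic slow start (tiny initial admixture), explosive middle, and saturating finish of the transfer curves in Fig. 6.

```python
import math

g  = 1.0    # coupling rate (illustrative); transfer time scale ~ 1/g
u  = 0.999  # initial upper-state occupation b_alpha^2 (tiny ground-state admixture)
dt = 1e-4   # Euler time step
t  = 0.0
t_half = None  # time at which half the energy has been handed over

# Toy avalanche: du/dt = -2*g*u*(1-u).  Near u = 1 the decay rate ~ 2*g*(1-u)
# is tiny (slow start); it peaks at u = 1/2 and slows again near u = 0.
while u > 1e-3:
    u += dt * (-2 * g * u * (1 - u))
    t += dt
    if t_half is None and u <= 0.5:
        t_half = t

# Analytic logistic midpoint for comparison: t = ln(u0/(1-u0)) / (2*g)
print(t_half, math.log(0.999 / 0.001) / (2 * g))
```

The point of the sketch is the instability: the equation is linear to first order near u = 1 but self-reinforcing once the admixture grows, so a barely perturbed excited state sits almost still and then "avalanches" to completion, which is the collapse behavior described above.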
We also note that the advanced and retarded waves do not carry "information" in the usual sense in either time direction; they only deliver a pair of oscillating four-potentials to the site of a pair of oscillating charges, leading dynamically to an initially exponential rise in coupling, a focusing of alternative paths, the formation of a transaction, a transfer of energy, and the enforcement of conservation laws. 15 For the Transactional Interpretation, this phase-selection process clarifies the randomizing mechanism by which, in the first stage of transaction formation, the emitter makes a random choice among competing offer waves arriving from many potential absorbers. The offer wave with the best phase is likely to win, even if it comes from far away.
We saw in the derivation of the coupling of two separated atoms that it was necessary to use the advanced four-potential in order to satisfy the law of conservation of energy. This, not "information transfer," is the role of the quantum handshake in the Transactional Interpretation. Quantum handshakes act to enforce conservation laws and do not form unless all conserved quantities are properly transferred and conserved. This is what is going on in quantum entanglement: the separated parts of a quantum system are linked by conservation laws that are enforced by V-shaped advanced-retarded quantum handshakes [1] and cannot emerge as a completed transaction unless those conservation laws are satisfied. In this context, we note that the Transactional Interpretation, using linked advanced-retarded handshakes, is able to explain in detail the behavior of over 26 otherwise paradoxical and mysterious quantum optics experiments and gedankenexperiments. 16 If we cannot eliminate rival QM interpretations based on their failure in experimental tests, we should eliminate them when they fail to explain paradoxical quantum optics experiments (as almost all of them do).

Conclusion
The development of our understanding of quantum systems began with a physical insight of de Broglie: momentum is the wave vector of a propagating wave of some sort. Schrödinger is well known for developing a sophisticated mathematical structure around that central idea, only a shadow of which remains in current practice. What is less well known is that Schrödinger also developed a deep understanding of the physical meaning of the mathematical quantities in his formalism. That physical understanding enabled him to see the mechanism responsible for the otherwise mysterious quantum behavior. Meanwhile Heisenberg, dismissing visual pictures of quantum processes, had developed a matrix formulation that dealt with only the probabilities of transitions, what he called "measurables." It looked for a while as if we had two competing quantum theories, until Schrödinger and Dirac showed that they gave the same answers. But the stark contrast between the two approaches was highlighted by the ongoing disagreement in which Bohr and Heisenberg maintained that the transitions were events with no internal structure, and therefore there was nothing left to be understood, while Einstein and Schrödinger believed that the statistical formulation was only a stopgap and that a deeper understanding was both possible and urgently needed. This argument still rages.
Popular accounts of these ongoing arguments, unfortunately, usually focus on the 1930 Solvay Conference confrontation between Bohr and Einstein that centered on Einstein's clock paradox, an attempted refutation of the uncertainty principle [26]. Einstein is generally considered to have lost to Bohr because he was "stuck in classical thinking." However, as detailed in The Quantum Handshake [1], the uncertainty principle is simply a Fourier-algebra property of any system described by waves. Both parties to the Solvay argument lacked real clarity concerning how to handle the wave nature of matter. The real question that should have been at issue was: Is there a deeper structure to the quantum transition? Back in 1926 the field was faced with a choice: Schrödinger's wave function in 3-dimensional space, or Heisenberg and Born's matrices, in as many dimensions as you like.
The choice was put forth clearly by Hendrik Lorentz [27] in a letter to Schrödinger in May 1926: If I had to choose now between your wave mechanics and the matrix mechanics, I would give the preference to the former, because of its greater intuitive clarity, so long as one only has to deal with the three coordinates x, y, z. If, however, there are more degrees of freedom, then I cannot interpret the waves and vibrations physically, and I must therefore decide in favor of matrix mechanics. But your way of thinking has the advantage for this case too that it brings us closer to the real solution of the equations; the eigenvalue problem is the same in principle for a higher-dimensional q-space as it is for a three-dimensional space.
There is another point in addition where your methods seem to me to be superior. Experiment acquaints us with situations in which an atom persists in one of its stationary states for a certain time, and we often have to deal with quite definite transitions from one such state to another. Therefore we need to be able to represent these stationary states, every individual one of them, and to investigate them theoretically. Now a matrix is the summary of all possible transitions, and it cannot at all be analyzed into pieces. In your theory, on the other hand, in each of the states corresponding to the various eigenvalues, E plays its own role.
Thus the real choice was between the intuitive clarity of Schrödinger's wave function and the ability of Heisenberg and Born's matrix mechanics to handle more degrees of freedom. That ability was immediately put to the test when Heisenberg [28] worked out the energy levels of the helium atom, in which two electrons share the same orbital state and their correlations could not be captured by wave functions with only three degrees of freedom. That amazing success set the field on the path of eschewing Schrödinger's views and moving into multi-dimensional Hilbert space, a move further ossified by Dirac and von Neumann. Schrödinger's equation was demoted to a bare matrix equation, engendering none of the intuitive clarity, the ability to interpret the waves and vibrations physically, so treasured by Lorentz.
The matrix formulation of statistical QM, as now universally taught in physics classes, saves us the "tedious" process of analyzing the details of the transaction process. That's the good news. The bad news is that it actively prevents us from learning anything about the transaction process even if we want to! Meanwhile, it was left to the more practical-minded electrical engineers and applied scientists to resurrect, each in their own way, Schrödinger's way of thinking. Electrons in conductors were paired into standing waves, which could carry current when the propagation vector of one partner was increased and that of the other partner decreased. Energy gaps resulted from the interaction of the electron wave functions with the periodic crystal lattice. Those same electron wave functions can "tunnel" through an energy gap, where they decay exponentially with distance. The electromagnetic interaction of the collective wave functions in superconducting wires leads to a new formulation of the laws of electromagnetism without the need for Maxwell's equations [11]. The field of Quantum Optics was born. Conservation of momentum became the matching of the wavelengths of waves such that an interaction can proceed. When one such wave is the wave function of an electron in the conduction band and the other is the wave function of a hole in the valence band of a semiconductor, matching of the wavelengths of electron, hole, and photon leads to light emission near the band-gap energy.
When that emission intensity is sufficient, the radiation becomes coherent: a semiconductor laser. These insights, and many more like them, have made possible our modern electronic technology, which has transformed the entire world around us. Each of them requires that, as Lorentz put it, we "represent these stationary states, every individual one of them, and ... investigate them theoretically." Each of them also requires that we analyze the transaction involved very much the way we have done in this paper.
What we have presented 17 is a detailed analysis of the most elementary form of quantum transition, indicating that the simplest properties of solutions of Schrödinger's equation for single-electron atomic states, the conservation of energy, and the known laws of electromagnetic propagation, together with Feynman's insight that all paths should be counted, give a unique form to the photon transaction between two atoms. This calculation is, of course, not a general proof that the offer/confirmation exchange always triggers the formation of a transaction in every system. It does, however, demonstrate that behavior in a tractable case, and it constitutes a prototype of more general transaction behavior. It further demonstrates that the transaction model is implicit in and consistent with Schrödinger's wave mechanics formalism, and it shows how the transaction, as a space-time standing wave connecting emitter to absorber, can form.
We see that the missing ingredients in previous failed attempts by others to derive wave function collapse from the standard quantum formalism were:
1. Advanced waves were not explicitly used as part of the process.
2. The "focusing" property of the advanced-retarded radiation pattern had not been identified.
3. The increase in dipole moment as the transition progresses was not recognized.
Although many complications are avoided by the simplicity of this two-atom system, it clearly illustrates that there is internal structure to quantum transitions and that this structure is amenable to physical understanding. Through the Transactional Interpretation, the standard quantum formalism is seen as an ingenious shorthand for arriving at probabilities without wading through the underlying details that Schrödinger described as "tedious." Although the internal mechanism detailed above is of the simplest form, it describes the most mysterious behaviors of quantum systems coupled at a distance, as detailed in [1]. All of these behaviors can be exhibited by single-electron quantum systems coupled electromagnetically. The only thing "mysterious" about our development is our unorthodox use of the advanced-retarded electromagnetic solution to conserve energy and speed up the transition. Therefore, we have learned some interesting things by analyzing this simple transaction! This experience brings us face to face with the obvious question: What if Einstein was right? If there is internal structure to this simple quantum transition, there must also be internal structure to the more complex transitions involving more than one electron, in which case we should find a way to look for it. That would require that we time-travel back to 1926 and grok the questions those incredibly talented scientists were grappling with at the time.
To face into the conceptual details of a transaction involving a multi-electron system is a daunting task that has defeated every attempt thus far. We strongly suspect that the success achieved by the matrix approach, adding three more space dimensions and one spin dimension for each additional electron, came at the cost of being "lost in multi-dimensional Hilbert space." Heisenberg's triumph with the helium atom led into a rather short tunnel that narrowed rapidly in the second row of the periodic table.
Quantum chemists work with complex quantum systems that share many electrons in close proximity, and thus must represent many overlapping degrees of freedom. Their primary goal is to find the ground state of such systems. Lorentz's hope, that the intuitive insights of Schrödinger's wave function in 3 dimensions would bring us closer to the real solution in systems with more than one electron, actually helped in the early days of quantum chemistry: Linus Pauling visualized chemical bonds that way, and made a lot of progress with that approach. It is quite clear that the covalent bond has a wave function in three dimensions, even if we don't yet have a fully "quantum" way of handling it in three dimensions. The Hohenberg-Kohn theorems [29] demonstrate that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. So the chemists have a 3-dimensional wave function for many electrons! They use various approaches to minimize the total energy, which then gives the best estimate of the true ground state.
These approaches have evolved into Density Functional Theory (DFT) and are responsible for amazingly successful analyses of an enormous range of complex chemical problems. The original Thomas-Fermi-Hohenberg-Kohn idea was to recast the Schrödinger equation purely in terms of the 3d density. The practical implementations do not come close to that original motivation, because half-integer spin, Pauli exclusion, and 3N dimensions are still hiding there. Also, the hybrid density functionals that are good for complex chemical calculations are non-local in the worst possible way (so-called exact-exchange operators), obscuring the concept of a 3d density. DFT, as it stands today, is a practical tool for generating numbers rather than a fundamental way of thinking. Although it seems unlikely at present that a more intuitive view of the multi-electron wave function will emerge from DFT, the right discovery of how to adapt 3d thinking to the properties of electron pairs could be a major first step in that direction.
When we look at the simplest two-electron problems, we see that our present understanding uses totally ad hoc rules to eliminate configurations that are otherwise sensible. The most outrageous of these is the Pauli Exclusion Principle, most commonly stated as: 18 Two electrons can only occupy the same orbital state if their spins are anti-parallel. It is the reason we have the periodic table, chemical compounds, solid materials, and electrical conductors. It is just a rule, with no underlying physical understanding, and we have only mathematics to cover our ignorance of why it is true physically. 19 There is no shame in this; John Archibald Wheeler said it well: We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance.
The founders made an amazing step forward in 1926. Any forward step in science always opens new questions that we could not express previously. But we need to make it absolutely clear what it is that we do not yet understand:
• We do not yet understand the mechanism that gives the 3d wave function its "identity," which causes it to be normalized.
• We do not yet have a physical picture of how the electron's wave function can be endowed with half-integer "spin," why it requires a full 720° (twice around) rotation to bring the electron's wave function back to the same state, why both matter and antimatter states exist, and why the two have opposite parity.
• We do not yet have a physical understanding of how two electron wave functions interact to enforce Pauli's Exclusion Principle.
For a single energy eigenstate, the charge density is not a function of time, so none of the other properties of the wave function change with time. The individual eigenstates are thus stationary states. The lowest-energy bound eigenstate for a given form of potential minimum is called its ground state, shown on the left in Fig. 3. The corresponding charge densities are shown in Fig. 4.

Figure 2. Angular dependence of the spatial wave function amplitudes for the lowest (100, left) and next higher (210, right) states of the hydrogen atom, plotted as unit radius in spherical coordinates from Eq. 2. The 100 wave function has spherical symmetry: positive in all directions. The 210 wave function is antisymmetric along the z axis, as indicated by the color change. In practice the direction of the z axis will be established by an external electromagnetic field, as we shall analyze shortly.

Figure 4. Contribution of x-y "slices" at position z of the wave-function density Ψ*Ψ to the total charge or mass of the 100 and 210 states of the hydrogen atom. Both curves integrate to 1.

Figure 5. Left: Plot of the three terms in the wave-function density in Eq. 11 for equal amplitudes a = b = 1/√2

Figure 6. Squared state amplitudes for atom α: bα² (blue) and aα² = bβ² (red) for the photon transfer of energy E₀ = ħω₀ from atom α to atom β, from Eq. 50. Using the lower-state energy as the zero reference, E₀b² is the energy of the state. The green curve shows the normalized power radiated by atom α and absorbed by atom β, from Eq. 49. The optical oscillations at ω₀ are not shown, as they are normally many orders of magnitude faster than the transfer time scale 1/g = τ. In the next section we will find that atoms spaced by an arbitrary distance exhibit transactions of exactly the same form.

Figure 7. All the paths from coherent light source S to detector P are involved in the transfer of energy. The solid curve on the "TIME" plot shows the propagation time, and hence the accumulated phase, of the corresponding path. Each small arrow on the "TIME" plot is a phasor that shows the magnitude (length) and phase (angle) of the contribution of that path to the resultant total potential at P. The "sea horse" on the far right shows how these contributions are added to form the total amplitude and phase of the resultant potential. (From Feynman, QED, Fig. 35.)

Figure 8. Situation identical to Fig. 7, but with a piece of glass added. The shape of the glass is such that all paths from the source S reach P in phase. The result is an enormous increase in the amplitude reaching P. (From Feynman, QED, Fig. 36.)

Figure 9. Normalized vector potential along the x axis (in wavelength/2π) between two atoms in the "quantum handshake" of Eq. 65. The wave propagates smoothly from atom α (left) to atom β (right). Animation here [32].