Stochastic Interpretation of Quantum Mechanics Assuming That Vacuum Fields Are Real

We characterize the electromagnetic vacuum as a stochastic field. Some consequences, like the particle behaviour of light, are studied. The stochastic approach is connected with the standard Hilbert space formalism via the Weyl transform. Several experiments involving spontaneous parametric down conversion are studied, comparing the Hilbert space and Weyl-Wigner formalisms. This allows an intuitive picture of entanglement to be obtained as a correlation between field fluctuations in distant places, involving the vacuum fields. The analysis shows that the Bell definition of local realism is not general enough, whence the reported violation of Bell inequalities does not refute local realism.


Introduction
Almost one century after the discovery of quantum mechanics, we still lack any consensus about what one is actually talking about when one uses it. "There is a gap between the abstract terms in which the theory is couched and the phenomena the theory enables each of us to account for so well. Because it has no practical consequences for how we each use quantum mechanics to deal with physical problems, this cognitive dissonance has managed to coexist with the quantum theory from the very beginning" [1].
The discrepancy about the correct approach to the theory appeared very early, two extremes corresponding to the creators of 'wave mechanics' (de Broglie, Schrödinger) and those of 'quantum mechanics' (Heisenberg, Bohr, Pauli). People in the former group attempted to get a picture of the microworld, without real success. Those in the latter group supported the view that a picture of reality is not needed. The absence of a satisfactory picture, in spite of a big effort by some people, combined with the mathematical elegance of the (Hilbert space) formalism of quantum mechanics plus its spectacular success in the quantitative predictions of empirical evidence, led the mainstream of the community to support the Heisenberg-Bohr view (the Copenhagen interpretation). In recent times, alternative interpretations have been proposed, like 'many worlds', bizarre in my opinion, or explanations for the lack of consensus, like QBism. The latter rests on the belief that "the absence of conceptual clarity for almost a century suggests that the problem might lie in some implicit misconceptions about the nature of scientific explanation" [1]. In Section 2 of this article I will provide arguments for both the possibility and the usefulness of a realistic interpretation of quantum mechanics. See also [2].
I believe that in order to achieve a realistic interpretation, we must assume that quantum vacuum fields are real. On the other hand, a plausible explanation for the stability of atoms, without departing from Maxwell theory, is the existence of a background radiation filling space. The conjunction of the two facts suggests the identification of the background radiation with the quantum vacuum electromagnetic field, taken as a stochastic field. Indeed, relativistic invariance leads to the spectrum of the possible radiation, modulo a unique parameter fixing the scale. If we identify that parameter with the Planck constant, there is agreement between the properties of the assumed background radiation and the quantum vacuum field. Section 3 of the article provides a more detailed exposition of this idea.
In Section 4, we study the characterization of the vacuum electromagnetic radiation as a stochastic field. Section 5 deals with the particle properties of light, explained as due to the action of the stochastic vacuum fields. The quantitative connection of the stochastic approach with the standard quantum Hilbert space formalism is made via the Weyl transform, studied in Section 6. Section 7 shows that entanglement and the violation of Bell inequalities may be understood as effects of the vacuum fields. Finally, in Section 8, I offer several ideas toward a more complete realistic interpretation of quantum theory.

Pragmatic Approach
None of the interpretations of quantum mechanics proposed till now [3] offers a clear intuitive picture of the quantum world. Nevertheless, most physicists do not worry about the lack of a picture and embrace a pragmatic approach close to the early proposal of Bohr and Heisenberg, usually known as the Copenhagen interpretation [4].
Behind the pragmatic approach there is usually a philosophical position about physics (or science in general) that may be summarized as follows. It is taken for granted that a physical theory has at least two components [5]: (1) the formalism, or mathematical apparatus, of the theory, and (2) the correspondence rules that establish a link between the formalism and the results of observations or measurements. As an example, let us consider the formalism of quantum mechanics based on the mathematical theory of Hilbert spaces. The formalism involves two kinds of operators: density operators, ρ, that represent states, and self-adjoint operators, Â, that represent observables. The link with the measurement results is given by the postulate that the expectation value, Tr(ρÂ), corresponds to the statistical mean of the values obtained when one realizes several measurements on identically prepared systems (which determines ρ) by means of an appropriate apparatus (that corresponds to Â).
If we assume that the formalism and the correspondence rules are the only objects required to define a physical theory, in the sense that the statistical regularities need not be further explained, then we get what has been called a minimal instrumentalistic interpretation of the theory [4,6]. It may be identified with the purely pragmatic approach mentioned above.
Most people claiming to support that approach accept the following positions:
1. The notion of an individual physical system 'having' or 'possessing' values for all its physical quantities is inappropriate in the context of quantum theory.
2. The concept of 'measurement' is fundamental, in the sense that the scope of quantum theory is intrinsically restricted to predicting the results of measurements.
3. The spread in the results of measurements on identically prepared systems must not be interpreted as reflecting a 'lack of knowledge' of some objectively existing state of affairs.
The instrumentalistic approach is quite different from, or even opposite to, the realistic view traditional in classical physics. Between these two extremes there are a variety of approaches.

Realistic Interpretation
The main opponent to the purely pragmatic approach to quantum mechanics was Albert Einstein. Indeed, his discussions with Niels Bohr are the paradigm of a scientific debate, hard in the scientific arguments but hearty from the personal point of view. One of the most celebrated moments of the debate was a 1935 article by Einstein, Podolsky and Rosen [7] (EPR). It begins as follows: "Any serious consideration of a physical theory must take into account the distinction between the objective reality, which is independent of any theory, and the physical concepts with which the theory operates. These concepts are intended to correspond with the objective reality, and by means of these concepts we picture this reality to ourselves" (my emphasis).
I strongly support Einstein's view, that is, I believe that a realistic interpretation is possible. The main point is the claim that any physical theory should offer a physical model in addition to the formalism and the rules for the connection with experiments. The latter are obviously essential because they are required for the comparison of the theory with empirical evidence, which is the test of the validity of the theory. In my opinion, physical models are also necessary in order to reach a coherent picture of the world. Many quantum physicists apparently support the uselessness of pictures, but it is the case that when they attempt popular explanations of quantum phenomena they frequently propose actual pictures, many of them rather bizarre. For instance, it has been claimed that quantum mechanics compels us to believe that there is a multiplicity of 'me' in parallel universes (the many worlds interpretation), or that an atom may be present in two distant places at the same time. This is an indication that the need to "picture this reality to ourselves" [7] cannot be easily dismissed. Furthermore, the existence of physical models might open the possibility for new developments and applications of quantum theory and, therefore, it is not a purely academic question.
An illuminating confrontation between the pragmatic and realistic epistemologies is the conversation of Heisenberg with Einstein that took place in Berlin in 1926, as remembered by Heisenberg himself [8]. The most relevant part is reproduced in the following: "Einstein opened the conversation with a question that bore on the philosophical background of my recent work. 'What you have told us sounds extremely strange. You assume the existence of electrons inside the atom, and you are probably quite right to do so. But you refuse to consider their orbits, even though we can observe electron tracks in a cloud chamber. I should very much like to hear more about your reasons for making such strange assumptions'. 'We cannot observe electron orbits inside the atom', I must have replied, 'but the radiation which an atom emits during discharges enables us to deduce the frequencies and corresponding amplitudes of its electrons. After all, even in the older physics wave numbers and amplitudes could be considered substitutes for electron orbits. Now, since a good theory must be based on directly observable magnitudes, I thought it more fitting to restrict myself to these, treating them, as it were, as representatives of the electron orbits.' 'But you don't seriously believe', Einstein protested, 'that none but observable magnitudes must go into a physical theory?'.
The conversation continued for a while and at the end Einstein warned: "You are moving on very thin ice. For you are suddenly speaking of what we know about nature and no longer about what nature really does. In science we ought to be concerned solely with what nature does" (my emphasis). Einstein's arguments are a clear support of a realistic epistemology, and I fully agree with his views about the foundations of quantum physics.
I propose that the difficulties for a realistic interpretation of quantum phenomena do not derive, or at least not only, from the empirical facts. Nevertheless, most textbooks of quantum mechanics emphasize the difficulty, or impossibility, of interpreting typical quantum phenomena with a realistic view. The purpose of this article is to show that, in fact, those phenomena are compatible with a picture of the microworld. Of course, the picture is somewhat different from the one offered by classical physics, but not dramatically different.

Vacuum Fields, the Clue for a Realistic Interpretation
The belief that the vacuum is not empty has been supported by many people from long ago. It goes back, at least, to the idea of the ether in the 19th century, which apparently was excluded by relativity theory. However, it reappeared with the development of quantum theory. Thus, for instance, de Broglie's theory or the hydrodynamical interpretation of the Schrödinger equation by Madelung suggests that the vacuum is not empty. This has led to many attempts to derive quantum theory, or at least the Schrödinger equation, from the existence of a subquantum fluid [2]. For instance, in a recent attempt by Sbitnev [9], the Schrödinger equation is deduced from two equations: continuity and Navier-Stokes. In the latter, the pressure gradient is slightly modified, the extra term describing a change of the pressure induced by a change of entropy.
In this article, I also assume that the vacuum contains several stochastic fields, which precisely correspond to the quantum vacuum fields. Here, I will study only one of them, namely the vacuum electromagnetic radiation. In the following, I show that the stability of matter compels us to accept, or at least strongly suggests, the existence of stochastic fields which may be identified with the quantum vacuum. We start by discussing two relevant predictions of the vacuum electromagnetic field: the energy and size of atoms, and the Casimir effect.

The Stability of Atoms Rests on Vacuum Radiation
Soon after the Rutherford experiment of 1911 that led to the nuclear atom, Bohr proposed in 1913 a model that involved postulates contradicting classical electrodynamics. The common wisdom was, and still is, that the contradiction cannot be avoided, and that it appears even for the most basic empirical fact, the stability of atoms. However, this claim is flawed [10].
Indeed, if studied within classical electrodynamics, a hydrogen atom, consisting of one proton and one electron, cannot be stable if isolated. The reason is that an electron moving around the proton would radiate, and therefore the atom would lose energy until it collapses. However, the argument is not valid if there are many atoms in the universe, because if all atoms radiate, the hypothesis of isolation is not appropriate. It is more plausible to assume that there is some amount of radiation filling space. Then, every atom would sometimes radiate but at other times it would absorb energy from the radiation, eventually arriving at a dynamical equilibrium. This may explain, at least qualitatively, the stability of the atom. The moral is that the matter and radiation of the universe cannot be treated independently, and the complexity of the universe compels us to treat the radiation as a background stochastic field. The electron of a hydrogen atom would then move in a random way around the nucleus. I propose that the probability distribution of electron positions is what the Schrödinger wavefunction provides via Born's rule.

Spectrum of the Vacuum Radiation
It is plausible that the statistical properties of the background radiation that we have assumed are homogeneous, isotropic and Lorentz invariant. The most relevant statistical property is the spectrum, S(ω), defined as the radiation energy per unit volume and unit frequency interval. It is the case that a spectrum compatible with said constraints must be proportional to the cube of the frequency, that is,

$$S(\omega) = \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}, \qquad (1)$$

where ℏ is a constant that we should determine in order to fit empirical results. Thus, we will identify ℏ with the (reduced) Planck constant. A standard method to study the radiation field in free space is to expand it in plane waves (or in normal modes if it is enclosed in a cavity). In free space, the number of modes, N, per unit volume and unit frequency interval is

$$N(\omega) = \frac{\omega^{2}}{\pi^{2}c^{3}}. \qquad (2)$$

Taking Equation (1) into account, the vacuum radiation field is equivalent to a mean energy per normal mode of the radiation

$$\frac{S(\omega)}{N(\omega)} = \frac{1}{2}\hbar\omega. \qquad (3)$$

Equation (3) is just 1/2 the "quantum" of energy introduced by Planck in his pioneering derivation of the radiation law that gave birth to quantum theory. In the following, I will derive some consequences of the existence of vacuum radiation with the spectrum of Equation (1).
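The mode density quoted above can be checked by direct counting. The following sketch (my illustration, with arbitrary cavity parameters, not part of the paper) enumerates the standing-wave modes of a cubic cavity and compares their number below a given frequency with the continuum formula obtained by integrating the standard mode density N(ω) = ω²/(π²c³):

```python
import numpy as np

# Illustrative sketch: count the standing-wave modes of a cubic cavity of side
# L with frequency below omega_max, and compare with the continuum result
# N_cum = V * omega_max^3 / (3 * pi^2 * c^3), whose derivative per unit volume
# is the mode density N(omega) = omega^2 / (pi^2 c^3).
c = 1.0           # speed of light (arbitrary units)
L = 30.0          # cavity side; the continuum limit improves as L grows
omega_max = 10.0

n_max = int(omega_max * L / (np.pi * c)) + 1
n = np.arange(1, n_max + 1)                 # positive integers label the modes
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
k = (np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)
counted = 2 * np.count_nonzero(k * c <= omega_max)   # 2 polarizations per mode
predicted = L**3 * omega_max**3 / (3 * np.pi**2 * c**3)
print(counted / predicted)   # close to 1 (surface corrections, a few percent)
```

The small discrepancy comes from modes near the cavity walls in k-space and shrinks as L grows.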

The Energy and Size of the Hydrogen Atom
Via a heuristic approach, it is possible to derive the typical sizes and energies of quantum systems governed by electromagnetic interactions. Let us consider the example of a hydrogen atom consisting of two particles, proton and electron, each characterized by its mass and electric charge. The proton mass being much larger than the electron mass, we may study the atom assuming that the proton is at rest and that the motion of the electron is such that the atom is in dynamical equilibrium with the radiation. In our study of the electron motion, it is plausible that the main interaction with the vacuum radiation takes place via those normal modes of the field that have frequencies close to those of the electron motion. Additionally, the mean kinetic energy of the electron should be close to half the average energy of those normal modes which have the greatest interaction with the atom. As the potential energy is twice the kinetic energy with the sign changed, in view of the virial theorem, the total energy should be the negative of the kinetic energy. Then, if the electron moved around the nucleus in a circle having energy E (i.e., with balanced emission and absorption of radiation), we might write the following equalities (in Gaussian units)

$$E = \frac{1}{2}mv^{2} - \frac{e^{2}}{r} = -\frac{1}{2}mv^{2}, \qquad \frac{mv^{2}}{r} = \frac{e^{2}}{r^{2}}, \qquad v = \omega r, \qquad \frac{1}{2}mv^{2} \simeq \frac{1}{2}\hbar\omega, \qquad (4)$$

the latter corresponding to the condition of dynamical equilibrium with radiation. Of course, the motion is perturbed by the action of the vacuum fields, whence the electron motion would be very irregular, not circular, but it is plausible that Equation (4) might be roughly fulfilled on the average. Hence the energy and the size of the atom may be obtained by eliminating the quantities v and ω from Equation (4), which leads to

$$E \simeq -\frac{me^{4}}{2\hbar^{2}}, \qquad r \simeq \frac{\hbar^{2}}{me^{2}}, \qquad (5)$$

in rough agreement with the quantum prediction and with experiments.
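The elimination of v and ω from the balance conditions of Equation (4) can be evaluated numerically. The following sketch (my illustration, not part of the paper; SI units, with the Coulomb constant playing the role of the Gaussian-units e²) recovers the familiar Bohr radius and ground-state energy:

```python
# Illustrative sketch: eliminate v and omega from the equilibrium conditions
# (1/2) m v^2 = (1/2) hbar omega, omega = v/r, m v^2 / r = k_e e^2 / r^2.
# Combining them gives m v r = hbar, hence v = k_e e^2 / hbar.
hbar = 1.054571817e-34   # J s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
k_e  = 8.9875517923e9    # Coulomb constant 1/(4 pi eps0), N m^2 C^-2

v = k_e * e**2 / hbar    # orbital speed of the electron
r = hbar / (m_e * v)     # orbit radius: the Bohr radius, ~5.29e-11 m
E = -0.5 * m_e * v**2    # total energy (virial theorem), ~ -13.6 eV
print(r, E / e)
```

The printed values reproduce the quantum-mechanical size and ground-state energy of hydrogen.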
In this example, we have used a heuristic approach; a rigorous stochastic treatment would be lengthier because it should also involve the vacuum electron-positron field and possibly other fields, something that will not be studied in the present article (however, it is remarkable that the standard quantum formalism allows a relatively simple treatment). It is the case that the study of the vacuum electromagnetic radiation field interacting, via the Maxwell-Lorentz laws, with electric charges or macroscopic bodies reproduces several quantum predictions. Indeed, extensive research on this line has been made, which is known as stochastic, or random, electrodynamics (SED). For reviews, see [2,11,12]. In fact, there are also cases where the SED predictions disagree with quantum mechanics (and experiments), a fact that we may attribute to the neglect of: (1) other vacuum fields, like the electron-positron one, and (2) the back action of the charges that would modify the vacuum radiation. A case where the SED treatment fully agrees with the quantum one is the Casimir effect, which we briefly revisit in the following.

Stationary States of Charged Particles: Bohr Atomic Model
The existence of vacuum radiation suggests a physical picture for the energy spectra of quantum systems, not only for their ground states. In particular for the hydrogen atom, as is shown in the following.
The motion of a charged particle, say an electron, under the action of the vacuum radiation would be very irregular due to the action of the strong high-frequency components of the field, see Equation (1). That is, the position may change dramatically in a short time interval, somewhat similar to Brownian motion. In contrast, some memory may be conserved for long times. The latter prediction may be illustrated by the fact that the mean velocity of a free particle does not change with time if only the force due to the vacuum radiation acts on it. Indeed, if the particle velocity at the initial time t = 0 is v, then at time T there will be a probability distribution of velocities with an average

$$\langle \mathbf{v}(T) \rangle = \mathbf{v} + \frac{1}{m}\int_{0}^{T} \langle \mathbf{F}(t) \rangle \, dt = \mathbf{v}, \qquad (6)$$

where ⟨·⟩ means the average. The average force ⟨F(t)⟩ due to the vacuum radiation is nil because that radiation is assumed isotropic, whence all directions of the force would be equally probable. In contrast to Equation (6), when the particle is not free, the evolution of the mean velocity is involved because the actions of a given applied force and F(t) are not independent.
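The persistence of the mean velocity can be illustrated with a small Monte Carlo simulation (my sketch, using a white-noise force as a crude stand-in for the vacuum field, not the paper's calculation):

```python
import numpy as np

# Monte Carlo sketch: a free particle kicked by a zero-mean isotropic random
# force (a crude white-noise stand-in for the vacuum radiation). The *mean*
# final velocity equals the initial one, while individual trajectories spread.
rng = np.random.default_rng(0)
n_traj, n_steps, dt, m = 5000, 400, 0.01, 1.0
v0 = 1.0
F = rng.normal(0.0, 1.0, size=(n_traj, n_steps))  # zero-mean random force
v = v0 + (F * dt / m).sum(axis=1)                 # final velocity per trajectory
print(v.mean())   # ~1.0 : the memory of v0 survives on average
print(v.std())    # ~0.2 : but each trajectory wanders
```

This is the content of Equation (6): the average over trajectories keeps the initial velocity, while any single trajectory does not.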
We conclude that the existence of a vacuum field implies that the instantaneous velocity is not well defined and cannot be measured. However, we might measure the mean velocity during a not-too-small time interval. Hence, a simultaneous measurement of position and velocity is not possible, which in quantum mechanics is quantitatively stated via the Heisenberg uncertainty relations. In spite of this, we may assume that a particle follows a path close to the classical one: although it suffers a strong shaking, large deviations from the classical orbit would be scarce. Thus, in the hydrogen atom, some orbits of the electron that would be periodic according to classical mechanics would be relatively stable. These orbits have constant angular momentum (they are ellipses or, in particular, circles). It is not strange that these were the orbits quantized in the Bohr-Sommerfeld model of the atom. The idea was the basis of the "old quantum theory", where quantization was made in terms of action-angle variables. Of course, the old quantum theory is known to be a semi-classical approximation to modern quantum mechanics.
In the following, I will give arguments that may provide a physical picture for the atomic Bohr model, which rests on two celebrated postulates. The second one is just the assumption, proposed earlier by Planck, that the absorption of radiation takes place in the form of "quanta" with energy

$$E = \hbar\omega, \qquad (7)$$

when the radiation involved has an angular frequency ω. A heuristic derivation of this relation will be seen in Section 5.2 below, as due to the interference between any given radiation and the vacuum field. In order to get a physical picture of the first Bohr postulate, we will study just circular orbits. We may assume that relatively stable orbits have discrete energies E_n, n being an integer and n = 1 corresponding to the ground state, Equation (5). In fact, the existence of a discrete set of (almost) stable orbits cannot be easily derived from our assumption of a real vacuum radiation, but if we accept this assumption, the full spectrum of energies of the atom may be derived as follows. We shall study transitions between two close orbits taking Equation (7) into account, that is

$$E_{n+1} - E_{n} = \hbar\,\omega_{n+1\to n}, \qquad (8)$$

where $\omega_{n+1\to n}$ is the frequency of the absorbed or emitted radiation. Now it is plausible to assume that the radiation frequency is related to the rotation frequency of the electron, whence we may write

$$\omega_{n+1\to n} \simeq \frac{1}{2}\left(\omega_{n} + \omega_{n+1}\right), \qquad (9)$$

where $\omega_n$, $\omega_{n+1}$ are the rotation frequencies in the atomic states n and n+1, respectively. The latter may be related to the energies via classical electrodynamics (see the first three equalities in Equation (4)), that is $\omega_n = (2|E_n|)^{3/2}/(e^{2}\sqrt{m})$, whence

$$E_{n+1} - E_{n} = \frac{\hbar}{2 e^{2}\sqrt{m}}\left[\left(2|E_{n}|\right)^{3/2} + \left(2|E_{n+1}|\right)^{3/2}\right], \qquad (10)$$

which provides the set of energies for the stable states in the Bohr model. An approximate solution of Equation (10) may be easily obtained when n >> 1.
In fact, we may take the variable n as continuous and substitute the following differential equation for Equation (10), that is

$$\frac{dE}{dn} = \hbar\,\omega(E) = \frac{\hbar\left(2|E|\right)^{3/2}}{e^{2}\sqrt{m}}. \qquad (11)$$

The solution of the differential equation is

$$E_{n} = -\frac{me^{4}}{2\hbar^{2}n^{2}}, \qquad (12)$$

where the integration constant is fixed so that the ground state energy E_1 agrees with Equation (5). It may be seen that Equation (12) is also an approximate solution of Equation (10), and it is equivalent to Bohr's first postulate, which states that circular orbits with angular momentum L are stable if L = nℏ.
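One can verify numerically (my sketch, in atomic units ℏ = m = e = 1, not the paper's calculation) that the Bohr energies are an approximate solution of the recursion in Equation (10), with the agreement improving for large n as claimed:

```python
# Illustrative sketch (atomic units hbar = m = e = 1): check that the Bohr
# energies E_n = -1/(2 n^2), Equation (12), approximately satisfy the recursion
#   E_{n+1} - E_n = (1/2) [omega(E_n) + omega(E_{n+1})],
# where omega(E) = (2|E|)^{3/2} is the classical rotation frequency of the orbit.
def E(n):
    return -1.0 / (2.0 * n * n)

def omega(energy):
    return (2.0 * abs(energy)) ** 1.5

def rel_residual(n):
    lhs = E(n + 1) - E(n)
    rhs = 0.5 * (omega(E(n)) + omega(E(n + 1)))
    return abs(lhs - rhs) / abs(lhs)

print(rel_residual(2))    # ~0.17 : noticeable for small n
print(rel_residual(50))   # ~4e-4 : excellent for n >> 1
```

The residual falls rapidly with n, in line with Bohr's correspondence principle.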
Actually, Equations (9)-(11) are not new, but similar equations have been used in the past. Indeed, the fact that the frequency of emitted or absorbed radiation agrees with the rotational frequency of the electron for orbits with large n is well known as a typical example of Bohr's correspondence principle. More recently, equations similar to our Equation (10) have been used in order to derive the spectrum of the hydrogen atom, for instance in an article by Oks and Uzer [13] which we comment on below.
Whether our approach is classical may be controversial; certainly the assumption of a vacuum radiation fulfilling Equation (1) is alien to classical physics, but other similar derivations cannot be labeled classical either. For instance, in the paper quoted above [13], the authors used Dirac's generalized dynamics formalism (DGD), where constraints may be included in the Hamiltonian. DGD is a classical mechanics approach, but it is not classical electrodynamics because the coupling of charged particles with the radiation field is not included. In the application to the hydrogen atom, the quoted authors take into account the electrostatic force between electron and nucleus, but not the full interaction, which should include the radiation emitted by any charge according to Maxwell theory. Then, after a number of assumptions without any clear justification, amongst them the obviously non-classical Planck hypothesis, Equation (7), they arrive at a set of static solutions, i.e., fulfilling dr/dt = 0, so that r(t) = r_0.
Static solutions are not only counterintuitive, but they also violate Earnshaw's theorem, which states that, according to Maxwell electrodynamics, no stable state exists for any system of charged particles at rest. In any case, the main purpose of our approach in this paper is to get physical pictures of quantum phenomena, not to make classical-like derivations.
The conclusion of this section is that classical electrodynamics combined with the assumption of a (vacuum) radiation field filling space suggests an intuitive picture for the quantization of the hydrogen atom that might be extended to other quantum systems.

The Casimir Effect
The Casimir effect consists of the attraction between two parallel perfectly conducting plates in vacuum. The force F per unit area depends on the distance l between the plates as

$$\frac{F}{A} = \frac{\pi^{2}\hbar c}{240\, l^{4}}, \qquad (15)$$

a force confirmed empirically [14]. The reason for the attraction may be understood qualitatively as follows. In equilibrium, the electric field of the vacuum radiation (that we will label zeropoint field, ZPF) should be nil on any plate surface; otherwise an electric current would be produced. This fact constrains the possible normal modes of the radiation, mainly those having wavelengths λ ≳ l, but the distribution of high-frequency (short wavelength) modes would be barely modified by the presence of the plates. If we assume that an effective cut-off exists for λ ≥ Kl, then the decrease in energy of the ZPF in the space between the plates becomes

$$\frac{\Delta E}{A} \simeq \frac{2\pi^{2}\hbar c}{K^{4}\, l^{3}},$$

where A is the area of the plates and Equation (1) has been taken into account. The derivative of ΔE/A with respect to the distance l agrees with Equation (15) if K ≈ 6. The rigorous SED derivation is similar to the quantum-mechanical calculation, just substituting stochastic averages for quantum vacuum expectations. It consists of determining the normal modes of the radiation when the plates are at a distance l and then attributing a mean energy ½ℏω to every mode. The energy diverges if we sum over all radiation modes, but the force per unit area is finite and it reproduces Equation (15). A regularization procedure is required in order to get the result [14]. The physical picture of the phenomenon is that the radiation pressures on the two faces of each plate are different, and this is the reason for a net force on the plate. The Casimir effect is currently considered the strongest argument for the reality of the quantum vacuum fields.
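A quick numerical check (my illustration, not part of the paper) shows that the heuristic cutoff estimate ΔE/A ≈ 2π²ℏc/(K⁴l³) reproduces the Casimir pressure F/A = π²ℏc/(240 l⁴) when K = (6·240)^{1/4} ≈ 6.2, the "K ≈ 6" of the text:

```python
import math

# Illustrative sketch: differentiating the cutoff estimate of the ZPF energy
# removed from between the plates, Delta_E/A = 2 pi^2 hbar c / (K^4 l^3),
# yields the Casimir pressure F/A = pi^2 hbar c / (240 l^4) when K^4 = 6*240.
hbar, c = 1.054571817e-34, 2.99792458e8   # SI units

def casimir_pressure(l):
    return math.pi**2 * hbar * c / (240.0 * l**4)

def dE_per_area(l, K):
    return 2.0 * math.pi**2 * hbar * c / (K**4 * l**3)

K = (6.0 * 240.0) ** 0.25          # ~6.16
l = 1e-6                           # plate separation: 1 micron
h = 1e-12                          # step for the numerical derivative
force = abs(dE_per_area(l + h, K) - dE_per_area(l - h, K)) / (2 * h)
print(casimir_pressure(l))          # ~1.3e-3 N/m^2 at 1 micron
print(force / casimir_pressure(l))  # ~1
```

The tiny magnitude at micron separations explains why precise confirmation of the effect required delicate experiments.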
For us, it is especially relevant because it provides an example of the fact that what matters is the difference between the radiation arriving at the two faces of a plate, rather than the total radiation acting on one side. A similar behaviour will be assumed for photocounters in Section 7.4.

The Probability Distribution of the Field Amplitudes
The spectrum, Equation (1), does not characterize the background field completely; it is also necessary to know the relevant probability distributions. A standard method to determine the properties of radiation is to expand the field in normal modes, which may be defined in a fixed normalization volume V, usually a cube with side L. Eventually, we should go to the limit V → ∞. A usual expansion for the electromagnetic field starts with the vector potential, A(r,t), that may be written in the Coulomb gauge, with rationalized units, as follows

$$\mathbf{A}(\mathbf{r},t) = \sum_{j} c\sqrt{\frac{\hbar}{2\omega_{j}V}}\; \boldsymbol{\varepsilon}_{j} \left[ a_{j}\, e^{i(\mathbf{k}_{j}\cdot\mathbf{r} - \omega_{j}t)} + a_{j}^{*}\, e^{-i(\mathbf{k}_{j}\cdot\mathbf{r} - \omega_{j}t)} \right], \qquad (16)$$

where k_j is the wavevector, ω_j the frequency and ε_j the (linear) polarization vector of the normal mode labeled by j. The dimensionless complex quantities a_j and a_j* are named the amplitudes of the mode. Hence the electric, E, and magnetic, B, fields of the radiation may be obtained via

$$\mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A}. \qquad (17)$$

The radiation energy in the volume V becomes

$$H = \frac{1}{2}\int_{V} \left( \mathbf{E}^{2} + \mathbf{B}^{2} \right) dV = \sum_{j} \hbar\omega_{j}\, |a_{j}|^{2}, \qquad (18)$$

whose associated mean energy fits in Equation (3), provided that the expectation value for every mode fulfills

$$\langle |a_{j}|^{2} \rangle = \frac{1}{2}. \qquad (19)$$

Of course, the number of normal modes diverges, so that some regularization would be required for high frequencies, a problem also known in quantum field theory which we will comment on in Section 6.2.
The stochastic field is fully characterized by the joint probability distribution of the mode amplitudes, ρ({a_j, a_j*}), a distribution that should be compatible with Equations (19) and (1). These equations suggest that different modes are statistically independent, whence we should write ρ({a_j, a_j*}) as a product of probabilities of the modes. Finally, it is plausible to assume that the probability of every mode amplitude is Gaussian with random phases, that is, the phase of a_j is distributed uniformly in (0, 2π). These constraints lead to the following normalized joint probability distribution

$$\rho\left(\{a_{j}, a_{j}^{*}\}\right) = \prod_{j} \frac{2}{\pi} \exp\left(-2|a_{j}|^{2}\right). \qquad (20)$$

Hence we may get the expectation of any function of the vacuum electric and magnetic fields which, taking Equations (16) and (17) into account, can be reduced to an integral of a function f({a_j, a_j*}) of the amplitudes weighted by the distribution Equation (20). That is

$$\langle f \rangle = \int f\left(\{a_{j}, a_{j}^{*}\}\right)\, \rho\left(\{a_{j}, a_{j}^{*}\}\right) \prod_{j} d^{2}a_{j}. \qquad (21)$$

As an example, let us consider a function of a single mode, f = a_j^n a_j^{*m}. The integral is different from zero only if m = n, due to the random phases, and we get

$$\left\langle a_{j}^{n}\, a_{j}^{*m} \right\rangle = \delta_{nm}\, \frac{n!}{2^{n}}, \qquad (22)$$

δ_nm being Kronecker's delta.
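These statistical properties can be checked by sampling (my illustration, not part of the paper): amplitudes drawn from the distribution of Equation (20) are complex Gaussians with mean square modulus 1/2 and uniform phase, and the mixed moments vanish unless the powers of a and a* are equal:

```python
import numpy as np

# Monte Carlo sketch: sample a mode amplitude from rho(a) = (2/pi) exp(-2|a|^2),
# i.e. real and imaginary parts i.i.d. Gaussian with variance 1/4, and check
# <|a|^2> = 1/2, <|a|^4> = 2!/2^2 = 1/2, and <a^2 a*> = 0 (random phases).
rng = np.random.default_rng(1)
N = 1_000_000
a = rng.normal(0.0, 0.5, N) + 1j * rng.normal(0.0, 0.5, N)
m11 = np.mean(np.abs(a) ** 2)           # ~0.5
m22 = np.mean(np.abs(a) ** 4)           # ~0.5
m21 = abs(np.mean(a**2 * np.conj(a)))   # ~0 : kills unequal-power moments
print(m11, m22, m21)
```

The vanishing of the unequal-power moments is precisely the effect of the uniformly distributed phases.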

Other States of the Field
Now let us study modifications of the vacuum radiation defined by the probability distribution Equation (20). The modified states would have a probability distribution different from the vacuum one, and they could correspond to what in quantum language are called excited states. The modification may consist of the addition of some radiation to the vacuum state. In particular, if the added field has a probability distribution of mode amplitudes ρ_1({a_j, a_j*}) uncorrelated with the vacuum distribution ρ({a_j, a_j*}), then the total radiation will have a probability distribution that is the convolution of the distributions ρ and ρ_1. In the following, we study a simple example in order to show that the addition of radiation to the vacuum may dramatically modify the field, because we should add the amplitudes rather than the intensities.
Let us consider a single vacuum mode with amplitude a_j and the addition of another sure (i.e., not random) amplitude b_j. The distribution of the total mode amplitude, c_j = a_j + b_j, will be

$$\rho_{1}\left(c_{j}\right) = \frac{2}{\pi} \exp\left(-2\left|c_{j} - b_{j}\right|^{2}\right). \qquad (23)$$

It may be realized that the expectation of the intensity in the mode increases by the intensity of the field added, that is

$$\left\langle |c_{j}|^{2} \right\rangle = \frac{1}{2} + |b_{j}|^{2}. \qquad (24)$$

The phases are also modified, i.e., they are no longer uniformly distributed. Another interesting example is the state produced by a change of the distribution of phases in one or several modes. For instance, we may replace the uniform phase distribution in one of the modes, thus giving a squeezed vacuum state of light. If, in addition to the phase change, the intensity is also increased, we would have a usual squeezed state. We will not study these possibilities further on.
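The effect of adding a sure amplitude to a vacuum mode can also be checked by sampling (my illustration, not part of the paper): the mean intensity becomes 1/2 + |b|², and the phase distribution is no longer uniform:

```python
import numpy as np

# Monte Carlo sketch: vacuum amplitude a (complex Gaussian, <|a|^2> = 1/2) plus
# a sure amplitude b. The mean intensity adds as <|a + b|^2> = 1/2 + |b|^2,
# and the phase of a + b concentrates around the phase of b.
rng = np.random.default_rng(2)
N = 1_000_000
a = rng.normal(0.0, 0.5, N) + 1j * rng.normal(0.0, 0.5, N)
b = 1.5 + 0.5j                      # sure (non-random) added amplitude
c = a + b
intensity = np.mean(np.abs(c) ** 2)
print(intensity)                    # ~0.5 + |b|^2 = 3.0
print(np.std(np.angle(c)))          # ~0.3, far below the uniform value ~1.8
```

Note that the intensities add (Equation (24) style) only because the vacuum phases average out; the amplitudes themselves are what is summed.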
I shall point out that not all density operators (or state vectors) that are assumed to represent states of the radiation in the quantum (Hilbert space) formalism may correspond to states in our stochastic interpretation. An obvious requirement is that the joint density function of the mode amplitudes be positive, in order that it may be interpreted as a probability distribution. I propose that quantum "states" not fulfilling this condition are not physical states. In particular, this is the case for the so-called photon number states, consisting of a fixed number of "photons". I stress again that in the approach supported in this article, photons are just mathematical objects useful for calculations, not physical states in general. I believe that the demand that all quantum states (e.g., those with an integer number of photons) be physical states makes it impossible to get an intuitive picture of the (quantum) radiation field.

The Particle Behaviour of Light
In this section, we shall show that the vacuum radiation, taken as a stochastic field, provides hints for a realistic interpretation of the particle behaviour of light.

What Is a Photon?
Maxwell theory establishes that light consists of electromagnetic waves. However, this view was allegedly superseded by the proposal that light also consists of particles, later named photons. The wave-particle behaviour is the main mystery of quantum mechanics, in the words of Feynman, and it prevents a clear understanding of the theory. In the following, I provide qualitative explanations for some examples of the particle behaviour of light within Maxwell theory. That behaviour may be understood as being due to the existence of the vacuum stochastic radiation studied in the previous section.
In the year 1900, Planck assumed that energy exchanges between matter and light take place in discrete amounts ("quanta") of energy related to the frequency by

E = ℏω. (25)

Five years later, Einstein went further, postulating that light itself consists of particles with energy ℏω, whence he derived the law of the photoelectric effect. In fact, the Planck assumption Equation (25) is sufficient to derive that law, without the stronger Einstein postulate. Assume that when monochromatic light with frequency ω arrives at an appropriate material, only one quantum ℏω may be absorbed at a time, a part E_0 being used to extract an electron and the rest supplying its kinetic energy E_k. Then we have

ℏω = E_0 + E_k,

which is the law of the photoelectric effect. The constraint that radiation may be absorbed only in amounts fulfilling Equation (25) is the first example of particle-like behaviour, one that we will explain qualitatively in Section 5.2.
In the celebrated 1916 article about the absorption and emission of radiation, Einstein arrived at the conclusion that the radiation emitted by an atom possesses well defined momentum; in his words, it appears in the form of radiation needles. These two claims by Einstein led to the popular belief that light consists of particles (photons), each with definite energy and momentum. Compton's experiments of 1923-1924 are commonly viewed as a confirmation of that belief. A semi-quantitative explanation of the radiation needles will be provided in Section 5.3.
More recently, experiments have been performed in optics that dramatically exhibit wave-particle behaviour of light. I will comment on them in Section 5.4.

Understanding Quanta: Discrete Energy Exchanges
Firstly, I point out that the absorption of light in the form of localized spots on a photographic plate, or of clicks in a photodetector, is not a valid argument for the particle behaviour of radiation. In fact, the former is caused by the granular (atomic or molecular) nature of photographic plates. The latter derives from the fact that photocounters are manufactured so that they click when the radiation arriving during a detection time surpasses some threshold, which is compatible with light being continuous (waves). For a model of a detector, see Section 7.4.
The absorption of light in discrete amounts, fulfilling Equation (25), may be understood as follows. Let us consider a light signal with wavevector k_0 that arrives at a material having weakly bound electrons. The vacuum radiation may be described in terms of plane waves as in Equation (16), and we are interested in those waves with wavevectors k near k_0. From time to time, it may happen that several of these waves have phases close to that of the incoming signal, whence they will interfere constructively, giving rise to an unusually large intensity during some time T. In this case, a transfer of energy to the material will be more probable; for example, an electron may be ejected. We may identify T with the coherence time of the radiation consisting of the signal plus the vacuum radiation able to interfere constructively with it. The question is how much energy E may be transferred.
The wavevectors k of the vacuum field effective for the transfer of energy should be close to k_0 in order that interference takes place. That is, |k − k_0| << |k_0| ≡ k_0, whence k may differ but slightly from k_0, either in modulus or direction or both. Thus we may write

|k − k_0| ∼ δω/c, |(k − k_0)_⊥| ∼ θ k_0,

where (k − k_0)_⊥ means the component of k − k_0 perpendicular to k_0. It is plausible to identify ω = k_0 c, θ ∼ δω/ω and T ∼ π/δω. We need the area A effective for absorption, which may be estimated as

A ∼ (θ L)²,

where L = Tc is the coherence length. The effective intensity (energy per unit area per unit time) should be

I ∼ S(ω) δω c,

where S(ω) is the spectrum (energy per unit volume and unit frequency interval). Thus we get for the absorbed energy

E ∼ I A T ∼ S(ω) δω c (θ L)² T.

S(ω) may be of order the vacuum field spectrum given in Equation (1), that is S(ω) ∼ ℏω³/(2π²c³), which leads to E ∼ ℏω, in rough agreement with Equation (25).
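The order-of-magnitude estimate above can be checked numerically. The sketch below combines the identifications θ ∼ δω/ω, T ∼ π/δω, L = Tc, A ∼ (θL)² with the vacuum spectrum S(ω) = ℏω³/(2π²c³); the numerical factors are my own reading of the estimate, so only the order of magnitude (E of order ℏω, independent of the chosen bandwidth δω) should be trusted.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s

def absorbed_energy(omega, domega):
    """Rough estimate E ~ I_eff * A * T for the energy absorbed in a fluctuation."""
    S = hbar * omega**3 / (2 * np.pi**2 * c**3)  # ZPF spectrum (energy/volume/frequency)
    T = np.pi / domega                           # coherence time
    L = T * c                                    # coherence length
    theta = domega / omega                       # angular spread of effective waves
    A = (theta * L) ** 2                         # effective absorption area
    I_eff = S * domega * c                       # effective intensity
    return I_eff * A * T

omega = 3.0e15                        # optical angular frequency (illustrative)
E_a = absorbed_energy(omega, 1.0e12)
E_b = absorbed_energy(omega, 1.0e13)  # the bandwidth cancels out of the estimate
ratio = E_a / (hbar * omega)          # of order unity (pi/2 with these factors)
```

The key point is that δω drops out, so the absorbed energy is fixed at order ℏω by the ZPF spectrum alone.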

Radiation Needles and the Compton Effect
An interpretation of the needle radiation that appears in the emission of light by atoms is as follows. In our stochastic interpretation, the emission is not spontaneous but induced by the vacuum field (or zeropoint field, ZPF). Then let us assume that in a fluctuation a strong plane wave of the ZPF with frequency ω arrives at an atom and it happens that ω is also one of the possible frequencies for emission from the excited atom. Then the arriving plane wave component of the ZPF may induce the emission of radiation with the same frequency and phase as the incoming wave. Thus, the emitted radiation should correspond to the addition of the amplitudes (not the intensities) of the incoming plane wave plus the emitted spherical wave. The frequencies being equal there would be interference and it is not difficult to show that it will be constructive in the forward direction and mainly destructive in all other directions.
More quantitatively, the outgoing energy will be concentrated within the region where the phase difference is small. The boundary of that region is roughly defined by the following relation between the half angle, θ, as seen from the atom and the distance, d, that is

θ² d ∼ λ,

where λ is the wavelength. If we take d to be the coherence length of the emitted "photon", for typical atomic emissions we have d ∼ 1 m, λ ∼ 1 µm, so that θ ∼ 10⁻³. This fits with Einstein's proposal of "needles of radiation" and, in addition, it explains the random character of the direction of emission. In our interpretation, the stochastic character of the ZPF is the cause of the randomness. This provides the picture of a localized photon as a concentration of radiation energy that nevertheless has a relatively well defined frequency. Furthermore, that frequency is plausibly related to the emitted energy by the Planck equation (25), taking the arguments of the previous section into account. In any case, the coherence time of the radiation needle cannot be larger than the lifetime of the atomic state.
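For the numbers quoted, the relation θ²d ∼ λ indeed gives a half angle of about 10⁻³ radians; a one-line check:

```python
import math

lam = 1e-6   # wavelength, ~1 micron (typical atomic emission)
d = 1.0      # coherence length of the emitted "photon", ~1 m
theta = math.sqrt(lam / d)   # half angle of the radiation needle, ~1e-3 rad
```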
We may apply that photon model to the case of an atomic cascade where two photons are emitted within a short time interval. Then the picture that emerges is the existence of two "needles of radiation" moving in different directions. In particular, if both the initial and the final state of the atom have zero spin and the photons are emitted in opposite directions, then the angular momenta of the two photons should be opposite by angular momentum conservation, whence they will be strongly correlated in polarization. The quantum formalism predicts that they will be maximally entangled, but I will not provide an interpretation of photon entanglement at this moment; see below, Section 7.5. The polarization correlation will diminish if the photons are emitted at an angle smaller than 180°, and this is why no Bell inequality may be violated in experiments using photon pairs from atomic cascades [15]. Several atomic cascade tests of the Bell inequalities were performed in the decade 1975-1985.
As another example, I propose a semi-quantitative model for the Compton effect. As is well known, Compton's was the experiment that the scientific community accepted as the final proof of the existence of photons. The experiment is usually understood as a collision between one X-ray photon of frequency ω_1 and one electron, giving rise to another photon with smaller frequency, ω_2, at an angle θ with the incident one, and a recoil electron. Indeed, the (relativistic) kinematics may be explained assuming that the incident and outgoing photons have energies ℏω_1 and ℏω_2, respectively, and the electron is initially at rest. Quantum electrodynamics gives a quantitative account of the phenomenon, including the cross section of the process, but it does not offer an intuitive picture. On the other hand, there have been several attempts at a semiclassical explanation that I will not revisit here.
A stochastic interpretation might be achieved if we substitute radiation needles for photons. A rough model is as follows. Let us consider an incoming monochromatic X-ray beam with frequency ω_1. By the arguments leading to Equation (27), we may assume that the beam contains radiation wavepackets with energy ℏω_1 and momentum ℏω_1/c. From time to time, a large fluctuation of the ZPF may cross the incoming X-ray beam at an angle θ in a region where there are weakly bound electrons. If the ZPF fluctuation has an appropriate frequency, it could interfere with the radiation of the X-ray beam, producing a concentration of energy in a direction at an angle θ_1 < θ, which may accelerate one electron in that direction. The electron will radiate with energy and momentum determined by the conservation laws.

The Wave-Particle Behaviour of Light in Optics
In quantum optics, the experiments may usually be interpreted in terms of light waves, the particle behaviour being apparent only in photodetection. Detectors will not be studied here in detail (see Section 7.4), but we may plausibly assume that the particle behaviour of light in detection is usually related to the corpuscular nature of the atoms, or electrons, in detectors. However, there are cases when this explanation is not sufficient or not appropriate. We will study two examples: anticorrelation after a beam-splitter in the following, and entangled photon pairs in Section 7.4.
A simple beam-splitter (BS) may just consist of a slab of transparent material. If a light beam impinges at a point of the slab, a part of the beam intensity is transmitted and another part reflected. The relative intensities of the outgoing fields depend on the refraction index of the material and the angle of incidence. In this way we have an elementary beam-splitter with one incoming channel and two outgoing channels. Actually, we have another incoming channel via a light beam arriving at the opposite side of the slab, which gives rise to two new outgoing channels. In practice, the plate is used so that the transmitted light from the first incoming channel is superposed on the reflected light of the second incoming channel, and the light reflected from the former is superposed on the light transmitted from the latter. In this way, we would have two incoming channels and two outgoing ones. In practice, beam-splitters may be more sophisticated, e.g., involving piles of plates (used for instance in many tests of Bell inequalities). Sometimes, the BS polarizes the light, thus acting as a polarizer or a polarization analyzer.
In the following, I study in more detail a balanced non-polarizing BS. If the field amplitudes of the incoming beams are E_1 and E_2, then the amplitudes in the outgoing channels will be

E_3 = (E_1 + i E_2)/√2, E_4 = (i E_1 + E_2)/√2. (29)

The imaginary unit i is appropriate if we treat the electromagnetic fields in the complex representation, as we will do throughout this article. From Equation (29), it is obvious that the energy is conserved in the BS. In fact, the sum of intensities in the incoming channels equals the similar sum in the outgoing ones, that is

|E_3|² + |E_4|² = |E_1|² + |E_2|².

In the experiment studied in the following, the field arriving at one of the incoming channels will be a signal, and a vacuum field will arrive at the other one.
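A quick numerical check of Equation (29) and of energy conservation (a sketch; any complex amplitudes will do):

```python
import numpy as np

rng = np.random.default_rng(1)

def balanced_bs(E1, E2):
    """Balanced non-polarizing beam splitter, Equation (29); the factor i on the
    reflected amplitude is what makes the transformation energy conserving."""
    E3 = (E1 + 1j * E2) / np.sqrt(2.0)
    E4 = (1j * E1 + E2) / np.sqrt(2.0)
    return E3, E4

# Random complex amplitudes in the two incoming channels
E1 = rng.normal(size=1000) + 1j * rng.normal(size=1000)
E2 = rng.normal(size=1000) + 1j * rng.normal(size=1000)
E3, E4 = balanced_bs(E1, E2)

power_in = np.abs(E1) ** 2 + np.abs(E2) ** 2
power_out = np.abs(E3) ** 2 + np.abs(E4) ** 2   # equals power_in, term by term
```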

Anticorrelation-Recombination Experiment
A dramatic exhibition of the wave-particle behaviour of light is the anticorrelation-recombination experiment [16]. A weak radiation signal, allegedly consisting of well separated photons, is sent to one of the incoming channels of a balanced beam splitter BS1, and two photodetectors, A and B, are placed in front of the outgoing channels. No coincidences are observed, which shows the corpuscular behaviour of light: a photon is not divided, but goes to one of the detectors. If the detectors are removed and the two outgoing radiation beams are recombined via the two incoming channels of another beam splitter BS2, then the detection in one of the outgoing channels depends on the length difference between the two paths from BS1 to BS2, this being a typical wave behaviour.
Our stochastic interpretation is as follows [17]. If we assumed that the vacuum quantum fields were not real fields, then only the signal field E entering BS1 would produce outgoing fields in each of the two outgoing channels. However, if the vacuum fields are real, there is another (vacuum) field E_0, with a frequency similar to that of the signal, entering BS1 via the second incoming channel, and interference is produced. Hence the outgoing fields may be written

E_A = (E + i E_0)/√2, E_B = (i E + E_0)/√2. (30)

Depending on the relative phases, one of the intensities may be large and the other one small, that is

I_A = ½(|E|² + |E_0|²) + |E||E_0| sin φ, I_B = ½(|E|² + |E_0|²) − |E||E_0| sin φ,

where φ is the relative phase of the fields E and E_0. On the other hand, the vacuum intensities would be ideally I_A0 = I_B0 = I_0 = |E_0|².
If we assume that detection is roughly proportional to the part of the arriving intensity that surpasses the ZPF level, then with I = |E|², I_0 = |E_0|², we obtain

R_A ∝ ⟨I_A − I_0⟩, R_B ∝ ⟨I_B − I_0⟩, R_AB ∝ ⟨(I_A − I_0)(I_B − I_0)⟩.

This result shows that for weak signals, that is when I is not much greater than I_0, the coincidence detection rate is inhibited, that is R_AB << R_A R_B, as observed in the commented experiment. (We define the rate as a dimensionless probability of detection per time window.) In contrast, for macroscopic (classical) light we have I >> I_0, and the ratio would be

r ≡ R_AB/(R_A R_B) ∝ ⟨I²⟩/⟨I⟩².

Hence, if the radiation has fixed (nonfluctuating) intensity, like laser light, then r = 1, meaning that the detections are uncorrelated. On the other hand, for chaotic light, where the field fluctuations are Gaussian, we would have r = 2, meaning that the detections at A and B are positively correlated. The change from r = 1 to r = 2, a phenomenon known as "photon bunching", has been interpreted as a quantum effect attributed to the Bose character of photons. In our stochastic interpretation, it is the consequence of the correlated fluctuations derived from the Gaussian character of chaotic light.
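The macroscopic limit r = ⟨I²⟩/⟨I⟩² can be checked with a small Monte Carlo run (a sketch; in the limit I >> I_0 each detector sees half the input intensity, so the common factor 1/2 cancels from the ratio):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400_000

def r(I):
    """Coincidence ratio <I_A I_B> / (<I_A><I_B>) in the macroscopic limit,
    where I_A = I_B = I/2, so the ratio reduces to <I^2>/<I>^2."""
    return np.mean(I * I) / np.mean(I) ** 2

I_laser = np.full(N, 1.0)                                  # fixed intensity
E_chaos = rng.normal(0, 1, N) + 1j * rng.normal(0, 1, N)   # Gaussian field
I_chaos = np.abs(E_chaos) ** 2                             # exponential intensity

r_laser = r(I_laser)   # 1: uncorrelated detections
r_chaos = r(I_chaos)   # ~2: photon bunching from Gaussian fluctuations
```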
In the recombination process, the fields Equation (30) will enter BS2, giving rise to the following mean intensity in one of the outgoing channels

⟨I_out⟩ = ½ [ I (1 − cos θ) + I_0 (1 + cos θ) ],

where θ is the relative phase due to the different path lengths. The device used in the experiment [16], consisting of two beam splitters and two mirrors in between, is called a Mach-Zehnder interferometer. The detection rate is proportional to

⟨I_out − I_0⟩ = ½ (I − I_0)(1 − cos θ),

meaning that a 100% visibility may be achieved. Thus, we have a wave explanation for one of the most dramatic particle behaviours of light, the anticorrelation after a beam splitter. The anticorrelation is usually named "photon antibunching" and it is considered a typically quantum phenomenon that cannot be explained by classical theories. Of course, it can be explained if we do assume that the vacuum fields are real stochastic fields. The evolution of these fields is classical (Maxwellian), but the assumption of real vacuum fields is alien to classical physics. I stress that the Planck constant appears fixing the scale of the vacuum fields, see Equation (20).
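The 100% visibility can be verified in a few lines. The sketch below assumes, as in the model above, that the detection rate is the mean output intensity minus the ZPF level I_0; the particular values of I and I_0 are illustrative.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 361)   # phase from the path-length difference

def mz_rate(I, I0, theta):
    """Detection rate at one Mach-Zehnder output: mean output intensity minus
    the ZPF level I0 (the model of this section; a sketch, not the full theory)."""
    mean_I_out = 0.5 * (I * (1.0 - np.cos(theta)) + I0 * (1.0 + np.cos(theta)))
    return mean_I_out - I0

R = mz_rate(I=1.0, I0=0.2, theta=theta)
visibility = (R.max() - R.min()) / (R.max() + R.min())   # 1.0, i.e., 100%
```

The rate vanishes at θ = 0 and peaks at θ = π, so the fringes have full contrast whatever the ratio I/I_0.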

The Vacuum in the Hilbert Space Formalism
The existence of radiation in vacuum, even at zero Kelvin, appeared for the first time in Planck's second radiation theory of 1912. This zeropoint energy of the electromagnetic field (ZPF) was disregarded because it is divergent, although the consequences of its possible reality were initially explored by several authors, including Einstein and Nernst [14]. Soon afterwards, the ZPF was forgotten due to the advent of the Bohr atomic model in 1913, that opened a new route for the "old quantum theory" which was followed by the mainstream of the community.
The ZPF reappeared in 1927 when Dirac quantized the electromagnetic field starting from an expansion in normal modes, see Equation (16), then promoting the amplitudes to be operators â_j, â_j† in a Hilbert space. These operators are usually named "annihilation and creation operators of photons" in the mode, and fulfil the commutation rules

[â_j, â_k†] = δ_jk, [â_j, â_k] = [â_j†, â_k†] = 0, (33)

where δ_jk is the Kronecker delta. In the formalism, the Hamiltonian operator of the field may be written as a sum over normal modes, that is

Ĥ = Σ_j ½ ℏω_j (â_j† â_j + â_j â_j†), (34)

where ω_j is the (angular) frequency of the normal mode j. The energy is given by the vacuum expectation of the Hamiltonian, that is

⟨0| Ĥ |0⟩ = Σ_j ℏω_j ⟨0| â_j† â_j |0⟩ + Σ_j ½ ℏω_j = Σ_j ½ ℏω_j, (35)

the former expectation being nil. The result corresponds to a mean energy ½ℏω_j per mode, which fits with the arguments of Section 3.2 above.
Around 1947, two discoveries reinforced the hypothesis of the quantum vacuum fields, namely the Casimir effect and the Lamb shift. The former has been discussed in Section 3.4. Lamb and Retherford observed an unexpected absorption of microwave radiation by atomic hydrogen, which was soon explained in terms of the interaction of the atom with the quantized electromagnetic field, which involves the vacuum radiation (ZPF). Indeed, Willis Lamb has claimed to be the discoverer of the ZPF by experiment [18]. The finding led, in a few years, to the development of quantum electrodynamics (QED), a theory that allows predictions in spectacular agreement with experiments, and it was the starting point for the whole theory of relativistic quantum fields. The success of QED rests on renormalization techniques, where it is taken for granted that particles, like electrons, are dressed with "virtual fields", making their physical mass and charge different from the bare quantities. In my view, the assumptions behind renormalization are actually a reinforcement of the reality of the quantum vacuum fields, although people avoid committing to that conclusion by using the word "virtual" as an alternative to "really existing".

The Problem of the Vacuum Energy Divergence
The main problem with the ZPF is that the total energy density in space diverges when we sum over all (infinitely many) modes. The standard solution to the difficulty is to write the annihilation operators to the right. For instance, in Equation (34), one substitutes 2â_j†â_j for â_j†â_j + â_jâ_j†. Then, the vacuum expectation of the Hamiltonian would not be Equation (35) but only its first term, which gives 0. Thus the normal ordering is equivalent to choosing the zero of energies at the level of the vacuum. It provides a practical procedure useful in quantum-mechanical calculations, but for many authors it is not a good solution. They see it as an "ad hoc" assumption, just aimed at removing unpleasant divergences. For those authors, the ZPF is a logical consequence of quantization and the solution to the divergence problem should come from a more natural mechanism.
In laboratory physics, where gravity usually plays no role, the possible divergence of the quantum vacuum energy is not too relevant a question. In fact, its possibly huge, or divergent, energy may usually be ignored by choosing the zero of energy at the vacuum level, as said above. However, this choice is no longer innocuous in the presence of gravity because, according to relativity theory, energy gravitates, whence a huge vacuum energy should produce a huge gravitational field. Therefore, the possible existence of a vacuum energy is a relevant question in astrophysics and cosmology.
A solution is to assume a cancellation between positive and negative terms in the vacuum energy. Indeed, it is the case that the vacuum electromagnetic field, viewed as a stochastic field, contributes a positive energy, and we may extend this assertion to all vacuum contributions of Bose fields. In fact, we may assume that these fields can also be expanded in normal modes and the vacuum should consist of a probability distribution of the amplitudes similar to Equation (20). The vacuum contribution of Fermi fields is quite different. We do not have a clear stochastic interpretation, but there are arguments suggesting that they contribute negative energy. The main reason is that the operators representing amplitudes of the field obey anticommutation, rather than commutation, rules. Thus it is plausible to assume that the Bose positive energy of the vacuum is cancelled by the Fermi negative energy. In this article, I will not discuss the problem further, but point out that the vacuum fields might hold the clue for understanding relevant problems in astrophysics, like the nature of dark energy [19] or dark matter [20] and the collapse of stellar objects [21]. A summary appears in chapter 7 of [2].

Weyl Transform and Wigner Function in Quantum Mechanics
After one century of successes, we know that the Hilbert space formalism of quantum mechanics (HS in the following) is extremely efficient for dealing with the microworld. As the main assumption in this article is that the quantum vacuum fields are real stochastic fields, I believe that the HS formalism could be understood as a disguised treatment of some peculiar random variables. Consequently, there should be a formalism alternative to HS where the interpretation in terms of random variables is clearer. A formalism exists that might do the job. It goes back to a proposal by Hermann Weyl [22] in 1928, known as the Weyl transform. Before discussing the use of the transform for the study of the vacuum radiation, I will make a digression about the application to the mechanics of particles.
In fact, Weyl proposed his transform for systems of particles with the design of getting equations involving quantum operators, x̂_j and p̂_j, from classical equations involving positions, x_j, and momenta, p_j, of particles. That is, his purpose was a quantization procedure of nonrelativistic classical mechanics [23,24]. Our aim in this article is the opposite, namely to get classical-like probabilistic equations from the quantum equations in order to get a realistic interpretation, i.e., a picture of reality. Therefore, we are interested in the inverse Weyl transform which, for a single degree of freedom and in one common convention (with ℏ = 1), reads

f(x, p) = (1/2π) ∫ dy ⟨x + y/2| f̂ |x − y/2⟩ exp(−i p y). (36)

It provides a function f(x_j, p_j) in phase space for any trace-class operator f̂ in the Hilbert space. In particular, when f̂ is the density operator representing a quantum state, then Equation (36) gives the Wigner function of the state in the form of a (pseudo-probability) distribution [23-25].
The relevant question is whether Equation (36) provides a realistic interpretation of the quantum states, and the answer seems to be negative. In fact, the Wigner function is not positive in general, whence it cannot be interpreted as a probability distribution. We might assume that quantum density operators represent physical states only when their Wigner function is non-negative definite, but this is too strong a restriction. Actually, I believe that the Wigner function cannot be interpreted as a phase space distribution, in spite of it being a function of positions and momenta. Indeed, physical quantum particles cannot be just particles if we assume that the vacuum fields are real. For instance, a physical electron is not a small (or pointlike) particle but a more complex system consisting of a particle plus the modified vacuum fields interacting with it. Thus, we might measure the position of the particle, at least with some uncertainty, whence the probability distribution of positions ρ(x) has a meaning, and we may assume that it is obtained via the marginal of the Wigner function, that is

ρ(x) = ∫ W(x, p) d³p ≥ 0.

Indeed, ρ(x) agrees with the standard quantum prediction for any quantum state of the particle. However, the instantaneous velocity would be highly irregular due to the interaction with the vacuum fields, whence the instantaneous momentum of the particle is meaningless, or at least not measurable. We might determine the mean velocity (or momentum) during some not too small time interval. Thus, the other marginal of the Wigner function,

σ(p) = ∫ W(x, p) d³x ≥ 0,

may correspond to information about the expected future motion of the physical particle. Indeed, the distribution σ(p) agrees with the quantum "probability distribution" of momenta. These arguments suggest why the Wigner function should not be understood as a probability distribution in phase space.
This fits with the fact that the (quantum) Heisenberg uncertainty relations forbid the existence of a simultaneously well defined position and momentum.
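These statements about marginals can be illustrated numerically for two simple one-dimensional Wigner functions with ℏ = 1 (an illustrative sketch, not from the article): the Gaussian ground state, whose W is positive everywhere, and the first excited state, whose W takes negative values near the origin while both of its marginals remain non-negative.

```python
import numpy as np

# Phase-space grid (hbar = 1)
x = np.linspace(-6.0, 6.0, 601)
p = np.linspace(-6.0, 6.0, 601)
X, P = np.meshgrid(x, p, indexing="ij")
dx, dp = x[1] - x[0], p[1] - p[0]

# Ground state: W0 >= 0 everywhere
W0 = np.exp(-X**2 - P**2) / np.pi
# First excited state: W1 < 0 near the origin, so not a probability density
W1 = (2.0 * (X**2 + P**2) - 1.0) * np.exp(-X**2 - P**2) / np.pi

rho0 = W0.sum(axis=1) * dp    # marginal rho(x) = integral of W over p
rho1 = W1.sum(axis=1) * dp    # still non-negative although W1 is not
rho0_exact = np.exp(-x**2) / np.sqrt(np.pi)   # quantum |psi_0(x)|^2
```

Even for the non-positive W1, both marginals reproduce the non-negative quantum position and momentum distributions, which is exactly the situation described above.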
In summary, we cannot assume that the quantum state of a particle, say an electron, can be defined by position and momentum as is the case in classical mechanics. As stated above, a physical electron is not a point (or small) particle, but a complex system consisting of a cloud of electrons and positrons interacting electromagnetically with the vacuum radiation (ZPF) and possibly other fields. I will discuss the subject again in Section 8.1.
Approaches different from Weyl-Wigner have been proposed for the characterization of quantum states. For instance, a new formulation of conventional quantum mechanics has been proposed in which the quantum states are identified with probability distributions [26]. The invertible map from density operators and wave functions onto the probability distributions describing the quantum states is constructed both for systems with continuous variables and for systems with discrete variables, using the Born rule and a suggested method of dequantizer-quantizer operators. Examples are qubits (spin-1/2, two-level atoms), the harmonic oscillator and free particles. The Schrödinger and von Neumann equations, as well as equations for the evolution of open systems, may be written in the form of linear classical-like equations for the probability distributions determining the quantum system states. In my view, this approach [26] is a formal (mathematical) construct that does not lead to any plausible picture of reality.

The Weyl-Wigner Formalism in Quantum Field Theory
The Weyl transform may be trivially extended to the radiation field provided we interpret x̂_j and p̂_j to be (proportional to) the sum and the difference of the so-called creation, â_j†, and annihilation, â_j, operators of the j-th normal mode of the field [27,28]. That is,

x̂_j = √(ℏ/2ω_j) (â_j + â_j†), p̂_j = i √(ℏω_j/2) (â_j† − â_j). (37)

Here ℏ is the Planck constant divided by 2π, c is the velocity of light and ω_j the frequency of the normal mode. In the following, I will use units ℏ = c = 1, but these parameters will be restored in some cases. For the sake of clarity, I shall represent the operators in HS with a 'hat', e.g., â_j, â_j†, and the amplitudes in the Weyl formalism without a 'hat', e.g., a_j, a_j*. The transform provides an operator f̂, written in terms of the operators â_j and â_j†, from any function f of the complex amplitudes a_j and a_j*; for a single mode, and in one common convention, it reads

f̂ = (1/π²) ∫ d²λ ∫ d²a f(a, a*) exp[λ(â† − a*) − λ*(â − a)]. (38)

The transform is invertible, that is

f(a, a*) = (1/π²) ∫ d²λ Tr{ f̂ exp(λâ† − λ*â) } exp(λ*a − λa*).

The transform is linear, that is, if f is the transform of f̂ and g the transform of ĝ, then the transform of f̂ + ĝ is f + g. Other properties may be seen in the references [27,28].
Getting the field operators associated with given field amplitudes, or obtaining the amplitudes from the operators, is straightforward taking Equations (37) or (38) into account. Particular instances are the following,

â_j ↔ a_j, â_j† ↔ a_j*, (â_j)ⁿ ↔ (a_j)ⁿ, (â_j†)ⁿ ↔ (a_j*)ⁿ, (39)

n an integer.
I stress that the quantities a_j and a_j* are c-numbers and therefore they commute with each other. As said above, it is standard in HS to call â_j and â_j† the annihilation and creation operators of photons, respectively. We will use these names here for clarity, although our study will not introduce "photons" at any stage. The first two of Equation (39) mean that, in expressions linear in creation and/or annihilation operators, the Weyl transform Equation (38) just implies "removing the hats". However, this is not the case in nonlinear expressions in general. In fact, from the latter two of Equation (39) plus the linearity property, it follows that for a product in the Weyl formalism the HS counterpart is

a_jⁿ (a_k*)^m ↔ [ (â_j)ⁿ (â_k†)^m ]_sym, (40)

where the subindex sym means that the term is actually a sum of products of the operators involved, written in all possible orderings, divided by the number of terms. Hence, the stochastic field amplitudes corresponding to a product of field operators may be easily obtained by first transforming the product of operators into a sum of symmetrically ordered terms. This may always be achieved taking the commutation rules Equation (33) into account.
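The symmetrization rule can be checked with a truncated Fock-basis matrix representation of â (an illustrative numerical sketch; the truncation spoils the identities only in the last row and column). In particular, (â†â + ââ†)/2 = â†â + 1/2 follows from the commutation rule, which is the operator side of the correspondence â†â ↔ |a|² − 1/2 discussed later.

```python
import numpy as np

D = 20   # truncation dimension (illustrative)
a = np.diag(np.sqrt(np.arange(1.0, D)), k=1)   # annihilation operator, Fock basis
ad = a.conj().T                                # creation operator

comm = a @ ad - ad @ a                         # [a, a^dag] = 1 (up to truncation)
sym = (ad @ a + a @ ad) / 2.0                  # symmetrically ordered a^dag a
diff = sym - (ad @ a + 0.5 * np.eye(D))        # zero except truncation artifacts
```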
In analogy with particle mechanics, the Weyl transform Equation (38) of the density operator (or density matrix) representing a quantum state of radiation may be named the Wigner function of the state. In particular, Equation (20) is the Wigner function of the vacuum state. Therefore, I will name WW the formalism obtained from HS via the Weyl transform Equation (38). WW admits an interpretation in terms of random variables and stochastic processes provided we respect some constraints. In particular, the radiation Wigner function may be interpreted as a probability distribution only if it is non-negative.
The evolution of the free field given in Equation (16) may be interpreted saying that the amplitude of every normal mode fulfils

a_j(t) = a_j(0) exp(−i ω_j t). (41)

The evolution of the field operators in HS, obtained from the Weyl transform, is quite similar. It is interesting that it may be derived in general via a commutator involving the HS Hamiltonian, as is well known. In the particular case of Equation (41) we have

dâ_j/dt = −i [â_j, Ĥ] = −i ω_j â_j, whence â_j(t) = â_j(0) exp(−i ω_j t). (42)

This illustrates the connection, via the Weyl transform, between the classical (Maxwell) evolution Equation (18) and the HS Heisenberg equation of motion Equation (42) in a particular case. For the general treatment of evolution in the WW formalism, see the references [27,28].
Equations (37) or (38) are suited to transforming observables, represented by operators in the HS formalism, into functions of the field amplitudes in the WW formalism. However, the density operators are not always written as functions of the operators â_j, â_j†, and then the Weyl transform requires a more sophisticated treatment, as shown in the following.

States of Radiation in Weyl-Wigner and Hilbert Space Formalisms
The Weyl transform should be written for the total amplitudes of the modes, which for clarity we will label c_j, c_j* as in Equation (24), in order to avoid any confusion with the a_j, a_j* used above for the particular case of the vacuum state. Thus we shall rewrite Equation (38) as follows (for a single mode)

f̂ = (1/π²) ∫ d²λ ∫ d²c f(c, c*) exp[λ(â† − c*) − λ*(â − c)]. (43)

A relevant application of the Weyl transform is to get the operator, v̂, associated with the vacuum state in HS. This operator will be the Weyl transform of the stochastic vacuum distribution Equation (20).

I show from Equation (43) that the solution is

v̂ = |0⟩⟨0|, (44)
where |0⟩ is named the "vacuum state vector", which fulfils

â |0⟩ = 0, (45)

0 meaning the null vector in the Hilbert space. We shall do the proof for a single mode, taking into account the Campbell-Hausdorff formula,

exp(Â + B̂) = exp(Â) exp(B̂) exp(−[Â, B̂]/2), (46)

which is valid if the operator [Â, B̂] commutes with Â and with B̂. Hence the trace involved in the inverse transform becomes

Tr{ v̂ exp(λâ† − λ*â) } = ⟨0| exp(λâ†) exp(−λ*â) |0⟩ exp(−|λ|²/2) = exp(−|λ|²/2). (47)

If this is inserted in the inverse transform we get, for every mode,

(1/π²) ∫ d²λ exp(−|λ|²/2) exp(λ*c − λc*) = (2/π) exp(−2|c|²),

which agrees with Equation (20). Another example is the HS state corresponding to the WW state Equation (24). In this case, the distribution function f(c, c*) is given by Equation (24) and we must find the associated density matrix f̂. By analogy with Equation (44), I propose that the solution is

f̂ = |s⟩⟨s|, (49)

where |s⟩ is defined by

|s⟩ = exp(−b â† + b* â) |0⟩, (50)

by analogy with Equation (23). I point out that b is a complex c-number (or more properly a number times the unit operator in the Hilbert space) and â is the standard annihilation operator fulfilling Equation (45). Then steps similar to those leading to Equation (47) give

Tr{ |s⟩⟨s| exp(λâ† − λ*â) } = exp(−|λ|²/2) exp(λ*b − λb*).

Inserting this in the inverse transform and performing the integrals, we obtain Equation (24), which confirms that Equation (49) is indeed the HS density matrix representative of the state Equation (24). It is interesting that the vector state Equation (50) is named a coherent state of the radiation, characterized by the amplitude −b, which fulfils the equation â |s⟩ = −b |s⟩.
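The eigenvalue property â|s⟩ = −b|s⟩ can be checked numerically by expanding the coherent state in a truncated Fock basis (a sketch; b is an illustrative amplitude and the truncation error is negligible for the chosen dimension).

```python
import numpy as np
from math import factorial

D = 40                # Fock-space truncation (illustrative)
b = 0.8 - 0.3j        # illustrative complex amplitude
alpha = -b            # the coherent state |s> has amplitude -b

# Fock expansion |alpha> = exp(-|alpha|^2/2) sum_n alpha^n / sqrt(n!) |n>
n = np.arange(D)
fact = np.array([float(factorial(k)) for k in n])
s = np.exp(-abs(alpha) ** 2 / 2.0) * alpha ** n / np.sqrt(fact)

a = np.diag(np.sqrt(np.arange(1.0, D)), k=1)   # annihilation operator
residual = a @ s - alpha * s                   # ~0: |s> is an eigenvector of a
norm = np.sum(np.abs(s) ** 2)                  # ~1 up to truncation
```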
As pointed out above, not all density matrices that people assume to correspond to states in HS are actually physical states. In particular, this happens in all instances where the radiation Wigner function is not positive, e.g., all states with $n$ photons, $n \geq 1$. On the other hand, all (positive) probability distributions of the amplitudes $a_j$, $a_j^*$ might be considered possible states of the (stochastic) radiation field, but it may be that many of them do not exist in nature and cannot be manufactured in the laboratory. In any case, I stress that the realistic interpretation of quantum mechanics that we are searching for does not require getting an intuitive picture of all states and observables assumed in HS (which is indeed impossible), but understanding actual or feasible experiments.
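The claim that photon-number states have non-positive Wigner functions is easy to check numerically. The following sketch (my own illustration, not part of the original derivation) assumes the convention in which the vacuum Wigner function is $W_0(\alpha) = (2/\pi)e^{-2|\alpha|^2}$, which reproduces the mean square modulus $1/2$ of Equation (53); the one-photon Wigner function is then $W_1(\alpha) = (2/\pi)(4|\alpha|^2 - 1)e^{-2|\alpha|^2}$, negative near the origin:

```python
import numpy as np

# Polar-grid quadrature over the phase-space plane, alpha = x + i y.
r = np.linspace(0.0, 6.0, 20001)
dr = r[1] - r[0]

def plane_integral(f):
    # integral over the plane of a radially symmetric f(|alpha|^2)
    return np.sum(f(r**2) * 2.0 * np.pi * r) * dr

W0 = lambda u: (2/np.pi) * np.exp(-2*u)              # vacuum Wigner function
W1 = lambda u: (2/np.pi) * (4*u - 1) * np.exp(-2*u)  # one-photon Wigner function

print(plane_integral(W0))                    # ~1.0 (normalized)
print(plane_integral(W1))                    # ~1.0 (normalized)
print(plane_integral(lambda u: u * W0(u)))   # ~0.5, i.e., <|a|^2> = 1/2
print(plane_integral(lambda u: u * W1(u)))   # ~1.5, i.e., n + 1/2 with n = 1
print(W1(0.0))                               # -2/pi < 0: not a true probability
```

So $W_1$ integrates to one and gives the expected intensity $n + 1/2$, yet it cannot be read as a probability density.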

Expectation Values
Expectation values may be calculated in the WW formalism as follows. In the HS formalism they read $\mathrm{Tr}(\hat\rho \hat M)$, or in particular $\langle\psi|\hat M|\psi\rangle$, whence the translation to the WW formalism is obtained taking into account that the trace of the product of two operators becomes an integral, over the phase-space measure $d\Gamma$, of the product of their Weyl transforms. That integral is the WW counterpart of the trace operation in the HS formalism. Particular instances are the following expectations, which will be of interest later,
$\langle |a_j|^{2n} \rangle \equiv \int d\Gamma\, W_0\, |a_j|^{2n} = \frac{n!}{2^n}, \qquad \langle a_j^n a_k^{*m} \rangle = \delta_{jk}\,\delta_{mn}\, \langle |a_j|^{2n} \rangle = \delta_{jk}\,\delta_{mn}\, \frac{n!}{2^n},$
where $W_0$ is the Wigner function of the vacuum, Equation (20). This means that in the WW formalism the field amplitude $a_j$ (coming from the vacuum) behaves like a complex random variable with a Gaussian distribution and mean square modulus $\langle |a_j|^2 \rangle = 1/2$.
I point out that any mode not entering the function $M(a_j, a_j^*)$ of Equation (52) gives unity in the integration, due to the normalization of the Wigner function Equation (20). An important consequence of Equation (53) is that normal (antinormal) ordering of a creation and an annihilation operator in the Hilbert space formalism becomes subtraction (addition) of 1/2 from (to) the field intensity in the WW formalism. The normal ordering rule is thus equivalent to subtracting the vacuum contribution, as said above.
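These Gaussian moments are easy to verify numerically. A minimal sketch (the sample size is arbitrary) draws a complex amplitude with $\langle|a|^2\rangle = 1/2$, compares $\langle|a|^{2n}\rangle$ with $n!/2^n$, and illustrates the $\pm 1/2$ ordering rule:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
N = 1_000_000
# complex Gaussian vacuum amplitude: <|a|^2> = 1/2 (Re and Im each with variance 1/4)
a = rng.normal(0, 0.5, N) + 1j * rng.normal(0, 0.5, N)

for n in range(1, 5):
    print(n, np.mean(np.abs(a)**(2*n)), factorial(n) / 2**n)  # estimate vs n!/2^n

# normal ordering a†a -> |a|^2 - 1/2 = 0 on the vacuum (ZPF subtracted);
# antinormal ordering a a† -> |a|^2 + 1/2 = 1
print(np.mean(np.abs(a)**2) - 0.5, np.mean(np.abs(a)**2) + 0.5)
```

The Monte Carlo estimates converge to $1/2$, $1/2$, $3/4$, $3/2$ for $n = 1, \dots, 4$, as Equation (53) states.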

Entanglement and Bell Inequalities
In this section, I comment on two related difficulties that allegedly prevent a realistic interpretation of quantum theory, namely the non-classical properties of entangled states and the empirical violation of Bell inequalities. I shall show that both difficulties may be removed if we assume that the quantum vacuum fields are real stochastic fields. In particular, I shall study the relevance of the vacuum electromagnetic field in quantum optics.

Entanglement
Entanglement is a quantum property that may be easily defined within the HS formalism, but the definition does not provide any intuitive picture. It appears in systems with several degrees of freedom when the total state vector of the system cannot be written as a product of vectors associated with one degree of freedom each. In formal terms, a typical entangled state is the following,
$|\psi\rangle = \sum_{m,n} c_{mn}\, |\psi_m(1)\rangle\, |\psi_n(2)\rangle,$
where 1 and 2 correspond to two different degrees of freedom, usually belonging to different subsystems that may be placed far from each other, and $c_{mn}$ are complex numbers. The essential condition is that the state Equation (54) cannot be written as a single product, that is, the sum cannot be reduced to just one term via a change of basis in the Hilbert space. Entanglement appears as a specifically quantum form of correlation, which is claimed to be dramatically different from the correlations that appear in all other branches of science, including classical physics. The relevance of entanglement was stressed in 1935 by Schrödinger [29], who wrote that it is not one but the characteristic trait of quantum mechanics. He also pointed out the difficulty of understanding entanglement with his celebrated example of the cat suspended between life and death. Indeed, if one assumes that quantum mechanics is complete, i.e., that a state-vector like Equation (54) represents a pure state, then a realistic interpretation is impossible, because we are confronted with consequences in sharp contradiction with both intuition and a well established paradigm, namely that complete information about the whole requires complete information about every part. In fact, we are compelled to believe that a state-vector like Equation (54) represents complete information about the state of the system, but incomplete information about every one of the subsystems.
Indeed, according to quantum theory, the state of the first subsystem should be obtained by taking the partial trace with respect to the second subsystem. Writing Equation (54) in Schmidt (biorthogonal) form with a single index, $|\psi\rangle = \sum_m c_m |\psi_m(1)\rangle |\psi_m(2)\rangle$, this leads to the following density matrix (assuming all state-vectors normalized),
$\hat\rho_1 = \mathrm{Tr}_2\, |\psi\rangle\langle\psi| = \sum_m |c_m|^2\, |\psi_m(1)\rangle\langle\psi_m(1)|.$
The density matrix represents a mixed state, where the information is incomplete; that is, we only know the probabilities, $P_m = |c_m|^2$, for the first subsystem to be in the different states $|\psi_m(1)\rangle$.
An important result is that entanglement is a necessary condition for the violation of Bell inequalities, as discussed in the following [30].

Bell Inequalities
It is common wisdom that any correlation between two events, say A and B, is either a causal connection or derives from a common cause. There is a causal connection if A is the cause of B or B the cause of A, and a common cause means that there is another event C that causes both A and B. In formal terms, we may write either A ⇒ B or B ⇒ A for a causal connection, and C ⇒ A and C ⇒ B for a common cause. In 1964, John Bell allegedly proved that the said common wisdom is not true according to quantum mechanics. In fact, he derived inequalities [31] that he claimed to be necessary conditions for the existence of a common cause, and pointed out possible experiments where the inequalities would be violated.
Typical experimental tests of the Bell inequalities consist of preparing a system that produces pairs of signals, one of them going to an observer Alice and the other one to an observer Bob. Alice may measure a dichotomic property $a_1$ on her signal with the possible results $\{0, 1\}$, and in another run of the experiment she may measure $a_2$, also with the possible results $\{0, 1\}$. Similarly, Bob may measure either $b_1$ or $b_2$ with the possible results $\{0, 1\}$. Alice and Bob may perform coincidence measurements of $a_j$ and $b_k$. After many runs of the experiment with identical preparations of the system, Alice may obtain from the frequencies the single probability, $P(a_j)$, that the result of her measurement is 1, and similarly Bob may get the probability $P(b_k)$. They may also obtain the probability $P(a_j b_k)$ that both results are 1 in a coincidence measurement. Then, the following Bell inequality [32],
$P(a_1 b_1) - P(a_1 b_2) + P(a_2 b_1) + P(a_2 b_2) \leq P(a_2) + P(b_1),$
should hold true. The relevant fact is that quantum mechanics predicts violations of Bell inequalities in some cases. The contradiction has been named "Bell's theorem": quantum mechanics is not compatible with local realism. Local realism is the assertion that all correlations in nature are either causal connections or derive from a common cause. The word "local" is introduced because a direct communication between Alice and Bob could produce results violating Equation (56), which would invalidate the test. The possible communication is named local if any possible information travels with velocity not greater than the speed of light, whence locality should be better named "relativistic causality". As a consequence, the crucial experiments must be performed so that the coincidence measurements by Alice and Bob take place within a time window ∆t smaller than the distance between their measuring devices divided by the velocity of light, that is, with spacelike separation in the sense of relativity theory.
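That the inequality holds for every common-cause model can be checked by brute force: a common-cause (local hidden-variables) model is a probabilistic mixture of deterministic assignments of the four outcomes, and the inequality is linear in the probabilities, so it suffices to check the 16 deterministic strategies. A minimal sketch, assuming the Clauser-Horne form of Equation (56):

```python
from itertools import product

# CH functional: P(a1 b1) - P(a1 b2) + P(a2 b1) + P(a2 b2) - P(a2) - P(b1).
# For a deterministic strategy the outcomes a1, a2, b1, b2 are fixed in {0, 1}
# and every probability is the corresponding product of outcomes; a mixture of
# strategies is a convex combination, so the maximum over the 16 deterministic
# cases bounds the functional for any common-cause model.
worst = max(a1*b1 - a1*b2 + a2*b1 + a2*b2 - a2 - b1
            for a1, a2, b1, b2 in product((0, 1), repeat=4))
print(worst)  # 0: no common-cause model can exceed the bound
```

The maximum is 0, so any model of this type satisfies Equation (56); the quantum prediction exceeding the bound is what constitutes the alleged conflict.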
Many experiments have been performed in the last 50 years in order to test Bell inequalities, with results that generally agree with the quantum predictions, but there are loopholes in the proof that local realism is refuted. In particular, in most of the performed experiments the spacelike separation is not guaranteed. The reader should consult the vast literature on the subject; see, e.g., [30,33].
In the last decades, most tests of the inequalities have used entangled photon pairs produced via spontaneous parametric down conversion (SPDC). In Section 7.6, I shall analyze a representative test similar to those providing for the first time the loophole-free violation of a Bell inequality [34,35]. The empirical violation is interpreted as a refutation of local realism, but I will show that the commented experiments may be interpreted as locally realistic even if a Bell inequality is violated. That is, I will prove that the Bell inequalities are not necessary conditions for local realism, contrary to common wisdom.

Spontaneous Parametric Down Conversion (SPDC)
SPDC has been the main source of entangled photon pairs since about 1980. In the following, I will study, within the quantum Hilbert space formalism (HS), the SPDC process and a simple experiment involving entangled photon pairs. I shall work in the Heisenberg picture, where the observables evolve (see Equation (58) below) but the state vector is fixed, in our case the vacuum state $|0\rangle$. In Section 7.4, I shall pass to the WW formalism, which suggests an interpretation of SPDC experiments in terms of random variables and stochastic processes, without any reference to photons.
SPDC is produced when a pumping laser impinges on a crystal possessing a nonlinear electric susceptibility. Radiation with several colors may be observed going out from the opposite side of the crystal. By means of appropriate apertures, two beams of the radiation may be selected, which in quantum language consist of a set of entangled photon pairs, one photon of every pair in each beam.
The HS theory of the process is as follows, with the simplification of taking only two radiation modes into account, having amplitudes $\hat a_s$, $\hat a_i$. Avoiding a detailed study of the physics inside the crystal, which may be seen elsewhere [36,37], we might describe the phenomenon with a model interaction Hamiltonian [38], that is,
$\hat H_I = i\hbar A\, \hat a_s^\dagger \hat a_i^\dagger e^{-i\omega_P t} + \mathrm{h.c.},$
when the laser is treated as a classically prescribed, undepleted and spatially uniform field of frequency $\omega_P$. The interaction of the pumping laser with the incoming vacuum mode, $\hat a_s$, within the crystal produces a new field with amplitude $A\hat a_s^\dagger$, named the "signal". If the beams have been adequately chosen, that signal travels superposed to the vacuum field $\hat a_i$ after exiting the crystal. Similarly, the vacuum field $\hat a_i$ produces a field $A\hat a_i^\dagger$, named the "idler", that travels superposed to the vacuum field $\hat a_s$.
As a result, the radiation fields at the crystal exit may be represented by Equation (58), where the wavevectors $\mathbf{k}_s$ and $\mathbf{k}_i$ form a finite angle with each other. The parameter $D$ is proportional to the interaction coefficient $A$ of Equation (57), and it depends also on the crystal size. In practice, it fulfils $|D| \ll 1$. The following equality holds for the frequencies of the selected beams,
$\omega_P = \omega_s + \omega_i,$
which is usually interpreted assuming that the signal and idler photons, with energies $\hbar\omega_s$ and $\hbar\omega_i$, are the result of the division of one laser photon with energy $\hbar\omega_P$. That is, Equation (59) is viewed as "energy conservation" in the splitting of laser photons. However, I interpret it as a condition of frequency matching, induced by the nonlinear susceptibility, with no reference to photons. In the following, I will ignore the spacetime dependence, whence Equation (58) will be written
$\hat E_A^{(+)} = \hat a_s + D\, \hat a_i^\dagger, \qquad \hat E_B^{(+)} = \hat a_i + D\, \hat a_s^\dagger.$
These equations are the formal representation of entangled photon pairs in the Heisenberg picture of the HS formalism, and they show a strong correlation between the fields $\hat E_A^{(+)}$ and $\hat E_B^{(+)}$. As a simple application, I shall derive the quantum prediction for an experiment that consists of measuring the single and coincidence detection rates when the beams with the fields of Equation (60) arrive at Alice's and Bob's detectors, respectively. It is convenient to get the quantum prediction in terms of the probability of detection, $P$, in a given time window. Thus, if we divide the unit of time into a number $n$ of windows, the detection rate would be $R = nP$. In the quantum HS formalism, Alice's single detection probability is given by the following vacuum expectation (to order $O(|D|^2)$),
$P_A = \langle 0 |\, \hat E_A^{(-)} \hat E_A^{(+)}\, | 0 \rangle = |D|^2,$
where only one out of four terms contributes, and similarly for Bob. It is easy to prove that the spacetime factors, explicit in Equation (58), cancel.
The quantum prediction for the coincidence detection probability is
$P_{AB} = \langle 0 |\, \hat E_A^{(-)} \hat E_B^{(-)} \hat E_B^{(+)} \hat E_A^{(+)}\, | 0 \rangle.$
In our case, taking into account that $\hat E_A^{(+)}$ and $\hat E_B^{(+)}$ commute, we obtain, to order $O(|D|^2)$, $P_{AB} = |D|^2$. The quantum predictions Equations (61) and (62) show that the correlation is the maximum possible, that is, the coincidence detection rate equals the single rate of either Alice or Bob. In contrast, if there were no correlation, we should have $P_{AB} = P_A P_B = |D|^4 \ll |D|^2$. In any case, single and coincidence detection probabilities obviously must fulfil $P_{AB} \leq P_A$ and $P_{AB} \leq P_B$. In actual experiments, the predictions for real detectors should take the detection efficiency into account. If it is $\eta < 1$, equal for both detectors, the prediction would be $P_A = P_B = \eta |D|^2$, $P_{AB} = \eta^2 |D|^2$, which is confirmed in actual experiments.
Entanglement of the form of Equation (54) may be exhibited if we pass to the Schrödinger picture, where the evolution goes into the state. The appropriate representation of the joint quantum state of the radiation at Alice's and Bob's detectors is
$|\psi\rangle = \sqrt{1 - |D|^2}\; |0\rangle_A |0\rangle_B + D\, |1\rangle_A |1\rangle_B,$
which may be interpreted as saying that the state of the radiation is entangled and consists of two terms, Alice and Bob having one photon each in the second term and neither of them having photons in the first term. I stress that in the HS formalism of quantum theory, Equation (63) represents a pure state, not a statistical mixture. It cannot be interpreted as a probability $|D|^2$ of having two photons and a probability $1 - |D|^2$ of having no photons. If $\hat N_A$ and $\hat N_B$ are the photon number (operator) observables for Alice and Bob in a given time window, the single detection probability will be $P_A = \langle\psi|\hat N_A|\psi\rangle$, and similarly for $P_B$. From the two terms of $|\psi\rangle$, Equation (63), we get four terms for the expectation, but three of them do not contribute, whence $P_A = |D|^2$. The coincidence probability, $P_{AB} = \langle\psi|\hat N_A \hat N_B|\psi\rangle$, also consists of four terms, but only one contributes, giving $P_{AB} = |D|^2$. In summary, Equation (63) exhibits entanglement between the vacuum and the two-photon state, as has been pointed out [39].

Stochastic Interpretation of the Correlation Experiment
The quantum-mechanical prediction for the experiment commented on in the previous section may be easily worked out in the WW formalism. The Weyl transforms of the field operators of Equation (60) are
$E_A^{(+)} = a_s + D\, a_i^*, \qquad E_B^{(+)} = a_i + D\, a_s^*.$
A vacuum expectation in HS corresponds in WW to an average weighted by the vacuum probability distribution, Equation (20). However, the detection probabilities in WW cannot be obtained by just taking averages of Equation (64), but should be obtained from the Weyl transform of the HS vacuum expectations. For Alice's single detection probability, the Weyl transform of Equation (61) is
$P_A = \int d\Gamma\, W_0 \left[ \left( |a_s|^2 - \tfrac12 \right) + |D|^2 \left( |a_i|^2 + \tfrac12 \right) \right] = |D|^2,$
where Equation (53) has been taken into account. I ignore two terms that do not contribute and are not relevant for the interpretation; similarly for Bob's detection probability $P_B$. The result agrees with the prediction using HS, as it should, because the WW formalism is an equivalent form of quantum theory for the radiation field. The different signs in front of 1/2 in the two terms of Equation (65) may seem strange. Of course, they appear in the Weyl transform of Equation (61) because the former comes from the vacuum expectation of $\hat a_s^\dagger \hat a_s$, which is zero, while the latter comes from the vacuum expectation of $\hat a_i \hat a_i^\dagger$, which is unity. However, in the WW formalism we are working with commuting amplitudes, and the different ordering should not make any difference. We may understand the reason for the signs intuitively by taking into account that the second term of Equation (65) corresponds to the signal (it contains $|D|^2$), but the first term corresponds to vacuum modes that should not contribute to the detection and therefore should be removed. The addition of 1/2 in the signal term effectively multiplies the detection probability by 2. This is more difficult to understand intuitively and I will not comment further.
In order to derive the coincidence detection probability, P AB , we might proceed translating to the WW formalism the calculation made in Section 7.3 using the HS formalism, which led to Equation (62) (see [28]). However, I will not do that but make a direct stochastic derivation of single and coincidence probabilities, which may allow greater understanding of the physics of the experiment. I will start from the fields Equation (64) and proceed using classical laws and plausible assumptions for the correlations.
I shall start by proposing a model of detection. According to our assumptions, any photodetector in free space is immersed in an extremely strong stochastic radiation, infinite if no cut-off existed, see Equation (20). Thus, how might we explain that detectors are not activated by the vacuum radiation? Firstly, the strong vacuum field is effectively reduced to a weaker level if we assume that only radiation within some (small) frequency interval, the interval of sensitivity $(\omega_1, \omega_2)$, is able to activate a photodetector. Actually, frequency selection is quite common in radiation detection, for instance when tuning a radio or TV. The theoretical explanation of this fact is easy: detection takes place via resonance with some oscillator having the same characteristic frequency as the radiation to be detected, for instance an appropriate electric circuit in the case of radiowaves, or a molecular resonator for visible light (e.g., molecules with an appropriate excitation frequency inside the elements of color vision in our retina).
However, the problem is not yet solved because the signals involved in experiments may have intensities of the order of vacuum radiation in the said frequency interval, whence the detector would be unable to distinguish a signal from ZPF noise. Our assumption is that a detector may be activated only when the Poynting vector (i.e., the directional energy flux) of the incoming radiation is different from zero, including both signal and vacuum fields. To make a trivial comparison, we live immersed in air but its pressure is almost unnoticed except when there is strong wind producing an unbalanced force that pushes us towards a given direction.
Thus, a plausible hypothesis is that light detectors possess an active area, the probability of a photocount depending on the integrated energy flux crossing that area during some activation time, $T$. The assumption allows us to understand why the signals, but not the vacuum fields, activate detectors. Indeed, the ZPF arriving at any point (in particular the detector) would be isotropic on average, hence the associated energy flux integrated over a large enough time would be very small, because the fluctuations are averaged out. Therefore, only the signal, which is directional, would produce a large integrated energy flux during the activation time, thus giving rise to photocounts. A problem remains because the integrated flux would not be strictly zero. Indeed, only the integrated flux during a time interval $T$, divided by $T$, would go to zero when $T \to \infty$. Hence we may predict the existence of some dark rate induced by vacuum fluctuations, even at zero Kelvin. In summary, we are assuming that photocounts are not produced by an instantaneous interaction of the radiation field with the detector; the activation requires some time interval, a fact well known by experimentalists but sometimes ignored by theoreticians.
After that, I will obtain the detection probabilities as averages of intensities derived from the fields of Equation (64). I will assume that the detection probability is proportional to the mean intensity arriving at the detector, taking the proportionality coefficient as unity for simplicity. For single detection by Alice, we get the detection probability as the average of the intensity arriving at her detector, that is,
$P_A = \langle I_A + I_A^{ZPF} \rangle.$
According to our previous analysis, we should use time averages, but we may assume that they are equal to ensemble averages, a kind of ergodic property. Equation (66) has two intensity contributions, the former, $I_A$, coming from the signal and the latter, $I_A^{ZPF}$, from the ZPF. On the other hand, if the laser pumping on the crystal were switched off, then the total intensity arriving at Alice's detector should be zero on average, that is,
$\langle I_{A0} + I_A^{ZPF} \rangle = 0,$
where $I_{A0}$ is the intensity arriving at the detector from the source in place of the signal when there is no pumping. The intensity $I_{A0}$ comes from the vacuum fields and it may be derived from Equation (64) putting $D = 0$. From Equation (67), the probability Equation (66) becomes
$P_A = \langle I_A \rangle - \langle I_{A0} \rangle,$
where the second term corresponds to the ZPF subtraction. This means that we should not expect any detection if there is no signal, a quite plausible result.
The radiation intensities may be obtained from the fields taking Equation (64) into account, that is,
$I_A = |a_s + D a_i^*|^2, \qquad I_B = |a_i + D a_s^*|^2, \qquad I_{A0} = |a_s|^2, \qquad I_{B0} = |a_i|^2.$
Equations (66) to (68) then lead to
$P_A = P_B = |D|^2 \langle |a_i|^2 \rangle = \tfrac12 |D|^2, \quad (69)$
where we take into account that the cross terms, being odd in the amplitudes, have zero average. The coincidence detection probability for a given time window will be the average of the product of the field intensities, $\langle (I_A + I_A^{ZPF})(I_B + I_B^{ZPF}) \rangle$. As in the derivation of Equation (69), the coincidence probability $P_{AB}$ should be zero when the pumping is off. From Equations (67), (70) and (71), and the plausible assumption that $I_A^{ZPF}$ and $I_B^{ZPF}$ are uncorrelated with the signals, we obtain
$P_{AB} = \langle I_A I_B \rangle - \langle I_{A0} I_{B0} \rangle - \langle I_A \rangle \langle I_{B0} \rangle - \langle I_{A0} \rangle \langle I_B \rangle + 2 \langle I_{A0} \rangle \langle I_{B0} \rangle. \quad (72)$
The averages of the single intensities in Equation (68) may be easily obtained taking the vacuum distribution Equation (20) into account; we get $\langle I_A \rangle = \langle I_B \rangle = \tfrac12 + \tfrac12 |D|^2$ and $\langle I_{A0} \rangle = \langle I_{B0} \rangle = \tfrac12$. For the averages of products of intensities, the terms with an odd number of amplitudes do not contribute (see Equation (53)), whence we get a sum of averages to order $|D|^2$. We will show that the last term of that sum does not contribute, whence, collecting all terms, Equation (72) becomes
$P_{AB} = \tfrac12 |D|^2. \quad (77)$
The reason why the last term of Equation (76) does not contribute is that we cannot ignore the spacetime phase factors in this case, see Equation (58). In fact, $\mathrm{Re}(D a_s a_i^*)$ comes from the intensity arriving at Alice, but $\mathrm{Re}(D a_s^* a_i)$ from the Bob intensity. In the former we should include a phase $\exp(i\phi)$, and in the latter $\exp(i\chi)$, these phases being uncorrelated. Therefore, in the average of the last term of Equation (76), the phases give a nil contribution. In contrast, all other terms contain absolute values, whence the phases disappear.
The results Equations (69) and (77), which have been obtained from the fields via our stochastic approach, reproduce the relevant result of the experiment, namely that there is maximal positive correlation, shown by the equality $P_{AB} = P_A = P_B$, which is also predicted by the HS results Equations (61) and (62) (with a factor 1/2 with respect to the latter, as explained above).
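The stochastic derivation above can be simulated directly. In the sketch below (an illustration with hypothetical values of $D$ and the sample size), the vacuum amplitudes are complex Gaussians with $\langle|a|^2\rangle = 1/2$, the two signal terms of Equation (64) carry independent random spacetime phases, following the argument after Equation (76), and the ZPF subtractions of Equations (68) and (72) are applied:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 1_000_000, 0.1

def gauss(n):
    # complex Gaussian vacuum amplitude with <|a|^2> = 1/2
    return rng.normal(0, 0.5, n) + 1j * rng.normal(0, 0.5, n)

a_s, a_i = gauss(N), gauss(N)
# independent spacetime phases of the two signal terms (argument after Eq. (76))
ph_A = np.exp(1j * rng.uniform(0, 2*np.pi, N))
ph_B = np.exp(1j * rng.uniform(0, 2*np.pi, N))

IA  = np.abs(a_s + D * ph_A * np.conj(a_i))**2   # intensity at Alice, from Eq. (64)
IB  = np.abs(a_i + D * ph_B * np.conj(a_s))**2   # intensity at Bob
IA0 = np.abs(a_s)**2                             # same intensities with pumping off
IB0 = np.abs(a_i)**2

PA = IA.mean() - IA0.mean()                      # ZPF-subtracted singles, Eq. (68)
PAB = (np.mean(IA * IB) - np.mean(IA0 * IB0)     # coincidences, Eq. (72)
       - IA.mean() * IB0.mean() - IA0.mean() * IB.mean()
       + 2 * IA0.mean() * IB0.mean())
print(PA, PAB, 0.5 * D**2)   # PA ≈ PAB ≈ |D|^2/2: maximal correlation
```

Both estimates converge to $|D|^2/2$, reproducing $P_{AB} = P_A = P_B$ of Equations (69) and (77).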
The picture of the experiment in our approach is quite different from the picture in terms of photons suggested by the HS formalism. In HS, a few photons in the (usually pulsed) laser beam are assumed to split via the interaction with the nonlinear crystal, giving two photons each. The probability of producing an entangled photon pair by the splitting within a detection time is assumed to be of order $|D|^2 \ll 1$, whence the simultaneous arrival of entangled photons at Alice and Bob happens for a small fraction of laser pulses. However, the detection of the photons, conditional on the photon production, is assumed to occur with a probability $\eta$ of order unity (say $\eta \approx 0.7$). The probability $\eta$ is named the detection efficiency.
In our approach, the probability of photocounts by Alice or Bob does not factorize that way. Furthermore, the concept of photon does not appear at all, but there are continuous fluctuating fields including a real ZPF arriving at the detectors, which are activated when the radiation intensity is big enough.

Understanding Entanglement
The strong correlation exhibited by the comparison of Equations (69) and (77) is a consequence of the phenomenon of entanglement, and it is labeled strange from a classical point of view. In our stochastic interpretation, it is due to the fact that the signal field $D a_i^*$ produced in the crystal is correlated with the ZPF field $a_i$ that had entered the crystal, see Equation (64); similarly for the correlation between the signal $D a_s^*$ and the ZPF field $a_s$. That is, the strong correlation appears because the same normal modes of the radiation appear in both fields, $E_A$ and $E_B$, that go to Alice and Bob, respectively. Now I shall stress the relevance of the vacuum fluctuations in order to understand the difference between "classical correlation" and "entanglement". In the evaluation of the averages in Equation (76), we have taken the distribution of field amplitudes Equation (20) into account, giving relations typical of a Gaussian distribution of the amplitudes. Now let us assume that we had used, instead of Equation (20), a sure (i.e., non-fluctuating) distribution, e.g.,
$W(a_j) = \delta^2(a_j - a_j^0), \qquad |a_j^0|^2 = \tfrac12,$
$\delta^2$ being a two-dimensional Dirac delta. In this case, we would have obtained $\langle |a_j|^4 \rangle = \tfrac14$ instead of $\tfrac12$, and the result for the coincidence probability would have been
$P_{AB} = O(|D|^4) \simeq 0$
(or $P_{AB} = 0$ if we worked to order $|D|^2$). Equation (81) would mean that there was no correlation between Alice's and Bob's detections. In contrast, a strong positive correlation is obtained if we take the fluctuations into account. This happens when the field is assumed Gaussian, which leads to a stronger correlation, as may be realized by comparing Equation (78) with (80), the former leading to Equation (77) and the latter to (81). We conclude that the strong positive correlation associated with entanglement requires that the fluctuations are correlated. That is, the high probability of coincidence detection requires a strong positive correlation between fluctuations of the fields arriving at Alice and Bob, respectively.
This leads to a physical (realistic) interpretation as follows: entanglement is a correlation between fluctuations of fields in distant places. In our example, the correlation of fluctuations involves the vacuum fields and might be labeled entanglement between a signal and the vacuum [39], see Equation (63).
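The role of the fluctuations can be made explicit by repeating the previous intensity calculation with a non-fluctuating ("sure") amplitude of the same mean intensity. A sketch under the same assumptions as before (hypothetical parameter values; the sure distribution fixes $|a|^2 = 1/2$ and randomizes only the phase):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 1_000_000, 0.1

def coincidence(sample):
    # ZPF-subtracted coincidence probability, Eq. (72), for a given
    # distribution of the vacuum amplitudes
    a_s, a_i = sample(), sample()
    ph_A = np.exp(1j * rng.uniform(0, 2*np.pi, N))   # uncorrelated signal phases
    ph_B = np.exp(1j * rng.uniform(0, 2*np.pi, N))
    IA  = np.abs(a_s + D * ph_A * np.conj(a_i))**2
    IB  = np.abs(a_i + D * ph_B * np.conj(a_s))**2
    IA0, IB0 = np.abs(a_s)**2, np.abs(a_i)**2
    return (np.mean(IA * IB) - np.mean(IA0 * IB0)
            - IA.mean() * IB0.mean() - IA0.mean() * IB.mean()
            + 2 * IA0.mean() * IB0.mean())

gaussian = lambda: rng.normal(0, 0.5, N) + 1j * rng.normal(0, 0.5, N)
sure     = lambda: np.exp(1j * rng.uniform(0, 2*np.pi, N)) / np.sqrt(2)  # fixed |a|^2 = 1/2

cg, cs = coincidence(gaussian), coincidence(sure)
print(cg, 0.5 * D**2)  # Gaussian fluctuations: ≈ |D|^2/2, the entangled-like correlation
print(cs)              # sure intensity: ≈ 0, the correlation disappears
```

With the same mean intensity, only the fluctuating (Gaussian) vacuum reproduces the correlation, illustrating that the entanglement-like correlation resides in the fluctuations.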

The Violation of Bell Inequalities
The interpretation of SPDC experiments in terms of stochastic processes, including the vacuum fields, allows local models violating a Bell inequality. This contradicts the wide consensus that Bell's is the unique local realistic formalism appropriate for experiments measuring correlations between distant parties. Our proof consists of exhibiting a local model for an experiment leading to predictions that violate a Bell inequality. In the construction of the model, we are free to fix the fields produced in the source, but then we should obtain the predictions using classical laws, determining the correlations as in Equations (66), (67), (70) and (71). I propose a model where the fields produced in the source correspond to vectors as in Equation (82), where $H$ and $V$ are unit vectors in two perpendicular directions, say horizontal and vertical. Now I assume that $E_1$ goes to Alice, who possesses a polarization analyzer at an angle $\theta$ to the horizontal in front of her detector. Then the field arriving at the detector will have a component in the direction $\theta$, given by Equation (83), plus some amount of ZPF. I also assume that Bob has a polarization analyzer at an angle $\phi$ to the horizontal in front of his detector; hence the field arriving at his detector will be given by Equation (84), plus some amount of ZPF. We also need the signal fields that would arrive at Alice's and Bob's detectors if the pumping laser were off, which may be easily derived from Equations (83) and (84) putting $D = 0$, that is, Equations (85) and (86). We shall also define the fields carrying the signals by the difference, Equation (87). The intensities will be the squared moduli of the corresponding fields, Equations (83) to (87). Hence it is easy to get the single detection probability by Alice, $P_A = \tfrac12 |D|^2$, and a similar result for Bob, i.e., $P_B = \tfrac12 |D|^2$. The calculation of the coincidence probability is more involved, although still straightforward.
We shall use Equation (72), which we rewrite for convenience as Equation (88), where from Equation (85) it is trivial to obtain $\langle I_{A0} \rangle = \langle I_{B0} \rangle = \tfrac12$. In order to obtain the expectation $\langle I_A I_B \rangle$, it is convenient to define partial intensities as follows,
$I_A = I_{A0} + I_{A1} + I_{A2}, \qquad I_B = I_{B0} + I_{B1} + I_{B2}.$
Then the desired expectation is
$\langle I_A I_B \rangle = \langle (I_{A0} + I_{A1} + I_{A2})(I_{B0} + I_{B1} + I_{B2}) \rangle,$
consisting of nine terms. One of them will be cancelled by the term $\langle I_{A0} I_{B0} \rangle$ of Equation (88), the terms $\langle I_{A1}(I_{B0} + I_{B2}) \rangle$ and $\langle (I_{A0} + I_{A2}) I_{B1} \rangle$ will not contribute taking Equation (53) into account, and $\langle I_{A2} I_{B2} \rangle$ is of order $|D|^4$, therefore negligible. Thus, to order $|D|^2$, we have
$\langle I_A I_B \rangle - \langle I_{A0} I_{B0} \rangle = \langle I_{A0} I_{B2} \rangle + \langle I_{A2} I_{B0} \rangle + \langle I_{A1} I_{B1} \rangle.$
In this form, it is easy to evaluate the needed expectations,
$\langle a_s^2 a_s^{*2} \rangle = \langle a_i^2 a_i^{*2} \rangle = \tfrac12, \qquad \langle a_s^* a_s\, a_i a_i^* \rangle = \tfrac14,$
whence Equation (92) gives
$\langle I_{A0} I_{B2} \rangle = |D|^2 \left[ \tfrac12 \left( \cos^2\theta \cos^2\phi + \sin^2\theta \sin^2\phi \right) + \tfrac14 \left( \cos^2\theta \sin^2\phi + \sin^2\theta \cos^2\phi \right) + \tfrac12 \cos\theta \sin\theta \cos\phi \sin\phi \right] = |D|^2 \left[ \tfrac14 + \tfrac14 \left( \cos\theta \cos\phi + \sin\theta \sin\phi \right)^2 \right].$
The term $\langle I_{A2} I_{B0} \rangle$ leads to the same result, and the term $\langle I_{A1} I_{B1} \rangle$ does not contribute, for the same reason as the last term of Equation (76), as commented in the paragraph after Equation (77). From Equations (88)-(93) we finally obtain
$P_{AB} = \tfrac12 |D|^2 \cos^2(\theta - \phi). \quad (94)$
The predictions of our model may violate the Bell inequality Equation (56). In fact, we may consider an experiment where Alice measures with her detector when the polarizer is put at angles $\theta_1$ or $\theta_2$ and, similarly, Bob at angles $\phi_1$ or $\phi_2$. The predicted probability of a single count by either is $\tfrac12 |D|^2$. The coincidence probability is given by Equation (94), and the violation of the Bell inequality is produced if the angles are chosen in the experiment such that $|\theta_1 - \phi_1| = |\theta_2 - \phi_1| = |\theta_2 - \phi_2| = \pi/8$ while $|\theta_1 - \phi_2| = 3\pi/8$ (e.g., $\theta_1 = 3\pi/8$, $\theta_2 = \pi/8$, $\phi_1 = \pi/4$, $\phi_2 = 0$). Inserted in Equation (56), this gives $\tfrac12 (1 + \sqrt2)\, |D|^2 \approx 1.21\, |D|^2 > |D|^2$, which violates the Bell inequality. That is, our model agrees with both the standard quantum predictions and a local realistic view of nature. We conclude that Bell inequalities are not necessary conditions for local realism, contrary to the current wisdom.
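The violation can be checked arithmetically. A minimal sketch (the value of $|D|^2$ is hypothetical, and the angle set is one choice realizing the differences quoted above) evaluating the model's predictions in the Clauser-Horne combination of Equation (56):

```python
import numpy as np

D2 = 0.01  # |D|^2, a hypothetical small value

def P_single():
    return 0.5 * D2                               # model prediction P_A = P_B

def P_coinc(theta, phi):
    return 0.5 * D2 * np.cos(theta - phi)**2      # Eq. (94), as reconstructed

# one angle choice: three differences of pi/8, one of 3*pi/8
t1, t2, p1, p2 = 3*np.pi/8, np.pi/8, np.pi/4, 0.0

lhs = P_coinc(t1, p1) - P_coinc(t1, p2) + P_coinc(t2, p1) + P_coinc(t2, p2)
rhs = P_single() + P_single()                     # the bound P(a2) + P(b1)
print(lhs, rhs)  # lhs = (1 + sqrt(2))|D|^2/2 ≈ 1.21 |D|^2 > |D|^2: violation
```

The left-hand side equals $(1 + \sqrt2)|D|^2/2$, exceeding the common-cause bound $|D|^2$, even though the model is constructed from local stochastic fields.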

Quantum States
A big difficulty for a realistic interpretation of quantum theory is that the concepts of state and measurement have been highly idealized. This has led to attempts to achieve a picture of the quantum world under the (implicit) assumption that preparations and measurements are simple processes because their mathematical representations in the HS formalism are simple, that is, vectors and self-adjoint operators. However, the physical processes involved are not simple.
Let us study the question of what is a quantum state. In classical physics, the concept of state is certainly simple, it rests on the concept of isolation for either a particle or a wave or any combination of them. However, it is a common view that neither the concept of a particle nor the concept of a wave may be transferred to quantum physics. Thus, the standard answer to the question whether the electron is a particle or a wave is neither.
The answer involves a contradiction: anything is either localized (particle) or extended (wave), of course with respect to some reference size, say for an electron compared with an atom. I believe that a more correct answer is that the electron is both. In fact, an electron cannot be seen as an isolated point particle. The physical electron corresponds to a cloud of interacting electrons and positrons, electromagnetic radiation and other fields with a mass m and net charge e. The cloud may have a size possibly as large as the Compton wavelength.
This statement may be put in a different form as follows. The vacuum consists of a set of real fluctuating fields that are modified by the presence of an electron. In this paper, we claim that the fields are real, in contrast with the common opinion that they are virtual (I believe that virtual is a word without any clear meaning, used in order to avoid commitment to either the assertion that the fields are real or the assertion that they are not). In summary, in contrast with the classical domain, particles like electrons cannot be seen as having states defined in as simple a manner as in classical mechanics.
We may ask what the physical interpretation of the state is in a more complex system like an atom. It is not just a system of Z + 1 point (or small) particles, that is, the nucleus plus Z electrons. In the study of the atom, the nucleus might perhaps be treated as a particle localized in a region far smaller than the atom, but this is not the case for the electrons. What exists is a large number of electrons and positrons that are created (maybe with emission of radiation) or annihilated (with absorption of radiation) in pairs, with conservation of the total electric charge, that is, Ze. Many other quantum fields are likely involved that may correspond to modifications of the vacuum. In summary, I believe that the quantum state of any physical system is a quite complex structure consisting of many interacting fields evolving in time.
Sometimes it is argued that in a nonrelativistic treatment, the possible creation or annihilation of electron-positron pairs should not be taken into account because the energies required are far larger than typical atomic energies. However, the argument is flawed. In classical electrodynamics, the total mass-energy of, say, two electrons plus a positron at extremely small distances need not be greater than the mass of a single electron, due to a possibly strong negative electrostatic energy of interaction. In summary, the internal structure of quantum systems like atoms should always be treated taking many (relativistic) quantum fields of the vacuum into account. Of course, this is actually accepted by most people when it is recognized that in renormalization calculations the bare mass or charge are quite different from the physical ones. The simple change from bare to physical quantities abridges a complicated phenomenon, but the quantum formalism has the virtue that quite complex structures like atoms may be treated using simple equations like Schrödinger's. That equation is just a (fairly good) approximation. In this respect, my view is quite different from the common one. I do not believe that quantum equations are exact when we ignore the interaction with the vacuum fields, with corrections appearing when the interaction is switched on. Interactions with vacuum fields are not small corrections; they are precisely the cause of the difference between classical and quantum physics. Indeed, classical physics is obtained from quantum physics when h → 0, but the Planck constant h is just the parameter that fixes the scale of the vacuum fields (see Section 3), so that putting h = 0 means ignoring the vacuum fields.
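The classical-electrodynamics point above can be illustrated numerically. The sketch below (my own illustration, not from the original text) asks at what separation the Coulomb interaction energy between two unit charges reaches the electron rest energy m_e c²; below that scale, interaction energies are no longer negligible compared with rest masses, so pair creation and annihilation cannot be dismissed on energetic grounds alone:

```python
import math

# Separation r at which the Coulomb energy e^2 / (4*pi*eps0*r) equals
# the electron rest energy m_e*c^2: this is the classical electron radius.
e = 1.602176634e-19      # elementary charge, C (exact in SI since 2019)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA)
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s (exact)

rest_energy = m_e * c**2                            # ~0.511 MeV
r_cl = e**2 / (4 * math.pi * eps0 * rest_energy)

print(f"rest energy: {rest_energy:.3e} J")
print(f"classical electron radius: {r_cl:.3e} m")   # ~2.82e-15 m
```

At separations of order 10^-15 m the (negative) interaction energy is thus comparable to a full electron mass, which is the sense in which a configuration of two electrons plus a positron at small distances need not outweigh a single electron.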
It is remarkable that quantum theory may be formulated using simple mathematical objects (i.e., vectors and operators in a Hilbert space) and relations between them in order to describe very complex phenomena.

Measurements
Measurements have been still more idealized than states in standard books or papers on quantum mechanics. My opinions on this subject fully agree with Einstein's. I quote him: "You must appreciate that observation is a very complicated process. The phenomenon under observation produces certain events in our measuring apparatus. As a result, further processes take place in the apparatus, which eventually and by complicated paths produce sense impressions and help us to fix the effects in our consciousness. Along this whole path - from the phenomenon to its fixation in our consciousness - we must be able to tell how nature functions, must know the natural laws at least in practical terms, before we can claim to have observed anything at all. Only theory, that is, knowledge of natural laws, enables us to deduce the underlying phenomena from our sense impressions. When we claim that we can observe something new, we ought really to be saying that, although we are about to formulate new natural laws that do not agree with the old ones, we nevertheless assume that the existing laws - covering the whole path from the phenomenon to our consciousness - function in such a way that we can rely upon them and hence speak of observations" [8].
In summary, a realistic interpretation of quantum theory cannot be achieved by attempting to interpret the (Hilbert space) formalism directly. That formalism is a simple, although extremely efficient, algorithm for calculating relevant predictions for the results of experiments. In some cases, alternative formalisms may be better for obtaining a physical picture of phenomena, even if they are less efficient for calculations. In particular, for the radiation field, the Weyl-Wigner formalism is superior to the Hilbert space formalism in this respect.