Local Scale Covariance and Its Radical Implications for Cosmology

Abstract: Local scale covariance posits that no privileged length scales should appear in the fundamental equations of local, Minkowskian physics (why should nature have scale preferences but not position preferences?), yet such scales clearly do appear. A resolution is proposed wherein scale covariance is promoted to the status of Poincaré covariance, and privileged scales emerge as a result of 'scale clustering', similarly to the way privileged positions emerge in a translation-covariant theory. The implied ability of particles to 'move in scale' has recently been shown by the author to offer a possible elegant solution to the missing-matter problem. For cosmology, the implications are: (a) a novel component of the cosmological redshift, due to scale-motion over cosmological times; (b) a radically different scenario for the early universe, during which the conditions for such scale clustering are absent. The former implication is quantitatively analyzed, resulting in a unique cosmological model which empirically coincides with standard Einstein–de Sitter cosmology only in some non-physical limit. The latter is qualitatively discussed as part of a critique of the conceptual foundations of ΛCDM, which ignores scale covariance altogether.


Introduction
For over a millennium, astronomers agreed on an observationally sound cosmological model founded on the premise that humans are at the center of the universe. They have since become much more modest, elevating translation covariance to a fundamental principle of physics, but until scale covariance is treated on an equal footing, the original sin of anthropocentrism will not be fully atoned for. Indeed, physics currently suffers from a manifest anthropocentric bias wherein the very small and the very large are described using completely different, incompatible languages (quantum vs. classical), with 'small' and 'large' defined relative to the scale of some (not particularly impressive) biological species.
Scale covariance is just as natural a symmetry as translation covariance: a fundamental description of nature should not include a privileged length scale, just as it should not include a privileged position; both ought to appear as attributes of specific solutions rather than of the equations. This, however, is not merely a philosophically appealing principle. Speculating, on the basis of astronomical measurements, about what is happening elsewhere in the vast cosmos is pointless without the assumption of translation covariance. Equally pointless is any attempt at extrapolating terrestrial physics to scales much larger or smaller than our native scale, unless our description of nature is scale-covariant. Take Maxwell's equations in vacuum as an example. All Maxwell knew were certain relations between the electric and magnetic fields, experimentally deduced by Faraday on scales of a few centimeters. Extrapolating those to the scale at which light allegedly propagates was sound only because Maxwell's equations are scale-free. This, however, is not the case with all physical theories ever proposed to describe matter: Schrödinger's equation, Dirac's, and the Standard Model are not scale-covariant in any reasonable sense of the term, let alone the precise sense defined in the next section.

Scale Covariance and Its Conflict with the Equivalence Principle
Scale covariance is a symmetry of a physical theory in Minkowski spacetime (not to be confused with Weyl invariance). For concreteness, classical electrodynamics (CE) will serve as an archetype of a scale-covariant physical theory (there are good reasons to believe that this choice is more than a toy model; see Section 4). By CE, we mean a concrete realization of its two obvious pillars, referred to as the basic tenets of CE, which are: Maxwell's equations in the presence of a conserved¹ source due to all particles (labeled by a), with F^µν = ∂^µ A^ν − ∂^ν A^µ the antisymmetric Faraday tensor, and a local 'Lorentz force equation' with T_(a) the symmetric 'matter' e-m tensor associated with particle a. Defining the canonical tensor Θ, we obtain from (1) and (4) Poynting's theorem. Summing (3) over a and adding to (5), we obtain local energy-momentum (e-m) conservation,

P := Θ + Σ_a T_(a)  ⇒  ∂_ν P^νµ = 0 ,  (6)

and, purely by the symmetry and conservation of P^νµ, also generalized angular momentum conservation. A concrete realization of the basic tenets of CE is scale-covariant if the scale transformation (8) is a symmetry² for any λ > 0, with every parameter of the realization left unchanged. The exponent of λ is referred to as the scaling dimension of a density which, by definition, is 0 for the parameters of the realization. For λ = −1, (8) is just the CPT transformation; extending (8) to all λ ≠ 0 therefore combines the two symmetries. When shifting to a different scale, the electric charge of a particle, whether elementary or composite, does not change, by virtue of the scale invariance of electric charge, Q = ∫d³x j⁰. In contrast, the particle's magnetic dipole moment, µ_i = ½ ∫d³x ε_ijk x_j j_k, has scaling dimension 1 and is hence scale-dependent.
If, further, the particle is sufficiently isolated then, since the electromagnetic field in its neighborhood is dominated by its own electric current, one can associate the global e-m tensor P (6) in that neighborhood with the particle (or particles, in the case of a composite), referring to it as P_(a). The particle's self-energy (or mass), ∫d³x P_(a)^00, incorporating also the electromagnetic self-energy (which must be a finite quantity in a well-defined CE), therefore has scaling dimension −1, while its three-dimensional angular momentum, J_i = ∫d³x ε_ijk x_j P_(a)^0k, is scale-invariant. Note that these quantities' scaling dimensions coincide with their familiar length dimensions, thus giving the latter a meaning which is independent of the concept of a 'system of units'.
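The scaling-dimension bookkeeping above can be illustrated numerically. Below is a minimal sketch (the Gaussian charge/current profile is purely illustrative, not from the paper): a charge/current density of scaling dimension −3 leaves the total charge invariant, while the magnetic dipole moment picks up one power of λ.

```python
import numpy as np

# Toy smooth 'particle': a Gaussian charge density j0 and a circulating
# current (jx, jy) producing a magnetic dipole along z. Illustrative only.
def j0(x, y, z):
    return np.exp(-(x**2 + y**2 + z**2))

def jx(x, y, z):
    return -y * np.exp(-(x**2 + y**2 + z**2))

def jy(x, y, z):
    return x * np.exp(-(x**2 + y**2 + z**2))

def scaled(f, lam):
    # A density of scaling dimension -3: f_lam(x) = lam**-3 * f(x/lam)
    return lambda x, y, z: lam**-3 * f(x / lam, y / lam, z / lam)

n = 80
axis = np.linspace(-8.0, 8.0, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
dV = (axis[1] - axis[0])**3

def integrate(f):
    return f(X, Y, Z).sum() * dV

lam = 2.0
Q1 = integrate(j0)                       # Q = ∫ d³x j⁰
Q2 = integrate(scaled(j0, lam))
mu1 = 0.5 * integrate(lambda x, y, z: x * jy(x, y, z) - y * jx(x, y, z))
mu2 = 0.5 * integrate(lambda x, y, z: x * scaled(jy, lam)(x, y, z)
                      - y * scaled(jx, lam)(x, y, z))

print(Q2 / Q1)    # ~1:   charge is scale-invariant (scaling dimension 0)
print(mu2 / mu1)  # ~lam: dipole moment has scaling dimension 1
```

The substitution x → x/λ in the integrals reproduces the analytic statement exactly; the grid quadrature only introduces a tiny numerical error.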

The Zero Point Field and Broken Scale Covariance
The reason why most physicists do not consider the lack of scale covariance a problem is, probably, that nature does appear to single out privileged scales. All Hydrogen atoms around us, for example, have a common Bohr radius. However, all such atoms also have the same position (on cosmological scales), and this does not imply translation covariance is not a symmetry of the equations describing Hydrogen atoms. It does mean, nonetheless, that for scale covariance to be compatible with observations, some dynamical process must exist in the universe, causing all Hydrogen atoms to 'cluster in scale', similarly to the way gravity induces spatial clustering.
Continuing the above analogy, for matter to cluster in space, its position must be able to change. Likewise, for it to cluster in scale, its scale, viz., λ in (8), must be able to change. As such a 'motion in scale' must still respect local e-m conservation, and given the scaling dimension, −1, of self-energy, a particle moving towards larger scales must exhibit a net imbalance between its advanced and retarded self-fields, favoring the latter. This is a direct consequence of applying Stokes' theorem to e-m conservation (6) (see, e.g., Equation (100) in [2]). Likewise, for a particle moving towards smaller scales, the imbalance must favor its advanced field³. In [2] (Section 3.3), it is shown why such a time-dependent, particle-specific 'selection' of radiation field is not only consistent with the basic tenets, but even appears to be mandated by them. Note the contrast with some Brans–Dicke-type or Machian theories [10-12], which can also be interpreted as implying 'motion in scale' of some sort. Those introduce a new 'cosmic field', having no convincing observational justification, which uniformly scales the masses of all particles (equivalently, the gravitational constant), i.e., parameters of a scale-non-covariant theory.
The total electromagnetic field at a point x in empty spacetime, containing both advanced and retarded components of such scale-moving particles lying on the lightcone of x, will be referred to as the zero-point field (ZPF) at x, a name borrowed from Stochastic Electrodynamics (SED) [13], although it does not represent the very same entity. In particular, our ZPF is not some global homogeneous solution of Maxwell's equations. Instead, it is a byproduct of different particles exchanging e-m in the spirit of action-at-a-distance electrodynamics [14,15].
Scale clustering and the nature of the ZPF are both discussed in greater detail in [9], but for the purpose of the current paper, the following natural assumption suffices: All stable particles in the universe are in some state of (mutual) 'energetic equilibrium' such that their time-averaged self-energy is constant, in the usual empirical sense, to be made more precise in Section 3.1. This entails the appearance, in the equations of physics, of parameters of nonzero scaling dimension, which are the exact counterparts of some fixed coordinate value, x_0, appearing in the equations of a 'spontaneously broken' translation-covariant theory, viz., in an effective description incorporating the specific solution responsible for x_0 (such as the c.o.m. of the earth–moon system in the equations describing the tide). Furthermore, the result of any calculation associated with a measurement must depend neither on the value of x_0 nor on the values of such scale non-scalars. Constants which are scale-scalars, on the other hand, are the counterparts of the fixed coefficients defining the equations.
Notable scale-scalars are c, ħ, and e, but different such triplets define empirically distinct physics only if their respective fine-structure constants, α ≡ e²/(ħc), are distinct. Besides particle masses, of scaling dimension −1, the relevant non-scalars arise in linearized gravity. Now, gravity in general and, strictly speaking, also linearized gravity, is by no means a Minkowskian theory on an equal footing with CE. However, for the purpose of the current paper, the two can be partially merged by writing CE in a generally covariant form and retaining only the leading order in the perturbation of the Minkowskian metric. The standard result entails the appearance of two non-scalars in linearized Einstein's equations, seen as a spin-2 Minkowskian theory: G_P and Λ_P, of scaling dimensions 2 and −2, respectively (the meaning of 'P', standing for 'Proper', will transpire in the sequel), when the metric perturbation is treated as a scale-scalar. We shall see that those will indeed not appear in the final result of our calculations.

The Slim Equivalence Principle
Using CE to express matter in cosmology requires first that it be consistently fused with GR. This is done by expressing flat-spacetime CE in an arbitrary coordinate system via the use of a 'metric', g_µν, and supplementing the resulting equations with Einstein's field equations (9) (see [9] for this different take on general covariance), with P the generally covariant CE e-m tensor and R the expression for the Ricci tensor in terms of the metric, g_µν (signature −2), and its derivatives. Note that G and Λ in (9) are just a couple of (dimensionless) numbers (in the case of ECD, G can be set to 1 without loss of generality, as it can be absorbed into rescaled ECD fields and constants). In contrast, their Minkowskian counterparts, G_P and Λ_P, being scale non-scalars, cannot have definite values; their values in a given system of units are to be read from a CE solution describing a gravitating system and the relevant gauges.
The basic tenets (1)-(3) become their obvious generally covariant extensions. In particular, by the antisymmetry of F, Maxwell's equations simplify to

∂_µ (√g F^µν) = √g j^ν ,  (10)

while covariant e-m conservation reads

∂_µ (√g P^µν) + √g Γ^ν_µλ P^µλ = 0 ,  (11)

with g := |det g_µν|. From (10) and the antisymmetry of F^µν, one obtains ∂_µ(√g j^µ) = 0 as a consistency condition, generalizing (2). Using the same construction as in appendix D of [2], one can then show that if a coordinate system exists for which g_µν is slowly varying over the extent of the particle, then (11) implies that the path of the 'center of the particle', γ^µ(s) (given a clear meaning there), is described by the geodesic equation

γ̈^µ + Γ^µ_αβ γ̇^α γ̇^β = 0 ,  (12)

with 'dot' standing for the derivative with respect to any parametrization, s, of γ. From (12), we see that γ̇² := g_µν γ̇^µ γ̇^ν = const along γ, from which follows ds ∝ (g_µν dγ^µ dγ^ν)^1/2, the proper-time differential. Einstein's equivalence principle states that, around any spacetime point, x, a local coordinate system, y, exists for which g_µν|_x = η_µν and ∂_α g_µν|_x = 0. The geodesic equation at x, expressed in the y coordinates, then takes the form γ̈^µ = 0. It follows that, to the extent that the above special form of the geodesic equation implies free fall, and insofar as the second derivatives of the metric are negligible, the Minkowskian form of CE locally holds true in a freely falling frame. We shall dub this simple consequence of the tensorial nature of g_µν and its Minkowskian signature the slim equivalence principle.
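The consistency condition above follows from antisymmetry alone. A symbolic sketch (assuming only the generally covariant Maxwell equation in the form ∂_µ(√g F^µν) = √g j^ν, with √g standing for any smooth weight factor):

```python
import sympy as sp

# Symbolic check: for ANY antisymmetric F^{mu nu} and ANY weight s = sqrt(g),
# the covariant Maxwell equation  d_mu(s F^{mu nu}) = s j^nu  forces
# d_nu(s j^nu) = 0, purely by antisymmetry of F and symmetry of mixed partials.

coords = sp.symbols('x0 x1 x2 x3')
s = sp.Function('s')(*coords)                 # stands for sqrt(g), arbitrary

F = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(mu + 1, 4):
        f = sp.Function(f'F{mu}{nu}')(*coords)
        F[mu, nu], F[nu, mu] = f, -f          # arbitrary antisymmetric field

# Read off s*j^nu from Maxwell's equation, then take its divergence:
sj = [sum(sp.diff(s * F[mu, nu], coords[mu]) for mu in range(4))
      for nu in range(4)]
div = sp.simplify(sum(sp.diff(sj[nu], coords[nu]) for nu in range(4)))
print(div)   # 0
```

The (µ,ν) and (ν,µ) terms cancel pairwise, so the divergence vanishes identically, with no assumption on the metric beyond smoothness.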
However, Einstein went a step beyond the slim equivalence principle. Mesmerized by the mathematical similarity between GR and Riemannian geometry, he endowed g_µν, or rather the coordinate-invariant interval ds := (g_µν dx^µ dx^ν)^1/2, with a selective metrical role, involving space/time measurements⁴. This addition conflicts with CE's scale covariance, which prohibits assigning a priori metrical meaning to coordinate intervals. We are therefore forced to use directly the definition of a measurement: the result of any measurement is some dimensionless number extracted solely from the e-m tensor. For example, a standard length gauge is represented by some compact region in space, occupied by a relatively higher energy density. The length of an elongated object in standard length units, likewise occupying some compact region in space, is just the number of standard gauges exactly fitting the object, and similarly with the object's other physical attributes, such as mass, luminosity, etc. We shall be very explicit about this procedure in Section 3.1.

The Friedmann Model for a Scale-Covariant-CE Universe
The overall framework used in this section is the so-called Friedmann model, i.e., we seek a solution (13) of EFE (9) for some fictitious metric g̃ representing a maximally symmetric space; for the time being, we restrict ourselves to the flat-space case, k = 0, in which g̃_µν = a²(τ) η_µν (14). On the r.h.s. of (13) is a similarly fictitious, x^i-independent e-m tensor P̃ which, in turn, is assumed to represent the coarse-grained matter distribution (15), with K a normalized kernel. The time x⁰ in (14), denoted also by τ, is known as the conformal time, and the reader is warned to keep that in mind, as the standard equations of cosmology will take a somewhat unusual form. The 'tilde pseudo-tensors' g̃ and P̃ are therefore involved in dynamical changes on cosmological time scales, while the fluctuations around them are only relevant on much smaller scales. The three-tensor P̃^ij must be invariant under rotations so as to respect the isotropy of space. For the same reason, the vector P̃^0i = P̃^i0 must vanish, leaving us with (16), viz., P̃^00 = ρ and P̃^ij = p δ^ij, with ρ and p arbitrary functions of (conformal) time alone. Note that both are merely components of the e-m tensor at this stage, whose relation to concrete observations will have to await Section 3.2, a complication entailed by the slimness of our equivalence principle. The pseudo-tensor P̃ incorporates two distinct contributions: an electromagnetic one, due to the ZPF, and one from matter, i.e., from regions of non-vanishing mechanical e-m T. In the case of the ZPF, the tracelessness of Θ^µν and Θ^00 ≥ 0 necessitate

p_ZPF = ρ_ZPF / 3 ,  ρ_ZPF ≥ 0 .  (17)

Inside matter itself, we have also a contribution from T, spoiling the tracelessness of P. For non-relativistic matter, the usual form (18), p_matter ≈ 0, shall be used, but it is instructive to re-derive (18) in our unusual context: multiplying e-m conservation (6) by y^i and integrating by parts over a region in three-space, we get the identity (19). By pulling ∂_0 out of the integral on the r.h.s.
of (19), 'equation-of-state' (18) follows when P^0i is the momentum density of a collection of non-relativistic, 'on-shell', and (at most) locally interacting particles, in which case a particle's momentum is just its mass times its c.o.m. velocity. The r.h.s. of (19) is then just twice the kinetic energy of all particles (and always less than ρ_matter). In the early universe, we shall later argue, this is no longer the case. The ρ and p in matter-dominated regions must then be calculated separately for the electromagnetic component of P_(a) and for T_(a), with a result which, in general, could differ from (18). Given the large uncertainty in the amount of matter in the universe, it is even conceivable that some such out-of-scale-equilibrium/off-shell matter is also present in the current epoch of the universe, possibly rendering a cosmological constant redundant. This appealing possibility, however, will not be explored in the current paper.
A mixture of ZPF and ordinary, non-relativistic matter, with a ratio ϵ := ρ_matter/ρ_ZPF ≥ 0, is therefore represented by an 'equation-of-state' for P̃ (16),

p_mix = ρ_mix / [3(1 + ϵ)] .  (20)

We shall later see that ϵ must be non-trivially time-dependent. To leading order in the fluctuations, covariant e-m conservation (11) implies the same equation for P̃: g̃^-1/2 ∂_µ(g̃^1/2 P̃^µν) + Γ̃^ν_µλ P̃^µλ = 0, with g̃^1/2 := |det g̃_µν|^1/2 = a⁴ and Γ̃ the Christoffel symbol derived from the metric (14). This implies

dρ/dτ + (ȧ/a)(5ρ + 3p) = 0  (21)

in general, and in particular for ρ_mix and p_mix (but not individually for the ZPF or matter components thereof). If we wish to include an electromagnetic contribution, ρ_rad, of origin other than the ZPF, which is not interacting (significantly) with matter (coming from, say, the CMB, whose associated electromagnetic field can be assumed to superpose incoherently with that of the ZPF), then its equation of state, (17), and (21) readily integrate⁵ to ρ_rad ∝ a⁻⁶. Finally, Equation (13) for g̃_µν provides the first Friedmann equation⁶ (22) for a(τ), with the subscript 'A' standing for current (Astronomer's) values.
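The a⁻⁶ scaling can be verified numerically. A minimal sketch, assuming the conservation law takes the conformal-time form dρ/dτ = −(ȧ/a)(5ρ + 3p) (as follows from √g̃ = a⁴ and the Christoffel symbols of g̃ = a²η), with the ZPF equation of state p = ρ/3 and an arbitrary toy expansion history a(τ):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conformal-time conservation law for the coarse-grained e-m tensor,
# reconstructed from sqrt(g~) = a^4 and the Christoffels of g~ = a^2*eta:
#   drho/dtau = -(a'/a) * (5*rho + 3*p)
# For the ZPF equation of state p = rho/3 this should give rho ∝ a^-6.

def a(tau):                 # arbitrary smooth expansion history (toy choice)
    return 1.0 + 0.5 * tau

def a_dot(tau):
    return 0.5

def rhs(tau, rho):
    p = rho / 3.0           # ZPF: traceless, p = rho/3
    return -(a_dot(tau) / a(tau)) * (5.0 * rho + 3.0 * p)

sol = solve_ivp(rhs, (0.0, 3.0), [1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

taus = np.linspace(0.0, 3.0, 7)
invariant = sol.sol(taus)[0] * a(taus)**6    # should stay constant
print(invariant)
```

The printed combination ρ a⁶ stays constant to within the integrator tolerance, confirming ρ_rad ∝ a⁻⁶ for this equation of state; a dust-like p = 0 component would instead give ρ ∝ a⁻⁵ under the same law.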

The Cosmological Redshift
Matter in the universe, even if contributing negligibly to the coarse-grained e-m tensor P̃, is indispensable in two complementary senses. First, without matter, there are no astronomical objects and no equipment with which to observe them, the latter being mandated by the slimness of our equivalence principle. Second, without matter, there is no ZPF; from the discussion in Section 2.1, it follows that matter and the ZPF are just different facets of the same physical entity.
We shall represent (the centers of) particles in the universe by a collection of worldlines, γ(s), which is compatible with the time-independent homogeneity ofP. Such world-lines must be those of comoving particles, viz., have the form γ ı = const, so as to respect the above compatibility condition at any time. By virtue of Γ ı 00 derived from the metric (14) vanishing, those are indeed the world-lines of 'freely falling' particles. Mach's vague principle is thereby given a concrete meaning, as the world-line of a fixed spatial coordinates triplet, belonging to a local frame which is rotating relative to the local comoving frame, will no longer solve the geodesic Equation (12).
Next, we wish to investigate the observational consequences of a gradual change in the intensity of the ZPF over cosmological time scales, a consequence of (21) and (22). The main challenge we face is the need to give meaning to a comparison of properties of matter at distinct conformal times, without resorting to the 'universal gauges' at every point in the universe implicit in the (full) equivalence principle. Instrumental to our analysis is the following postulate of scale equilibrium: for any given particle species, elementary or composite, there exists a fixed ratio between the (average) energy density of the particle at x and ρ_ZPF(x⁰). By the symmetry of a Friedmann model and the assumption of scale clustering (Section 2.1), within each constant-τ slice, all such particles should have a common density. However, that the above ratio is τ-independent, viz., that the 'average vacuum energy density' sets the standard for energy density⁷, is more subtle, and although it can be motivated further, we only expect it to emerge as an approximation in a rigorous analysis, incorporating also the fluctuations in the density of matter and its phase transitions over cosmological time scales.
As a first example, we analyze a primary observable in cosmology, known as the luminosity distance of an isotropically radiating astronomical object,

d_L := [ L / (4π F_P) ]^1/2 .  (23)

Above, F_P, referred to as the proper flux, is the measured energy-flux in terms of some energy-flux standard, as determined by an astronomer with (fixed) coordinates x_A (spatial part x_A) at conformal time x⁰_A ≡ τ_A, and L is the object's total power, or luminosity, as would have been determined by the astronomer, had the remote astronomical object been 'teleported' to earth (or the astronomer to the object) from its point in spacetime (τ_S, x_S) ('S' for source/star/supernova...), with the retarded conformal time

τ_S = τ_A − |x_S − x_A| .  (24)

Defining the proper distance, d_P(x, x′, τ), between two points at a given conformal time as the minimal number of local length gauges exactly fitting between them, the homogeneity of space implies

d_P(x, x′, τ) = α(τ) |x − x′| ,  (25)

with α depending on the choice of standard length gauge and on time. Since null geodesics satisfy dτ = dr, a wave of sufficiently high frequency, k⁰, viz., k⁰ ≫ ȧ/a, retains its wavelength in the x coordinates. Given the definition of the redshift, z,

(z + 1) = (wavelength measured by astronomer at x_A) / (wavelength measured near source at x_S) ,  (26)

Equation (25) then implies that a standard length gauge, when teleported to an earlier conformal time, measures a larger coordinate interval by a factor (z + 1), i.e., α⁻¹(τ_S) = (z + 1) α⁻¹(τ_A) (27), or, equivalently, α(τ_A) = (z + 1) α(τ_S) (28). The proper flux of a distant object can be computed either classically, via Poynting's theorem, or by resorting to the language of 'photons', with identical results (that the concept of a photon is compatible with a CE ontology is shown in [16] and in greater detail in [9]). The former, somewhat longer derivation appears in Appendix A. In the case of photons, (26) leads to

F_P = L / [ 4π d_P(x_A, x_S, τ_A)² (z + 1)² ] .  (29)

(This standard expression can be found in virtually any textbook on GR, but there, the proper distance derived from the metric is being used.)
The first term in (29) is just the luminosity, L (as determined, e.g., by an astronomer near x S ) divided by the surface area of a sphere with proper radius d P (x A , x S , τ A ), over which the emitted photons are distributed.
The second term involves our slim equivalence principle, namely, the assumption that the proportionality constant relating the measured energy and frequency of a photon, both having scaling dimension −1, is the same at x_A and x_S, hence one power of (z + 1)⁻¹, and that the rate at which photons penetrate the sphere of radius r on which earth resides is diminished by another such factor⁸. Plugging (29) into (23) gives

d_L = α_A (z + 1) r .  (30)

Expression (30), coinciding with that of standard cosmology, implies other commonalities: using (27), the observed angular diameter of a sphere with proper diameter D is the standard δθ = D/d_A, with

d_A := d_L / (z + 1)²  (31)

the so-called angular diameter distance. Using (30), the sphere's surface brightness, F/(δθ)², has the usual ∝ (z + 1)⁻⁴ dependence. Both relations apply also in the case of curved space, treated below. To test our cosmological model against observations, we still need to express r in (30) as a function of z or, equivalently, z as a function of τ. To this end, we change coordinates to the local Minkowskian frame at both x_S and x_A, implying g̃_µν ≡ a²η_µν → η_µν there. In these new coordinates, the previously identical coordinate extent of a single wave cycle transforms as χ → aχ, and the value of the ZPF density as ρ_ZPF → a²ρ_ZPF. We thus arrive at two (locally) Minkowskian frames, with length gauges whose representations are related by a scale transformation (8) for some λ. Our postulate of scale equilibrium, combined with the scaling dimension, −4, of energy density, readily gives the ratio between the local coordinate extents, ℓ, of the length gauges. Combined with the transformation of χ, the definition of the redshift, (26), gives

(z + 1) = (a_A/a_S)^3/2 [ ρ_ZPF(τ_A) / ρ_ZPF(τ_S) ]^1/4 .  (32)

To make contact with standard cosmological terminology, we take the derivative of (27) with respect to τ_S at τ_S = τ_A. The derivative of z (at z = 0) is computed using (32), yielding

H* := −dz/dτ_S |_(τ_S = τ_A) = (3/2)(ȧ/a) + (1/4) d ln ρ_ZPF/dτ |_(τ_A) .  (33)

The (locally) exponential expansion implied by (33) (even for a constant H*) is misleading due to the non-physical nature of the conformal time τ.
To see its physical content, we proceed as follows. Denoting by ℓ(τ) ≡ α⁻¹(τ) the coordinate interval spanned by a comoving standard length gauge, d_P(x, x′, τ) ℓ(τ) = const and (33) imply the shrinkage

ℓ̇ = −H* ℓ .  (34)

By our slim equivalence principle, this must also be true for the conformal-time interval between two consecutive ticks of a comoving physical clock and, in particular, of a 'light clock': two parallel mirrors, separated by a single standard length gauge, with a light ray bouncing in between (this choice respects our implicit c = 1 choice). It can then easily be shown that the growth rate of d_P with respect to the time, t*, shown by a comoving light clock (our counterpart of the cosmological time in standard cosmology), satisfies (35). Choosing τ_i ≡ τ(t*_0) = τ_A, (25) mandates ω = α_A⁻¹. Setting τ(t*) = τ_S in (35) and dividing by d_P(x, x′, τ_S), (27) then implies that, in our model, the quantity corresponding to the usual z-dependent Hubble parameter, viz., ȧ/a² (in conformal time), is

H(z) = (z + 1) H*(τ_S) .  (37)

The extra (z + 1) above, reflecting the different time standards of A and S, could also have been deduced from (25) and (28). Combining with (32) and (35) then gives an important relation, dz = H dr (38), which, with (30), gives (39), with H_0 the locally inferred, dimensionful Hubble constant. Equation (35) can be naively interpreted as either an expansion of the universe, a collective shrinkage of matter, or both; neither will be truly adequate.
To fully define a cosmological model, we still need to specify a third equation, complementing (21) and (22) in determining the three unknown functions of τ: a, ρ, and ϵ. It should be clear that the ratio ϵ is a quantity which, in principle, can be derived from the underlying physics. Specifically, insofar as our postulate of scale equilibrium holds true, each particle can be seen as a standard length gauge undergoing the shrinkage (34). From our postulate of scale equilibrium, the effect this has on ϵ is deduced by taking a large coordinate volume and calculating the resulting differential change in the part of that volume taken by matter, with the obvious result

ϵ̇/ϵ = 3 ℓ̇/ℓ = −3 H* .  (40)

Cosmic-scale phase transitions of matter would have only a marginal influence on the validity of (40), but in the early universe, (40) is completely invalid; see Section 4.
Using (21) for the mixture {ρ_mix, p_mix} (20), H* (33) can also be written in the form (41). Eliminating ϵ̇ between (40) and (41) gives (42). Next, we extend our model to curved spaces, using the metric

ds² = a²(τ) [ dτ² − dr² − S_k²(r) (dθ² + sin²θ dφ²) ] ,  S_k(r) := sin(√k r)/√k ,  (43)

where k is any non-vanishing real (for k < 0, S_k(r) = sinh(√−k r)/√−k). By virtue of the spatial part also describing a homogeneous space, the previous flat-space results can essentially be copy-pasted, with only three differences. First, |x − x′| in (25) applies only in the local Cartesian coordinates, where S_k(r) effectively equals r; the proper distance then becomes the number of standard length gauges exactly fitting along a geodesic curve connecting x and x′ at τ, this number being an extremum with respect to small deformations of the curve. Second, the r.h.s. of (22) receives a term −ka². Third, in the expression (30) for the luminosity distance, r → S_k(r).
To arrive at these results, one only needs to decompose the previous teleportation operation into two commuting stages: a spatial one, using the three-metric, and the previous, temporal stage, implementing a local scale transformation by a factor (z + 1).
As a concrete example of the above procedure, consider the observational signature of a standard ruler. A transverse comoving ruler of length ℓ is defined as two mutually close comoving points of equal r, measured by a local observer to have a proper separation equal to (z + 1)⁻¹ℓ. To compute its observed angular size, δθ, we decompose its teleportation to its location in spacetime, (−r_rlr, r_rlr, θ, φ), into two commuting stages. In the first, the three-metric in the observer's τ = 0 hyper-surface is used to move the ruler from the origin to (0, r_rlr, θ, φ). Making standard use of the spatial part of the metric (43), the angular separation of the ruler's end-points is calculated to be δθ = ℓ/S_k(r_rlr). Then, temporally teleporting the ruler to the retarded conformal time increases δθ by a factor (z + 1), which is dropped by the definition of a comoving standard ruler. As null geodesics in our coordinates x satisfy dτ = dr, δθ is clearly the measured angular size of the ruler. With (31), this is the expression for the angular size also in standard cosmology. Standard rulers 'pointing at us' have their observational signature in the redshift difference between their end-points. Repeating the above procedure, the spatial teleportation stage gives a coordinate separation δr = ℓ. After temporally teleporting to the retarded time, (38) readily gives a redshift difference δz = −Hδτ = Hδr = Hℓ, again formally coinciding with standard cosmology.
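The two ruler signatures can be illustrated with a small numerical sketch (toy numbers; the closed form S_k(r) = sin(√k r)/√k, reducing to sinh for k < 0 and to r for k = 0, is the standard-cosmology one, assumed here):

```python
import numpy as np

def S_k(r, k):
    # Curvature function of the metric (43): S_k(r) = sin(sqrt(k) r)/sqrt(k);
    # for k < 0 this becomes sinh(sqrt(-k) r)/sqrt(-k), and S_0(r) = r.
    if k > 0:
        return np.sin(np.sqrt(k) * r) / np.sqrt(k)
    if k < 0:
        return np.sinh(np.sqrt(-k) * r) / np.sqrt(-k)
    return r

# Signatures of a comoving standard ruler of length l at coordinate
# distance r_rlr (all quantities in conformal-coordinate units; toy values):
l, r_rlr, k = 0.01, 0.8, 0.3
delta_theta = l / S_k(r_rlr, k)   # transverse ruler: observed angular size
H = 0.7                           # Hubble parameter at the ruler (toy value)
delta_z = H * l                   # line-of-sight ruler: redshift difference
print(delta_theta, delta_z)
```

For k > 0, S_k(r) < r, so the same ruler subtends a larger angle than in flat space, as expected for a positively curved spatial section.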
Finally, we compare the differential proper-time progression, viz., that of t*, as a function of the redshift z, with the corresponding progression of the cosmological time, t, in standard cosmology. Using (38) and (28) (equivalently, (27)) gives (44), as in standard cosmology.

Comparison with Observations
The slimness of our equivalence principle necessitates prudence in associating the dynamical variables of the model with observations. We have already seen that to convert a coordinate difference to a proper distance, one must divide the former by the coordinate extent, α⁻¹, of a standard length gauge. Similarly, to convert ρ_matter to the corresponding measurement, one must divide it by the numerical value of the energy density associated with a standard energy-density gauge, expressed in local x-coordinates; call it β⁻¹. Now, the postulate of scale equilibrium tells us that β⁻¹ ∝ ρ_ZPF. It then follows from (40) that the proper (viz., measured) density behaves as ρ^P_matter ≡ β ρ_matter ∝ (z + 1)³, which is the standard, purely geometrical expression for the (non-relativistic) matter density. Similarly, (32) gives the standard ρ^P_rad ∝ (z + 1)⁴. The fluctuations, p^µν := P^µν − P̃^µν, representing objects in the universe, must also be divided by β⁻¹ before being compared with the fluctuations at a different τ = const surface. Using the transformation rule of standard gauges (as defined at the end of Section 2), it can then easily be shown that the measured ('Proper') values of G_P and Λ_P (and of the speed of light, c_P), as deduced by a local observer from linearized/Newtonian gravity, are teleportation-invariant, viz., just some numbers which depend on the choice of standard gauges being teleported.
The above scheme for coping with scale covariance is sufficient if one is interested in the reports of future and past generations of astronomers relative to ours, but it cannot provide the proportionality constants, α and β, without explicitly representing the gauges as CE solutions. This fact restricts the reports of present-day astronomers using our model to dimensionless quantities only, which, by their nature, do not depend on such proportionality constants. Whether knowingly or not, this is a restriction cosmologists tend to take upon themselves anyhow.
As the ultimate result must be dimensionless, rather than expressing it as a combination of independent dimensionful quantities, one may as well do the same with a maximal set of independent dimensionless quantities. Besides experiment-specific dimensionless quantities, such as angles, redshifts, or luminosity ratios, there are only five such independent quantities: ϵ_A (where ϵ_A := ϵ(τ_A), etc.), and the baryonic and radiation 'density parameters', ω_bA and ω_radA, respectively, where ρ_bar ≡ ρ_matter in our dark-matter-free model. Note indeed the absence of α_A and β_A. The remaining two dimensionless parameters could be k and Λ, but a more conventional choice would be κ_A and Λ_A, defined via the current Hubble distance, r_H := H_0⁻¹ c_P. The baryonic density parameter, ω_bA, is directly determined from the 'count' of baryonic matter. The figure published by the Planck collaboration, ω_bA = 0.0486 ± 10⁻³, is consistent with the direct count. Nonetheless, given that just a couple of years ago the direct count gave only half this value, the Planck figure had better be viewed as a lower bound, with a much larger uncertainty. The radiation parameter is currently negligible. Our related dark-matter model ([9] and Section 4) must be further advanced in order to provide independent stringent bounds on ϵ_A, but those will most likely be on the order of 1. As for κ_A, all model-independent bounds on its value are consistent with κ_A = 0. Finally, as the Hubble distance is so much larger than the size of galaxies, any value of Λ_A which is not absurdly large is compatible with observations involving local gravitation.
Moving to actual observations, there are two primary ways of determining the relative values of the luminosity distance, d_L, at different redshifts (or, equivalently by (31), of d_A or S_k): using either standard candles, primarily type Ia supernovae, or the so-called transverse BAO standard ruler [17]. Comparing the relative values of the Hubble distance, d_H := H^{-1}, makes use of the line-of-sight BAO ruler. Note that the 'ruler's length' and the 'candle's luminosity', L, do not appear in such relative distances.
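To make concrete why the candle's luminosity L drops out of relative distances, here is a minimal numerical illustration (not the paper's formalism; it only uses the standard definition of d_L through the inverse-square flux law, F = L/(4π d_L²)):

```python
import math

# For two standard candles with the same (unknown) intrinsic luminosity L,
# F = L / (4 pi d_L^2) implies  d_L2 / d_L1 = sqrt(F1 / F2):
# the distance RATIO is fixed by the measured fluxes alone, and L cancels.
def distance_ratio(flux1, flux2):
    """Ratio d_L2/d_L1 of luminosity distances of two equal-L candles."""
    return math.sqrt(flux1 / flux2)

# Illustration with made-up fluxes: candle 2 appears 25x fainter than
# candle 1, so it is 5x more (luminosity-)distant, whatever L may be.
print(distance_ratio(25.0, 1.0))  # -> 5.0
```

The same cancellation occurs for the BAO ruler: only the ratio of angular sizes at two redshifts enters, never the ruler's absolute length.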
To compute our model's d_H(z) ≡ H^{-1}(z) and d_L(z) ≡ α_A (z + 1) S_k(r), one propagates backwards in τ (from τ_A = 0) the three dynamical variables, a, , and ρ, via the system (21), (22) and (40), with a_A, ρ_A, and A as initial conditions, arriving via (32) and (42) at z(τ) and H(τ), respectively, which are then used to express r ≡ −τ in terms of z. For astronomers to currently see a redshift (rather than a blueshift), a(τ) must be a monotonically increasing function in the current epoch of the universe, and is further taken to be positive. As in standard cosmology, the map from our model's seven 9 variables and constants to the five (current, dimensionless) observable parameters implies a two-dimensional parameter degeneracy.
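Since the system (21), (22) and (40) is not reproduced in this excerpt, the following sketch illustrates the analogous backward-propagation procedure for the Einstein–de-Sitter model mentioned in the abstract (flat, matter-only, H(z) = H_0 (1+z)^{3/2} — an assumed stand-in for the model's actual equations), recovering d_L(z) numerically and checking it against the known closed form:

```python
import math

C = 299792.458   # speed of light, km/s
H0 = 70.0        # illustrative Hubble constant, km/s/Mpc

def comoving_distance(z_max, n=20000):
    """Trapezoid-rule integration of dr/dz = c / H(z), using the
    Einstein-de-Sitter stand-in H(z) = H0 (1+z)^{3/2}."""
    H = lambda z: H0 * (1.0 + z) ** 1.5
    dz = z_max / n
    return sum(0.5 * dz * (C / H(i * dz) + C / H((i + 1) * dz))
               for i in range(n))

def d_L(z):
    """Luminosity distance in Mpc: d_L = (1+z) x comoving distance (flat case)."""
    return (1.0 + z) * comoving_distance(z)

def d_L_closed(z):
    """EdS closed form: d_L = (2c/H0)(1+z)(1 - 1/sqrt(1+z))."""
    return 2.0 * C / H0 * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))
```

The full procedure described above would simply replace the stand-in H(z) by the one obtained from propagating (21), (22) and (40) backwards in τ.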
The above coincidence certainly does not imply that the principle of scale covariance is invalid in a negligible-ZPF universe. It does explain, nonetheless, why complete disregard for such a fundamental principle (assumed, of course, to be respected by nature) could have resulted in a cosmological model which is not in severe conflict with observations. It appears, then, that consistency with observations requires A 1 but, remarkably, our model comes close to reproducing the ΛCDM curves already at A ∼ 2 (Figure 1). This relatively small value of A is further mandated by our associated dark-matter model, briefly discussed in Section 4 below. In the 1 regime, spacetime becomes effectively Minkowskian on large enough scales. To wit, since in this regime ρ_ZPF ∼ ρ_mix ∝ a^{-6}, we get (z + 1) = 1 in (32) and a vanishing H* (42), implying the vanishing of H_0. This regime is relevant to the far future, where indeed vanishes asymptotically, but the de-Sitter (future conformal-) singularity implies the divergence of ȧ/a in (42), and the vanishing of necessarily comes with the divergence of α. A nearly balanced battle of infinities ends with an eternally increasing (in terms of proper-time (36)), unbounded H_0, viz., a 'big-rip' scenario. Note, nonetheless, that once all standard gauges are ripped apart, teleportation becomes ill-defined; hence, the predictions of our model, as is, are meaningless (as in the case of the very early universe, discussed below).

Discussion: Towards a Full-Fledged Scale-Covariant Cosmology
Why challenge the concordance model? Despite being experimentally rather sound, ΛCDM rests on shaky conceptual foundations. Moreover, recent precise measurements have begun revealing 'cracks' in its alleged experimental integrity, with only epicyclic repairs in sight. In this section, we elaborate on its shortcomings, showing how a full-fledged, scale-covariant cosmological model would be conceptually much sounder. In future publications, we intend to propose natural-to-the-model fixes to the said experimental cracks. As scale covariance is a symmetry of (the representation of) matter, the discussion below necessarily spills into particle physics and its own shaky conceptual foundations which, in turn, must involve those of QM. This brings us back to the point made in the introduction regarding the anthropocentric scale bias of current physics as a whole.
• The Standard Model of particle physics on which ΛCDM is based is not scale-covariant.
One can either reject the principle of scale covariance (few would) or else declare SM, and hence also ΛCDM, fundamentally wrong. There is no middle way. On the other hand, making use of scale-covariant CE to represent matter might (rightfully) appear far-fetched. Nevertheless, there are good reasons to believe otherwise. First, as explained in great detail in [2], CE means different things to different people. If point-charges CE is meant, then CE is simply ill-defined due to the classical self-force problem. If it is some concrete, well-defined realization of the basic tenets, then there is currently only one such proposal (to the author's best knowledge), dubbed extended charge dynamics (ECD) (first appearing in [3] and then fine-tuned and related to the classical self-force problem in [2]), and that realization is indeed scale-covariant in the sense of symmetry (8). Second, as can be expected, a formally conserved Noether current associated with scale covariance exists for ECD, implying, among other things, that the scale/mass of a particle is conserved, which conflicts with our assumption that particles can 'move in scale'. This assumption is not some model-specific curiosity but rather a generic feature of any scale-covariant theory in which particles acquire their observed mass dynamically. Remarkably, however, the associated ECD charge can 'leak' into the singularities around which ECD charges are centered (appendix of [2]). This is a peculiarity not shared by the (integrable and conserved) ECD e-m tensor or electric current, both similarly diverging.
There are further reasons not to dismiss outright CE in its ECD realization on the grounds of it being an unrealistic representation of matter. In the cases of a single relativistic charge and multiple interacting non-relativistic charges, ECD has been shown [16] to be a plausible ontology underlying the statistical description of Dirac's and Schrödinger's equations, respectively. There are also positive indications [9] that ECD may be the ontology in the subatomic domain, and that 'scale transitions' are routinely observed in the associated experiments (but evidently not interpreted as such). At the very least, ECD is a proof-of-concept for the embedding of (scale non-covariant) particle physics in a scale-covariant framework.
• The representation of matter in ΛCDM mixes incompatible classical and quantum notions.
Our coarse-grained 'tilde tensors', P̃_µν and g̃_µν, appearing in (13), are both finite classical quantities. In contrast, the fundamental representation of matter in ΛCDM uses QM (QFT). To get a similar categorical equality between the right and left sides of EFE, one takes the expectation value of the quantum field associated with the e-m tensor with respect to some 'universal wave-function', |ψ⟩, allegedly encoding the matter content of the universe. Now, in standard QM, the expectation value of an operator represents certain repeated measurements associated with it, averaged over an ensemble represented by the wave-function. As there is only a single universe, it is unclear what that expectation value stands for and how to treat the fluctuations around it, let alone who is doing the measuring. What is worse is that even the 'shut-up-and-calculate' approach cannot be implemented in this case, for two reasons. First, the usual infinity-subtraction schemes do not work even in an empty universe, viz., |ψ⟩ = |0⟩; the VEV of the energy density is both highly ambiguous and manifestly incompatible with cosmological observations [18]. (The situation in SED is no better: the SED ZPF is a meaningless infinity. Indeed, the vacuum e-m tensor must be proportional to η_µν, the only Lorentz-invariant tensor, and yet have a vanishing trace, a contradiction.) Second, SM is the result of fitting twenty or so parameters 10 against data streaming from particle accelerators, such as scattering probabilities, life-times, etc., and has not proven itself yet outside this narrow context. The first time it was confronted with a phenomenon which could not have been tested in accelerators, the so-called neutrino oscillation phenomenon, it completely collapsed. Moreover, even in that narrow context, the very notion of a quantum field is not even necessary (as it is in classical field theories).
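The parenthetical contradiction regarding the vacuum e-m tensor can be spelled out in two lines (a standard argument, not specific to SED):

```latex
% Lorentz invariance of the vacuum: \eta_{\mu\nu} is the only Lorentz-invariant
% tensor, so the vacuum expectation value must take the form
\langle T_{\mu\nu} \rangle_{\mathrm{vac}} = \rho\, \eta_{\mu\nu} .
% The electromagnetic e-m tensor is traceless, T^{\mu}{}_{\mu} = 0, hence
0 = \eta^{\mu\nu} \langle T_{\mu\nu} \rangle_{\mathrm{vac}}
  = \rho\, \eta^{\mu\nu} \eta_{\mu\nu} = 4\rho
\quad \Longrightarrow \quad \rho = 0 ,
% contradicting a non-vanishing ZPF energy density.
```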
SM can be formulated entirely as an algebraic structure involving Feynman diagrams or path-integrals 11 .
• ΛCDM relies on the existence of dark-matter, which has hitherto evaded direct detection.
Of the two leading contenders in the missing matter challenge-dark-matter and modified gravity-the former is a clear favorite. The l.h.s. of Einstein's equations (the "marble wing") is the simplest nontrivial equation for g µν , fully consistent and experimentally well-tested (albeit in a limited domain), whereas the r.h.s. (the "cheap wood wing") is a mess-see the previous bullet. However, dark-matter theorists make no attempt to rebuild the r.h.s. from marble, sticking to the familiar, phenomenological role of the e-m tensor, endlessly adjusting its properties to conform with new observations. In contrast, ECD's 'marblization' of the r.h.s. results in the e-m tensor of 'visible matter' containing a 'transparent part', allegedly mistaken for dark-matter. This further explains why ordinary and dark contributions to the e-m tensor are on the same order.
The strongest, most direct support for dark-matter comes from the dynamics of galaxies and clusters. The explanation of those within our model is that, as in action-at-a-distance electrodynamics [14,15], the ZPF at any point in empty space is due to the sum of individual fields, each "adjunct" to specific particles, implying that the ZPF in the neighborhood of galaxies and clusters should be stronger than the coarse-grained average ρ_ZPF. This ultra-low-density, transparent component of the e-m tensor, when integrated over galactic scales, creates an illusion of 'dark-matter' whose distribution is determined by that of baryonic matter (see [9] for details). In [9,16], we argue that, also when moving in the opposite direction in scale, towards the microscopic, the effects of such a feeble ZPF become consequential and are allegedly at the heart of virtually any quantum phenomenon.
Cold dark-matter further plays a critical role in large-scale structure formation, according to ΛCDM. Our dark-matter model possibly provides a substitute in the form of a stronger feed-forward amplification of matter clustering: as matter falls into a relatively matter-dense region, it also increases the surrounding 'ZPF halo'. This energy halo warps space, inducing a MOND-like, viz., longer-than-r^{-1}-range potential which, in turn, increases the attraction basin of the over-dense region.
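As a generic, model-agnostic illustration of why a longer-than-r^{-1}-range potential enlarges a region's attraction basin, one can compare circular speeds, v²(r) = r dΦ/dr, for a Newtonian point-mass potential against a logarithmic one (the logarithmic form and the numbers below are illustrative assumptions, not taken from [9]):

```python
import math

# Circular speed from a spherically symmetric potential: v^2(r) = r dPhi/dr,
# with the derivative taken by central finite difference.
def v_circ(phi, r, dr=1e-6):
    return math.sqrt(r * (phi(r + dr) - phi(r - dr)) / (2.0 * dr))

# Newtonian point mass: Phi = -GM/r  ->  v ~ r^{-1/2} (Keplerian decline).
GM = 1.0
newton = lambda r: -GM / r

# Generic longer-range (logarithmic) potential: Phi = v0^2 ln r  ->  v = v0,
# i.e., a flat 'rotation curve' with no decline at large r.
v0 = 1.0
logpot = lambda r: v0 ** 2 * math.log(r)

for r in (1.0, 4.0, 16.0):
    print(r, v_circ(newton, r), v_circ(logpot, r))
```

The Keplerian speed falls off with r while the logarithmic potential keeps pulling at full strength, which is the sense in which the over-dense region's basin of attraction is enlarged.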
Finally, cold dark-matter is absolutely essential for ΛCDM to be compatible with the observed heights of the acoustic peaks.
Although not yet fully analyzed, the associated 'dark-matter' model [9] qualitatively agrees with these observations: The dominance of the first peak is due to the same feed-forward mechanism mentioned above in the context of large-scale structure formation. The second peak, in comparison, is initially strongly suppressed by the deep potential well created by 'dark-matter'. The third peak therefore sees no such deep well, and so on (note that in the early universe, 1; hence, the expansion is effectively as in ΛCDM). However, as advocated in the introduction, the assumption that beyond the surface-of-last-scattering (s.o.l.s.) terrestrial baryonic physics still applies, even if true, is highly speculative (in fact, the earliest direct evidence for baryonic matter is from cosmic dawn, at z ∼ 20). Nevertheless, as the electric charge of a particle is identically conserved, the idea that the origin of the CMB's anisotropy is plasma oscillation is both plausible and compatible with an epoch in which matter was not yet (fully) baryonic, but more 'jet-like'. The 'jets' referred to are those charged particles seen in particle detectors just before they 'morph' into metastable particles. Those, we hypothesize, are just ECD solutions out of scale-equilibrium (with the ZPF), resulting from the trauma of the collision; solutions which do not represent known particles, only ones with a time-varying scale. Likewise, the 'hyper-trauma' of the big-bang (see below) has left matter out of equilibrium with the ZPF, but in a much deeper sense: The ZPF itself is out of equilibrium, hence the much longer time it took matter to attain a meta-stable equilibrium with it, compared with that measured in particle detectors. Recalling from Section 2.1 the role of the ZPF as a middleman for e-m exchange between particles, matter and the ZPF are just different facets of a single entity; if one is unstable, then so must be the other.
As the matter-ZPF entity gradually attained its present-day equilibrium, scaling symmetry (8), its discrete CPT subgroup included, became spontaneously broken. Baryonic matter dominating the universe then introduced a privileged local scale and an imbalance between matter and antimatter: two concepts which only apply to matter in scale equilibrium.
Preceding the primordial plasma oscillation, according to ΛCDM's popular extension, is cosmic inflation. That hypothetical epoch is incompatible with scale covariance, ill-defined by its quantum nature (as discussed above), and invokes yet another form of energy, the inflaton.
An alleged triumph of inflation theory is its prediction of scale-invariant and Gaussian density perturbations of matter, seeding the plasma oscillation. Now, Gaussian noise is perhaps the most ubiquitous form of noise, so a greater challenge would be to derive a cosmological model predicting a non-Gaussian spectrum (roughly quoting Neil Turok). As for scale invariance, this is just what the scale-free epoch preceding τ_s.o.l.s. entails; the only spectrum retaining its form under a scale transformation, k → λk, is a (non-integrable) power law, P(k) ∝ k^ν, for some ν.
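The closing claim, that a power law is the only form-retaining spectrum, follows from the functional equation such a spectrum must satisfy (a standard argument, sketched here for completeness):

```latex
% Form-invariance under k \to \lambda k means the rescaled spectrum differs
% from the original by at most a k-independent factor:
P(\lambda k) = f(\lambda)\, P(k) \qquad \forall\, \lambda, k > 0 .
% Differentiating with respect to \lambda and setting \lambda = 1 gives
k\, P'(k) = f'(1)\, P(k) \equiv \nu\, P(k) ,
% whose general solution is the power law
P(k) \propto k^{\nu} ,
% which is indeed non-integrable over (0,\infty) for every \nu:
% the integral diverges at k \to \infty for \nu \ge -1 and at k \to 0 otherwise.
```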
Inflation theory is appended to ΛCDM in order to solve the latter's flatness and horizon problems. As is, the horizon problem would also plague our model; but, also as is, our model is inapplicable to the early universe. As discussed following (18), in the conjectured scale-free epoch of matter, there are many possible alternatives to (18) (or its relativistic counterpart) for which there is no horizon problem.
The flatness problem (formulated in terms of κ (47), as the notion of critical density is inapplicable in our case) is that, for κ_A to nearly vanish, it must be much more so, unnaturally more so, in the early universe. One can argue that this is not really a problem as, both in our model and in ΛCDM, κ ultimately becomes much smaller than it has ever been, including today, so large relative variations in κ could be just nature's preference. For those who find this argument unconvincing, however, our model can alleviate their unease, since comparing the values of κ at two distinct times is only meaningful so long as teleportation is well-defined. When the universe was in its scale-free epoch, possibly not much earlier than τ_s.o.l.s., standard length gauges took a completely different form, if they existed at all. It is therefore possible that the largest well-defined relative variation was still big, but not fantastically so.
Finally, lest the reader protest that the previous discussion is meaningless without entering the uncharted realm of quantum gravity, we mention that by 'relieving g_µν of its metrical duty' via the slimness of our equivalence principle, the conceptual difficulties with quantum gravity disappear; space and time no longer have any meaning other than that which is related to the readings of clocks and other gauges. Quantum gravity then becomes just the statistical description of a classical, generally covariant e-m tensor (see [16] for the flat spacetime case). Now, it is true that the coarse-graining operation, P_µν → P̃_µν, in our Friedmann model, is closely related to the ensemble-average operation which is used [16] to establish CE as a plausible ontology underlying QM's statistical content, and that therefore quantum gravity could be helpful, if not essential, in determining P̃_µν. This extra knowledge, however, would boil down to a modified equation-of-state, with regard to which we have pleaded ignorance.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A. Classical Derivation of the Luminosity Distance
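As a point of reference for the derivation below: in a standard FRW treatment, the expression d_L = α_A (z + 1) S_k(r) used in Section 3 follows from flux dilution alone, with α_A playing the role usually played by the current scale factor a_0 (this is the textbook argument, not the e-m-tensor route taken in this appendix):

```latex
% Photons from a source at comoving radius r are spread over a sphere of
% proper area 4\pi \alpha_A^2 S_k^2(r); their energy and their arrival rate
% are each diluted by a factor (z+1), so the measured flux is
F = \frac{L}{4\pi\, \alpha_A^2\, S_k^2(r)\, (z+1)^2} ,
% and the luminosity distance, defined through F =: L/(4\pi d_L^2), is
d_L = \sqrt{\frac{L}{4\pi F}} = \alpha_A\, (z+1)\, S_k(r) .
```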
By direct substitution, one can show that for an arbitrary function f, the following ansatz satisfies covariant e-m conservation (11) for the metric (14): Being traceless and locally propagating at light-speed, it is an immediate suspect for the e-m tensor associated with a spherical electromagnetic pulse, readily generalizing to k ≠ 0 via r → S_k(r) in the denominator. However, a perfectly spherically symmetric electromagnetic wave cannot exist; the closest there is to it is an incoherent superposition of plane-waves with random polarizations, creating a statistically spherical symmetry. Plugging, therefore, into Maxwell's equation (10) a single such linearly polarized plane-wave ansatz, A_µ = χ_µ(τ) exp(i k_ν x^ν), with χ_ν k^ν = k_ν k^ν = 0, results in χ̈_µ − 2i k_0 χ̇_µ = 0. Consistency with (A1) mandates χ_µ ≡ const which, indeed, solves the equation. Up to some constant which depends on the choice of standard gauges, converting (A1) to the proper-canonical tensor amounts to dividing it by the local ρ_ZPF (note that, being a local ratio between two quantities of the same type, the ratio can be taken in any coordinate system). Taking some point, x_B, along the null geodesic connecting x_S and x_A, the following is then evidently true: where r_B is r (24) with x_A → x_B there. Letting x_B approach x_S, and using (28), we have, independently of x_B (equivalently, r_B). Implicit in (A4) is our assumption that the astronomer's luminosity measurement, L, of a teleported copy of S equals the luminosity reported by that astronomer had he been teleported to x_S. Plugging (A4) into (A3), we obtain an expression which, combined with (25) and (32), finally gives (29):
Notes
1 The antisymmetry of F implies that solutions of Maxwell's equations exist for a conserved source only.
2 Transformation (8) is a formal symmetry of point-charges CE, written in a manifestly scale-covariant form [2,3].
3 The most general (sensible) radiation field generated by a particle's electric current, j, is given by a weighted sum of retarded and advanced solutions, with weights constrained by α_ret + α_adv ≡ 1, where K^{ret,adv} are the retarded and advanced Green's functions of (1), defined by (g^{µν} ∂² − ∂^µ ∂^ν) K^{ret,adv}_{νλ}(x) = g^µ_λ δ^(4)(x), with K^{ret,adv}(x) = 0 for x⁰ ≶ 0.
More accurately, the above two equations do not uniquely define K, but the remaining freedom can be shown to translate to a gauge transformation A → A + ∂Λ.
4 Much later work incorporated this addition of Einstein's into the formalism of GR by 'dotting the commas' in the flat, scale non-covariant matter Lagrangian, which introduces into it a dependence on g_µν and its first derivatives. Insofar as that representation is realistic, the slim equivalence principle then implies that coordinates do have a metrical meaning, consistent with the privileged length scale introduced by matter. Note that, in the early days of GR, in the absence of even a candidate for such a representation, Einstein's addition had to appear as a postulate.
5 Our definition (16) of ρ contains an extra a^{-2} relative to the usual one. In Section 3.2, we recover the standard observable.
6 The second(-order) Friedmann equation, ä/a = (4/3)πG a⁴(ρ − 3p) − (2/3)Λa² − k, together with the first, gives (21), which is therefore exact, despite being only the leading-order term in the fluctuations of (the exactly covariantly conserved) P. The reason for this is that P̃ is formally equivalent to the e-m tensor of a perfect fluid in a symmetric space.
7 This is implicit in standard cosmology when the cosmological constant is seen as the fixed vacuum energy density. See, however, Section 4.
8 This follows from the 'conservation of photons': In the x coordinates, as in flat spacetime, the number of photons penetrating a sphere of radius R per oscillation of the pulse is independent of R. A non-vanishing z only means that the astronomer considers a unit oscillation as a longer period, by a factor (z + 1), compared with an observer at S.
9 This includes G, as it appears in tandem with ρ in (22). In standard cosmology, the corresponding map is 6-to-4, and the degeneracy is removed by the choice of units and by setting either a_0 or |k| to 1.
10 Twenty parameters is a generous figure.
Combining all the published and unpublished attempts to parameterize accelerator data, the community of high-energy physicists acted as a giant neural network trained against the data. That network had thousands of parameters, of which only twenty turned out to be significant.
11 The one place where a quantum field appears necessary and gives correct predictions (modulo some dubious infinity subtractions which, at any rate, are inapplicable in the case of the universe as a whole) is the Casimir force, which can be explained and derived much more directly as a van der Waals force.