Varying Physical Constants, Astrometric Anomalies, Redshift and Hubble Units

We have developed a cosmological model by allowing the speed of light c, the gravitational constant G and the cosmological constant Λ in the Einstein field equations to vary in time, and solved them for the Robertson-Walker metric. Assuming the universe is flat and matter dominated at present, we obtain a simple model that fits the supernovae Ia data with a single parameter almost as well as the standard ΛCDM model does with two parameters, and has predictive capability superior to the latter. The model, together with the null results for the variation of G from the analysis of lunar laser ranging data, determines that at the current time G and c both increase as dG/dt = 5.4GH0 and dc/dt = 1.8cH0, with H0 as the Hubble constant, while Λ decreases as dΛ/dt = −1.2ΛH0. This variation of G and c is all that is needed to account for the Pioneer anomaly, the anomalous secular increase of the Moon's orbital eccentricity, and the anomalous secular increase of the astronomical unit. We also show that the Planck constant ħ increases as dħ/dt = 1.8ħH0 and that the ratio D of any Hubble unit to the corresponding Planck unit increases as dD/dt = 1.5DH0. We have shown that it is essential to consider the variation of all the physical constants that may be involved directly or indirectly in a measurement, rather than only the one whose variation is being considered.


Introduction
Variation of physical constants is a subject that is marred with semantics: what exactly is varying and how is it being measured? There is an ongoing debate about dimensionful and dimensionless constants (e.g. Uzan [1,2], Duff [3], Chiba [4]). Our approach therefore is to work mostly with easily comprehensible dimensionful constants, and later to see whether a meaningful relationship can be established for a common dimensionless parameter and how it evolves with time. The physical constants considered in this work are primarily the speed of light c, Newton's gravitational constant G, Einstein's cosmological constant Λ, the Planck constant ħ, the Hubble constant H0, and the fine structure constant α. There is a plethora of literature discussing the variation, or lack thereof, of these and other physical constants, and there are excellent reviews on the subject [1][2][3][4][5][6]. We will therefore limit ourselves to a select few works with direct relevance to ours.
Varying physical constant theories gained traction after Dirac [7,8] in 1937 suggested such variation based on his large number hypothesis, which related ratios of certain scales in the universe to those of the forces of nature. Magueijo [6] reviewed the variable speed of light (VSL) theories and their limitations in 2003, including theories based on hard breaking of Lorentz invariance, bimetric models, local Lorentz invariance, color-dependent speed of light, extra-dimension (e.g. brane-world) induced variation, and field functions. Farrell and Dunning-Davies [9] discussed in 2004 the VSL theories that were used as alternatives to the inflationary model of the universe and reviewed evidence for the same.
Maharaj and Naidoo [10] introduced variable G and Λ in the Einstein field equations using the Robertson-Walker metric in 1993. Belinchón and Chakrabarty [11] added the variation of c to develop a perfect fluid cosmological model in 2003. Recently (2017) Franzmann [12] developed an approach that included space as well as time dependence of the constants. More recently, Barrow and Magueijo [13] proposed that the constants be considered as quantum observables in a kinematical Hilbert space. These works are mostly theoretical and do not directly quantify how much the constants vary or whether they can directly explain particular observations or measurements. Our focus will be to develop a model that can explain certain anomalies, hitherto not explained satisfactorily, as well as the redshift vs. distance modulus data on supernovae Ia, better than alternative models.
The possible variation of the fine structure constant α has been of great interest, as it is perhaps the most basic dimensionless constant in physics. Rosenband et al. [14] have put a constraint of α̇/α = (−1.6 ± 2.3) × 10⁻¹⁷ yr⁻¹, derived from the constancy of the ratio of aluminum and mercury single-ion optical clock frequencies. More recently Gohar [15], using his entropic model of the universe and data on supernovae Ia, baryon acoustic oscillations, and the cosmic microwave background, has established an even more stringent constraint on the variation of α. Additionally, he states that in his model c and G should be increasing with the evolution of the universe, which corroborates our findings in this work. Similar constraints on α were obtained by Songaila and Cowie [16] from the observation of narrow quasar absorption lines at redshift z > 1.5. There is a significant amount of work on the subject, most of which can be found referenced in the papers cited above.
If α does vary, no matter how small the variation, it is natural to ask what causes it: the electric charge e, c, or ħ? We will show that since the c and ħ variations cancel out, it is e that should be considered responsible for the variation of α, if there is any.
We will solve the Einstein field equations with varying G, c and Λ for the Robertson-Walker metric in Sec. 2, and show that Λ̇/Λ = −1.2H and Ḣ/H = −0.6H, where H is the Hubble parameter. Based on Hofmann and Müller's [17] determination of a very tight constraint on the variation of G from the analysis of more than 40 years of lunar laser ranging data, we will establish in Sec. 3 that Ġ/G = 5.4H and ċ/c = 1.8H. Sec. 4 is devoted to the derivation of the expression for the distance modulus µ of an intergalactic light-emitting source in terms of its redshift z. Sec. 5 delineates the methodology for fitting the z–µ data and applies it to the new model, the variable c, G and Λ model (the VcGΛ model), and, for comparison, also to the standard ΛCDM model. Having shown that the VcGΛ model fits the supernovae Ia z–µ data almost as well as the ΛCDM model and has better predictive capability than the latter, we proceed to demonstrate that the model can explain the three astrometric anomalies that have not yet been explained satisfactorily. All we need to explain these anomalies is Ġ/G = 5.4H0 and ċ/c = 1.8H0 at the current time, with H0 the Hubble constant.
The first anomaly we will consider here is the Pioneer anomaly, which refers to the near constant acceleration back towards the Sun observed when a spacecraft cruises on a hyperbolic path away from the solar system (Anderson et al. 1998 [18]). Many explanations have been offered for this anomaly, but none appears satisfactory, and they are difficult to incorporate into the models used for real-time spacecraft astrodynamics. Principal among these explanations are the following: a) Turyshev et al. [19] in 2012 tried to explain the anomaly as being due to the recoil force associated with an anisotropic emission of thermal radiation off the spacecraft; however, it is not clear why it should be the same for the Pioneer 10/11, Galileo and Ulysses spacecraft. b) Feldman and Anderson [20] in 2015 used "the theory of inertial centers" [21] to develop a model to compute the anomaly. c) Kopeikin [22] in 2012 used the Hubble expansion of the universe to address the anomaly and gave a reason why one should see deceleration rather than acceleration of the spacecraft due to the expansion of the universe. These approaches are rather circuitous and depend on many assumptions. Feldman and Anderson [20] have allocated 12% of the total anomalous acceleration of (8.74 ± 1.33) × 10⁻¹⁰ m s⁻² to various thermal contributions, leaving (7.69 ± 1.17) × 10⁻¹⁰ m s⁻² that requires other explanations. In Sec. 6 we will try to explain this unexplained Pioneer acceleration.
The lunar laser ranging technique has improved to the extent that it can determine the lunar orbit to an accuracy of better than a centimeter. The Moon's orbit has an eccentricity that depends on the tidal forces due to surficial and geophysical processes interior to the Earth and the Moon. After all the known sources affecting the eccentricity e were included, Williams and Dickey [23] in 2003 estimated that there remained a discrepancy of Δė = (16 ± 5) × 10⁻¹² yr⁻¹ between the observed and calculated values. This value was revised downward by Williams and Boggs [24] in 2009 to Δė = (9 ± 3) × 10⁻¹² yr⁻¹, and by Williams, Turyshev and Boggs [25] in 2014 to Δė = (5 ± 2) × 10⁻¹² yr⁻¹, with updated data and tidal models. With additional terrestrial tidal modeling, Williams and Boggs [26] in 2016 were able to reduce the number further and stated that it might even be negative. While these authors possibly felt that the unexplained secular increase of the eccentricity was due to a deficiency in their model, and that better tidal modeling should therefore eliminate it, others feel that the anomaly may be pointing to some unknown physical process. There have been attempts to resolve the anomaly using Newtonian, relativistic and modified gravity approaches [27][28][29], as well as some unfamiliar gravitational effects [27]. Reviews by Anderson and Nieto [30] in 2009 and Iorio [31] in 2015 have covered the above and additional attempts to solve the problem. It appears that none of the models secularly affects the lunar eccentricity. Attempts of cosmological origin were also not successful [22,29,32,33]. We attempt to explain this anomaly in Sec. 7 with the varying G and c approach developed here.
The anomalous secular increase of the astronomical unit AU was first reported by Krasinsky and Brumberg [34] in 2004 as dAU/dt = (15 ± 4) m cy⁻¹, from the analysis of all radiometric measurements of distances between the Earth and the major planets available to them over the period 1971-2003, including observations of martian landers and orbiters. They noted that the unexplained secular increase in AU might point to some fundamental feature of spacetime beyond the current cosmological understanding, according to which the Hubble expansion yields dAU/dt = 1 km cy⁻¹. This value is almost two orders of magnitude higher than observed. Their theoretical analysis revealed that relativistic calculations including the gravitational shift of proper time gave null results. Anderson and Nieto [30] in 2009 corroborated Krasinsky and Brumberg's [34] findings. They showed that the effect of the loss of solar mass on the AU is minuscule and would cause the AU to shrink rather than increase (dAU/dt = −0.34 cm cy⁻¹). Iorio [31] in 2015 reviewed the status of the AU anomaly in significant detail and concluded that, considering the various unsatisfactory attempts to explain the anomaly and the new IAU definition of the astronomical unit, the anomaly no longer exists (just by virtue of the new definition). We show in Sec. 8 that the AU anomaly based on the old definition can be easily explained with the new approach.

Evolutionary Constants Model
We will develop our model in the general relativistic domain starting from the Robertson-Walker metric with the usual coordinates xᵘ ≡ (ct, r, θ, φ):

ds² = −c²dt² + a(t)²[dr²/(1 − kr²) + r²(dθ² + sin²θ dφ²)]. (2.1)

Here a(t) is the scale factor and k determines the spatial geometry of the universe: k = 0 for a flat universe, k = +1 for a closed universe and k = −1 for an open universe. The Einstein field equations may be written in terms of the Einstein tensor Gµν, metric tensor gµν, energy-momentum tensor Tµν, cosmological constant Λ, gravitational constant G and speed of light c as

Gµν + (Λ/c²)gµν = (8πG/c⁴)Tµν. (2.2)

When solved for the Robertson-Walker metric, we get the following non-trivial equations for the flat universe (k = 0) of interest to us here, with p as the pressure and ε as the energy density:

ä/a = −(4πG/3c²)(ε + 3p) + Λ/3, (2.3)

(ȧ/a)² = (8πG/3c²)ε + Λ/3. (2.4)

If we do not regard G, c and Λ to be constant and define K ≡ G/c², we may easily derive the continuity equation by taking the time derivative of Eq. (2.4) and substituting in Eq. (2.3):

ε̇ + 3(ȧ/a)(ε + p) + ε(K̇/K) + Λ̇/(8πK) = 0. (2.5)

It reduces to the standard continuity equation when K and Λ are held constant. And since the Einstein field equations require that the covariant derivative of the energy-momentum tensor Tµν be zero, we can interpret Eq. (2.5) as comprising two continuity equations [10], viz.

ε̇ + 3(ȧ/a)(ε + p) = 0, (2.6)

8πεK̇ + Λ̇ = 0. (2.7)

This separation simplifies the solution of the field equations (Eqs. 2.3 and 2.4). Eq. (2.6) yields the standard solution for the energy density, ε = ε0 a^(−3(1+w)). Here w is the equation of state parameter defined by p ≡ wε, with w = 0 for matter, 1/3 for radiation and −1 for Λ.
As has been explicitly delineated by Magueijo in several of his papers (e.g. Magueijo 2000), this approach is relativistic but not generally Lorentz invariant. Strictly speaking, we should have used the Einstein-Hilbert action to obtain the correct Einstein equations with variable G and c. Thus, one may consider the current formulation quasi-phenomenological.
Since the expansion of the universe is determined by H(t) ≡ ȧ/a, it is natural to assume the time dependence of any time-dependent parameter to be proportional to ȧ/a (the so-called Machian scenario; Magueijo [6]). Let us therefore write

K̇/K = k(ȧ/a), Λ̇/Λ = l(ȧ/a), Ḣ/H = m(ȧ/a), (2.8)

which integrate to

K = K0 aᵏ, Λ = Λ0 aˡ, H = H0 aᵐ. (2.9)

Here k (not the same as the curvature k in Eq. (2.1)), l and m are the proportionality constants, and the subscript zero indicates the parameter value at present (a = a0 ≡ 1). With this substitution in Eq. (2.4), every term becomes a power of the only time-dependent parameter a. Comparing the exponents of all the terms, we may write 2m = k − 3 − 3w = l, and with w = 0 for matter, we have 2m = k − 3 = l. Thus, if we know k, we know l and m.
We can now obtain a closed analytical solution of Eq. (2.10), ȧ = H0 a^(1+m), since a(t0) ≡ 1:

a(t) = [1 + (1 + q)H0(t − t0)]^(1/(1+q)), (2.11)

with the deceleration parameter q ≡ −äa/ȧ² given by

q = −(1 + m) = (1 − k)/2. (2.12)

It may be noticed that q does not depend on time, i.e. q0 = q. As the radiation energy density is negligible at present, we need concern ourselves only with the matter solutions, i.e. with w = 0.
The deceleration parameter q0 has been analytically determined on the premise that the expansion of the universe and the tired light phenomenon are jointly responsible for the observed redshift, especially in the limit of very low redshift [36]. One could see it as if the tired light effect is superimposed on the Einstein-de Sitter matter-only universe rather than on the cosmological constant [37]. By equating the expressions for the proper distance of the source of the redshift for the two, one gets q0 = −0.4. Then from Eq. (2.12) we get k = 1.8, and also l = −1.2 and m = −0.6. We thus have from Eq. (2.8) K̇/K = 1.8H, Λ̇/Λ = −1.2H and Ḣ/H = −0.6H.
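The chain from q0 to the three exponents is short enough to check numerically; the sketch below (using the symbol names k, l, m as above) evaluates k = 1 − 2q0 and the matter-era relations 2m = k − 3 = l.

```python
# Exponent relations of Sec. 2 (matter-dominated, flat universe):
# q = (1 - k)/2 (Eq. 2.12) and 2m = k - 3 = l (from Eq. 2.4).
q0 = -0.4          # deceleration parameter from the tired-light matching [36]
k = 1 - 2 * q0     # K = G/c^2 scales as K0 * a**k
l = k - 3          # Lambda scales as Lambda0 * a**l
m = (k - 3) / 2    # H scales as H0 * a**m
print(k, l, m)     # prints: 1.8 -1.2 -0.6
```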

Varying G and c Formulation
Having determined the value of k = 1.8, and since the Hubble parameter is defined as H = ȧ/a, we may write from Eqs. (2.8) and (2.9)

K = K0 a^1.8 and K̇/K = 1.8H. (3.1)

Since K ≡ G/c², we may also write this explicitly as

Ġ/G − 2ċ/c = 1.8H. (3.2)

The lunar laser ranging (LLR) data analysis provides the limit Ġ/G = (7.1 ± 7.6) × 10⁻¹⁴ yr⁻¹ [17], which is considered to be about three orders of magnitude lower than was expected [7,8,38]. However, the LLR data analysis is based on the assumption that the distance measuring tool, i.e. the speed of light, is constant and non-evolutionary. If this constraint is dropped, the finding is very different.
As is well known [39], a time variation of G should show up as an anomalous evolution of the orbital period T of astronomical bodies as expressed by Kepler's third law:

T² = 4π²a³/(GM), (3.3)

where a is the semi-major axis of the orbit, G is the gravitational constant and M is the mass of the bodies involved in the orbital motion considered. Taking the time derivative of Eq. (3.3), dividing by T² and rearranging, we get

Ġ/G = 3ȧ/a − 2Ṫ/T. (3.4)

However, since LLR measures the time of flight of the laser photons, the semi-major axis is determined in light-time units, a = cτ with τ the one-way light time. Rewriting Eq. (3.4) in these terms,

Ġ/G − 3ċ/c = 3τ̇/τ − 2Ṫ/T. (3.5)

It is therefore the right-hand side of Eq. (3.5) that is determined from the LLR data analysis [17] to be (7.1 ± 7.6) × 10⁻¹⁴ yr⁻¹, and not the right-hand side of Eq. (3.4).
Then, taking the right-hand side of Eq. (3.5) as zero and combining Ġ/G − 3ċ/c = 0 with Eq. (3.2), one can solve the two equations to get Ġ/G = 5.4H and ċ/c = 1.8H. It should be emphasized that both Ġ/G and ċ/c are positive: both G and c are increasing with time rather than decreasing, as is generally believed (e.g. refs. 7, 8, 40). This may be considered the most significant observational finding of cosmological consequence obtained just by studying the Earth-Moon system.
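The two constraints above form a 2×2 linear system; a quick numeric check (in units of H0):

```python
import numpy as np

# Eq. (3.2): Gdot/G - 2*cdot/c = 1.8  (from Kdot/K = 1.8H, K = G/c^2)
# Eq. (3.5) with the null LLR result: Gdot/G - 3*cdot/c = 0
A = np.array([[1.0, -2.0],
              [1.0, -3.0]])
rhs = np.array([1.8, 0.0])
Gdot_over_G, cdot_over_c = np.linalg.solve(A, rhs)
print(Gdot_over_G, cdot_over_c)   # 5.4 and 1.8, both positive (increasing)
```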

Redshift vs. Distance Modulus
The distance of a light-emitting source in a distant galaxy is determined by measuring its bolometric flux f and comparing it with a known luminosity L. The luminosity distance dL is defined by

dL = [L/(4πf)]^(1/2). (4.1)

In a flat, static universe the measured flux is related to the luminosity L through the inverse square relation f = L/(4πd²). However, this relation must be modified to take into account the flux losses due to the expansion of the universe through the scale factor a, the redshift z, and any other phenomena that can result in a loss of flux. The generally accepted flux loss mechanisms are as follows [41]: a. the increase in wavelength causes a flux loss proportional to 1/(1 + z); and b. in an expanding universe, the increase in detection time between two consecutive photons emitted from a source leads to a reduction of flux proportional to a, i.e. proportional to 1/(1 + z). Therefore, in an expanding universe the necessary flux correction is proportional to 1/(1 + z)². How does the distance compare with and without varying c? In a non-expanding universe the distance from the point of emission at time tₑ to the point of observation at time t0 is d = ∫ c dt taken from tₑ to t0, which for constant c = c0 reduces to d = c0(t0 − tₑ). With the varying c and the expansion of our model included, the luminosity distance works out to

dL = (1 + z)(c0/1.4H0)[1 − (1 + z)^(−1.4)]. (4.9)

This is the expression for the luminosity distance of the source to be used in determining its distance modulus.
Since the observed quantity is the distance modulus µ rather than the luminosity distance dL, we will use the relation

µ = 5 log10[(1 + z)(R0/1.4)(1 − (1 + z)^(−1.4))] + 25. (4.10)

Here R0 ≡ c0/H0 and all the distances are in Mpc.
We will compare the new model, hereafter referred to as the VcGΛ (variable c, G and Λ) model, with the standard ΛCDM model, which is the most accepted model for explaining cosmological phenomena and thus may be considered the reference model for all others. Ignoring the contribution of the radiation density at the current epoch, we may write the distance modulus µ for redshift z in a flat universe for the ΛCDM model as follows [42]:

µ = 5 log10[(1 + z)R0 ∫₀^z dz′/√(Ωm,0(1 + z′)³ + ΩΛ,0)] + 25. (4.12)

Here Ωm,0 is the current matter density relative to the critical density and ΩΛ,0 ≡ 1 − Ωm,0 is the current dark energy density relative to the critical density.
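For a numeric comparison of the two distance-modulus expressions, a minimal sketch of both (the function names are ours; parameter defaults are the fitted values quoted in Sec. 5):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light c0 in km/s

def mu_vcgl(z, H0=68.90):
    """VcGLambda distance modulus: analytic in z (Eq. 4.10)."""
    R0 = C_KM_S / H0                                    # Hubble distance, Mpc
    dL = (1 + z) * (R0 / 1.4) * (1 - (1 + z) ** -1.4)   # luminosity distance, Mpc
    return 5 * np.log10(dL) + 25

def mu_lcdm(z, H0=70.16, Om0=0.2854):
    """Flat LambdaCDM distance modulus: numeric integral (Eq. 4.12)."""
    R0 = C_KM_S / H0
    E = lambda zp: np.sqrt(Om0 * (1 + zp) ** 3 + 1 - Om0)
    I, _ = quad(lambda zp: 1 / E(zp), 0, z)
    return 5 * np.log10((1 + z) * R0 * I) + 25
```

At low redshift both reduce to dL ≈ c0 z/H0, which is why the two fitted H0 values in Sec. 5 come out comparable.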

Supernovae Ia z-µ Data Fit
We tested the VcGΛ model developed here to see how well it fits the best available supernovae Ia data [43] as compared to the standard ΛCDM model. The data fit is shown in Figure 1. The VcGΛ model requires only one parameter to fit all the data (H0 = 68.90 ± 0.26 km s⁻¹ Mpc⁻¹), whereas the ΛCDM model requires two parameters (H0 = 70.16 ± 0.42 km s⁻¹ Mpc⁻¹ and Ωm,0 = 0.2854 ± 0.0245).
The data used in this work is the so-called Pantheon sample of 1048 supernovae Ia in the range 0.01 < z < 2.3 [43]. To test the fitting and predictive capability of the two models, we divided the data into six subsets: a) z < 0.5; b) z < 1.0; c) z < 1.5; d) z > 0.5; e) z > 1.0; and f) z > 1.5. In addition, we considered the fits for the whole dataset. The models were parameterized with subsets a), b) and c). The parameterized models were then used to fit the data in the subsets containing z values higher than those of the parameterizing subset. For example, if the models were parameterized with data subset a) z < 0.5, then the models were fitted to the data subsets d) z > 0.5, e) z > 1.0 and f) z > 1.5 to examine the models' predictive capability. The Matlab curve fitting tool was used to fit the data by minimizing χ², and the latter was used for determining the corresponding χ² probability [44]. Here χ² is the weighted summed square of the residuals of µ:

χ² = Σᵢ wᵢ[µobs(zᵢ) − µ(zᵢ; H0, p1, p2, …)]²,

where the sum runs over the N data points, wᵢ = 1/σᵢ² is the weight of the i-th data point determined from the measurement error σᵢ in the observed distance modulus µobs(zᵢ), and µ(zᵢ; H0, p1, p2, …) is the model-calculated distance modulus dependent on the parameter H0 and any other model-dependent parameters p1, p2, etc. For the ΛCDM model considered here, p1 ≡ Ωm,0 and there is no other unknown parameter.
Figure 1. Supernovae Ia redshift z vs. distance modulus µ data fit using the VcGΛ model as compared to the fit using the ΛCDM model.

Table 1. Parameterizing and prediction table for the two models. This table shows how well a model is able to fit the data that was not used to determine the model parameters. The unit of R0 is Mpc and that of H0 is km s⁻¹ Mpc⁻¹. P% is the χ² probability in percent, used to assess the best model in each category; the higher the χ² probability, the better the model fits the data. R² is the square of the correlation between the response values and the predicted response values. RMSE is the root mean square error. The highest P% value in each category is shown in bold with the cell highlighted.

We then quantified the goodness-of-fit of a model by calculating the χ² probability for a model whose χ² has been determined by fitting the observed data with known measurement errors as above. This probability P for a χ² distribution with ν degrees of freedom (DOF), the latter being the number of data points less the number of fitted parameters, is given by

P = [1/(2^(ν/2) Γ(ν/2))] ∫ x^(ν/2 − 1) e^(−x/2) dx, integrated from χ² to ∞,

where Γ is the well-known gamma function, the generalization of the factorial function to complex and non-integer arguments. The lower the value of χ², the better the fit; but the real test of the goodness-of-fit is the χ² probability P: the higher the value of P for a model, the better the model's fit to the data. We used an online calculator to determine P from the input of χ² and DOF [45]. Our primary findings are presented in Table 1. The table is divided into four categories vertically and four horizontally. The vertical division is based on the parameterizing data subset indicated in the second row and discussed above. The parameters determined for each model are in the first horizontal category. The remaining horizontal categories show the goodness-of-fit parameters for subsets of higher redshift than those used for parameterizing the models; the table thus shows the relative predictive capability of the two models. The model cells with the highest probability in each category are shown in bold and highlighted.
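The χ² probability P above is exactly the survival function of the χ² distribution (the regularized upper incomplete gamma function), so it can be computed directly instead of via an online calculator [45]; a minimal sketch:

```python
from scipy.stats import chi2

def chi2_probability(chisq, n_points, n_fitted_params):
    """P that a chi-square this large arises by chance, DOF = N - Nparams."""
    dof = n_points - n_fitted_params
    return chi2.sf(chisq, dof)   # survival function = regularized upper gamma

# e.g. a one-parameter fit to the 1048-point Pantheon sample
# (the chi-square value here is illustrative, not taken from Table 1):
p = chi2_probability(1030.0, 1048, 1)
```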

Pioneer Anomaly
Having determined the values of Ġ/G and ċ/c, we can now proceed to calculate the anomalous acceleration towards the Sun of the Pioneer 10 and 11 spacecraft [20]. Since the gravitational pull of the Sun on the spacecraft decreases according to the inverse square law, Ġ cannot be expected to give a constant acceleration independent of the distance of the spacecraft. If the gravitational acceleration is denoted by g, one can easily work out, using the Newtonian relation g = GM/r², that ġ/g = −3.74 × 10⁻¹⁹ s⁻¹, which yields a negligible anomalous acceleration. Thus, we need to consider only the effect of ċ/c, from a different perspective. If the spacecraft is at a distance D0 from Earth, then the signal from Earth will have a two-way transit time Δt given by 2D0 = c0Δt, assuming c0 as the speed of light. But if the speed of light is evolving as ċ/c = 1.8H0 near t = t0, i.e. as c = c0 e^(1.8H0(t − t0)) during the transit time, then the actual transit time will be shorter than Δt (since c > c0 for t > t0). Because of the shorter actual transit time, an observer would consider the spacecraft to be nearer to Earth than it actually is, and would thus infer a deceleration of the spacecraft due to some unaccounted cause.
We can write the proper distance of the spacecraft Dp and its apparent distance Da, to first order in H0Δt, as

2Dp = c0Δt(1 + 0.9H0Δt) and Da = c0Δt/2,

and since 1.8H0Δt ≪ 1,

Da = Dp − 0.45c0H0(Δt)².

Differentiating twice with respect to Δt gives the apparent acceleration −0.9c0H0 = −6.129 × 10⁻¹⁰ m s⁻², and since it is negative, it is directed towards the observer on Earth. Out of the (7.69 ± 1.17) × 10⁻¹⁰ m s⁻² anomalous acceleration of Pioneer 10 and 11 towards the Sun (truly towards Earth), we are thus able to account analytically for 6.13 × 10⁻¹⁰ m s⁻², leaving only (1.56 ± 1.17) × 10⁻¹⁰ m s⁻² as the anomaly.
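Numerically, assuming the apparent acceleration reduces to −0.9c0H0 (consistent with the quoted −6.129 × 10⁻¹⁰ m s⁻²) and taking H0 = 0.716 × 10⁻¹⁰ yr⁻¹ as used in Sec. 7:

```python
SECONDS_PER_YEAR = 3.156e7
c0 = 2.998e8                        # speed of light, m/s
H0 = 0.716e-10 / SECONDS_PER_YEAR   # Hubble constant, converted 1/yr -> 1/s

a_pioneer = -0.9 * c0 * H0          # apparent acceleration, m/s^2 (towards Earth)
residual = 7.69e-10 + a_pioneer     # unexplained part of the Pioneer anomaly
```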
It should be mentioned that Kopeikin [22] obtained essentially the same result and explained it as due to the cosmological effect of the quadratic divergence between the electromagnetic and atomic time scales governing, respectively, the propagation of the radio waves in the Doppler tracking system and the atomic clock on Earth. However, his approach is not conducive to explaining the other two anomalies.

Moon's Eccentricity Anomaly
The eccentricity e of the orbit of the Moon may be written as [46]

e = √(1 + 2εₛh²/µ²). (7.1)

Here εₛ = −µ/2aₘ is the specific orbital energy, µ = G(Mᴇ + Mᴍ) is the gravitational parameter of the Earth-Moon system, h is the specific relative angular momentum determined from r × v, aₘ is the semi-major axis of the orbit, Mᴇ is the mass of the Earth, Mᴍ is the mass of the Moon, r and v are the radius and velocity vectors of the Moon, and mᵣ = MᴇMᴍ/(Mᴇ + Mᴍ) is the reduced mass. Taking |r| = r and assuming v normal to r, we may rewrite Eq. (7.1) accordingly and take its time derivative with varying G and c to obtain the secular rate ė of the eccentricity (Eq. (7.6)). Since all the parameters are expressed at the current time, the time t in the denominator must be expressed in terms of the Hubble time 1/H0. However, it is better to write t = sH0⁻¹, where (1 − s) is a small factor, very close to 0, that may be considered to correct for the approximations made in our model. For s = 1, the dimensionless combination eė/H0 = 0.0285715. Taking H0 = 0.716 × 10⁻¹⁰ yr⁻¹ and e = 0.0549 for the Moon, we get ė = 37 × 10⁻¹² yr⁻¹. This is about twice the original value of the anomalous rate of eccentricity increase. The value determined is very sensitive to the value of the parameter s. We have therefore plotted the dimensionless eccentricity variation eė/H0 against s in Figure 2. Near s = 1 it can be approximated by the expression

eė/H0 = 0.0285714 + 2.65306(s − 1). (7.7)

There are three values of ė that are significant here: a) ė = (16 ± 5) × 10⁻¹² yr⁻¹, originally estimated by Williams and Dickey in 2003 [23]; it gives s = 0.993855 ± 0.001445. b) ė = (9 ± 3) × 10⁻¹² yr⁻¹, the value updated using more data and a 'better' tidal effect model by Williams and Boggs in 2009 [24]; it gives s = 0.991832 ± 0.000867. c) ė = (5 ± 2) × 10⁻¹² yr⁻¹, the value updated with even more data and an 'even better' tidal effect model by Williams et al. in 2014 [25]; it gives s = 0.990676 ± 0.000578. All the values of s are very close to 1, indicating that our model is a very good approximation to the exact solution of the Einstein field equations, at least locally, with variable G and c. Even the lower and negative values of ė derived by Williams and Boggs in 2016 [26] can easily be accommodated with this approach. The question remains: were these authors correcting a deficiency of the tidal models, or trying to prove that there could be no other physical cause for the anomaly?

Astronomical Unit (AU) Anomaly
The orbit of the Earth around the Sun is Keplerian and thus governed by Eq. (3.3). A radiometric measurement will therefore yield a null result in our approach using evolutionary G and c. However, the eccentricity evolution is equally valid for the Moon and the Earth. Thus, applying Eq. (7.6) to the Earth's orbit, and assuming s = 1 and e = 0.0167, we get ė = 122.5 × 10⁻¹² yr⁻¹. This can easily be translated into an AU increase as follows. The semi-major axis a and semi-minor axis b of the orbit are related by b = a√(1 − e²). Suppose now that the eccentricity e increases by Δe to e′ in a time period Δt. Then e′ = e + Δe and e′² = e² + 2eΔe when we ignore higher-order terms in Δe, and the corresponding increase in the AU follows as ΔAU, where we have approximated b = a since e² ≪ 1. Taking a = 1.496 × 10¹¹ m and using Eq. (7.6) for ė with s = 1, we get dAU/dt = 0.77 m cy⁻¹ against its measured value of 1.5 m cy⁻¹. As can be seen from Eq. (7.7) and Figure 2, had we taken s = 1.010 instead of 1, we would have obtained the measured value. The reason could be the same as discussed at the end of the previous section. Alternatively, there may be other phenomena contributing to the anomalous AU increase.
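A numeric sketch of the step above, assuming the AU increase scales as dAU/dt = (3/2)a e² ė (a form inferred from the numbers quoted here, since the intermediate equation is not reproduced in the text):

```python
H0 = 0.716e-10    # Hubble constant, 1/yr
a_earth = 1.496e11    # semi-major axis of Earth's orbit, m
e_earth = 0.0167      # eccentricity of Earth's orbit

edot = 0.0285714 * H0 / e_earth     # Eq. (7.7) with s = 1: ~122.5e-12 /yr
dAU_dt = 1.5 * a_earth * e_earth**2 * edot   # assumed scaling, m/yr
print(dAU_dt * 100)                 # ~0.77 m per century
```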
It should be mentioned that the AU has recently been redefined (e.g. ref. 47), and just by definition the AU anomaly has been eliminated. One has to resort to the old definition of the AU to appreciate the AU anomaly and its resolution.

Planck Units and Hubble Units
We will now review the Planck and Hubble units of time, length, mass, charge and temperature, and then explore how the dimensionless constant relating the two sets of units evolves in time. The Planck units are constructed from c, G, ħ, the permittivity of space ε0 and the Boltzmann constant kB; the corresponding Hubble units replace ħ with the Hubble constant H0 (for example, the Hubble time is 1/H0 while the Planck time is √(ħG/c⁵)). If we divide any of the Hubble units by the corresponding Planck unit we always get the same dimensionless quantity, say D, and each Hubble unit is about 61 orders of magnitude larger than the corresponding Planck unit. Taking the time derivative of this ratio and dividing by itself, and using ċ/c = ħ̇/ħ = 1.8H0, Ġ/G = 5.4H0 and Ḣ/H = −0.6H0, we get Ḋ/D = 1.5H0. This means that the dimensionless D that relates the Hubble units and the Planck units is increasing in time.
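A numeric sketch of the ratio D for the time units, and of its logarithmic rate in units of H0, assuming the scalings derived earlier:

```python
import math

hbar, G, c = 1.0546e-34, 6.674e-11, 2.998e8   # SI values
H0 = 2.27e-18                                 # 1/s (~70 km/s/Mpc)

t_planck = math.sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
t_hubble = 1 / H0                       # Hubble time, s
D = t_hubble / t_planck                 # ~1e61; the same D for every unit pair

# Logarithmic rate in units of H0, from D = c**2.5 / (H * sqrt(hbar * G)):
# Ddot/D = 2.5*(cdot/c) - 0.5*(hdot/hbar) - 0.5*(Gdot/G) - Hdot/H
Ddot_over_D = 2.5 * 1.8 - 0.5 * 1.8 - 0.5 * 5.4 - (-0.6)   # = 1.5
```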

Discussion and Conclusion
As should be expected, the two-parameter ΛCDM model is able to fit any dataset better than the one-parameter VcGΛ model. What is unexpected is that, when parameterized with relatively low redshift data, the VcGΛ model fits the higher redshift data better than the ΛCDM model. This shows that the second parameter in the latter, while fitting a limited dataset as well as possible, compromises the model fit for data not used in the parameterization. This means that the ΛCDM model does not have as good a predictive capability (i.e. the capability to fit data not included in determining the model parameters) as the VcGΛ model, despite having twice as many parameters. In addition, the VcGΛ model has an analytical expression for the distance modulus µ, unlike the ΛCDM model, for which it must be evaluated numerically. One may notice that while the R0 (and hence H0) values remain relatively stable as the parameterizing dataset includes higher and higher redshift values, varying by no more than 0.35%, the variation in Ωm,0 is up to 9.4%, i.e. 27 times larger. This suggests that the Ωm,0 parameter, and hence Λ through ΩΛ,0, is an artificially introduced parameter for fitting the data rather than being fundamental to the ΛCDM model. In contrast, Λ is an integral part of the VcGΛ model. Since K (≡ G/c²) and Λ are related through Eq. (2.7), one can easily derive that the Λ term contributes 60% in the VcGΛ model against 70% in the ΛCDM model.

Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 5 April 2019 | doi:10.20944/preprints201904.0064.v1