1. Introduction
Constructing a theory that successfully unites the ideas behind general relativity and quantum mechanics is expected to yield a new understanding of physical reality. Currently, there are conceptual and technical problems that one is forced to face when thinking of such a fusion. However, progress in observational cosmology seems to provide new puzzles along with the necessary guidance for the development and verification of the relevant physical models [1,2,3].
Understanding the small positive value of the cosmological constant $\Lambda$ and the associated energy density $\rho_\Lambda$ is one of the biggest problems in modern physics, qualitatively and quantitatively. Currently, the Quantum Field Theory (QFT) estimate of $\rho_\Lambda$ is about 120 orders of magnitude bigger than the actually measured value [1,2,3,4,5]. There is a big variety of models attempting to explain the value of $\Lambda$. Most models consider dynamical mechanisms to generate $\rho_\Lambda$ as the zero-point energy (vacuum energy) of a matter field. In the 1980s, there were a few very appealing general arguments that $\Lambda$ should be zero or can be viewed as an integral of the motion; some arguments were based on quantum gravity [6], some on supersymmetry [4], or reparametrization invariance [5]. However, experimental data have shown that $\Lambda$ is actually small and positive:
$$\rho_\Lambda = \Omega_\Lambda\,\rho_c, \qquad \rho_c = \frac{3H^2c^2}{8\pi G}, \qquad (1)$$
which is about $10^{120}$ times smaller than the Planck-scale energy density:
$$\rho_P = \frac{c^7}{\hbar G^2}. \qquad (2)$$
Here, $\rho_c$ is the critical energy density, $H$ is the Hubble expansion rate, and $\Omega_\Lambda \approx 0.7$ based on observations [2,3,7]. The critical energy density is a naturally important quantity—it separates the open from the closed cosmological solutions within the cosmology based on the Friedmann–Lemaître–Robertson–Walker (FLRW) metric.
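The "about 120 orders of magnitude" discrepancy can be checked numerically. The following sketch uses approximate CODATA/Planck-era values for the constants (the specific numerical inputs are assumptions, not values quoted in this paper) and compares the critical energy density $\rho_c = 3H^2c^2/(8\pi G)$ with the Planck energy density $\rho_P = c^7/(\hbar G^2)$:

```python
import math

# Physical constants (SI units); approximate values, assumed for illustration.
c = 2.998e8                 # speed of light, m/s
G = 6.674e-11               # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34            # reduced Planck constant, J s
H0 = 70 * 1000 / 3.086e22   # Hubble constant ~70 km/s/Mpc, in s^-1
Omega_L = 0.7               # dark-energy density parameter (approximate)

# Critical energy density rho_c = 3 H^2 c^2 / (8 pi G), in J/m^3
rho_c = 3 * H0**2 * c**2 / (8 * math.pi * G)
rho_Lambda = Omega_L * rho_c

# Planck energy density rho_P = c^7 / (hbar G^2), in J/m^3
rho_P = c**7 / (hbar * G**2)

orders = math.log10(rho_P / rho_Lambda)
print(f"rho_Lambda ~ {rho_Lambda:.2e} J/m^3")
print(f"rho_P      ~ {rho_P:.2e} J/m^3")
print(f"discrepancy: ~{orders:.0f} orders of magnitude")
```

The computed discrepancy comes out close to 123 orders of magnitude, consistent with the often-quoted "about 120" figure.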
Seeing the enormous discrepancy above is just the numerical manifestation of a fundamental problem in our understanding of nature. Thus, looking at different viewpoints and lines of reasoning can be enlightening. Let us briefly sketch a few of them. Of course, one shall start with the source: Einstein and his "biggest blunder". As the legend has it, Einstein introduced the cosmological constant into his equation in order to counteract the gravitational pull of the matter encoded by $T_{\mu\nu}$—the stress–energy–momentum tensor. $T_{\mu\nu}$ determines the geometry of the space-time via the metric tensor $g_{\mu\nu}$, which is a solution to the Einstein equation, and the associated metric connection that provides the covariant derivative of various tensors. In Einstein's GR, the metric tensor and the connection are in a special relation and are utilized to build the Einstein tensor $G_{\mu\nu}$ with the help of the scalar curvature $R$ and the Ricci tensor $R_{\mu\nu}$ constructed from the Riemann curvature tensor $R^{\lambda}{}_{\mu\nu\sigma}$.
The Einstein’s equation
relies on the matter conservation laws
and the uniqueness of the Einstein tensor with the same property
; then, the constant of proportionality
is determined from the Newtonian limit of the theory. According to this reasoning, the Einstein’s equations are uniquely determined and illustrate the idea that the matter distribution and motion determine the metric tensor
and thus the structure of the space-time. In particular, when there is no matter and only vacuum
, the solution corresponds to Ricci flat spacetime since
and thus
. However, if there is matter, then, depending on the initial conditions, the solutions to the Einstein’s GR equations will correspond to systems that re-collapse or expand forever. Thus, the desire for a static solution prompted the introduction of a term
into the Einstein’s GR equation
, where
is a constant since the new term has to obey the conservation laws already satisfied by
and
along with the postulate
. When
is introduced in the model, as illustrated by the above line of reasoning, the result is a phenomenological theory with a constant parameter
that has to be determined by experiment and observations. Furthermore, if one is to estimate
by using dimensional reasoning based on the relevant system parameters such as
c,
G, and the size of the Universe via the use of the Hubble constant
, then one obtains an energy density (
3) that is very close to the experimental value above (
1):
Here, the numerical factor of
can be justified as the solid angle being part of the volume measure for the energy density in 3D space. Therefore, by inspection of (
1) and (
3), one can conclude that
and the Hubble constant
should be one of the fundamental constants of nature. It is important that the expression above is based on simple dimensional reasoning. However, one can get a very similar result (
) by considering the LFRW equations that are derived from the Einstein field equations with cosmological constant by using the Robertson-Walker metric ansatz and neglecting the curvature and matter energy–density terms. Note that such approximations are standard handwaving estimates in building simple models of nature. The Robertson–Walker metric is the most general homogeneous and isotropic metric (up to time-dependent conformal factor), thus consistent with the cosmological principle that the Universe is homogeneous and isotropic at very large scales. One can neglect the curvature and matter energy–density terms since there seems to be very little matter in the Universe at very large scales and the Universe seems to be large enough to appear practically flat. In this respect, the cosmological constant is a property of an expanding, flat, and matter empty Universe, which except for its expanding property is what one would call a classical vacuum.
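The reduction mentioned above can be sketched explicitly. Starting from the first Friedmann equation and dropping the matter and curvature terms (a standard textbook reduction, reproduced here for the reader's convenience):

```latex
H^2 \;=\; \frac{8\pi G}{3c^2}\,\rho \;-\; \frac{kc^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
\;\;\xrightarrow{\;\rho\to 0,\;k\to 0\;}\;\;
H^2 = \frac{\Lambda c^2}{3}
\;\Longrightarrow\;
\Lambda = \frac{3H_0^2}{c^2},
\qquad
\rho_\Lambda = \frac{\Lambda c^4}{8\pi G} = \frac{3H_0^2 c^2}{8\pi G},
```

which coincides with the dimensional estimate (3).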
An alternative point of view is to reason that, even if one neglects the contribution of the matter and electromagnetic radiation to the stress–energy–momentum tensor, thus setting $T_{\mu\nu} = 0$ as for a classical matter vacuum, there is still energy and momentum within the space-time due to the possible presence of gravitational waves. Therefore, there should be a non-zero source on the right-hand side of the Einstein equation that should be related to the presence of a metric field $g_{\mu\nu}$. This way, the parameter $\Lambda$ of proportionality has something to do with the stress–energy–momentum tensor for gravity in the space-time of the classical matter vacuum. A solution to such an equation was introduced by de Sitter and illustrated that matter may not be the only source of gravity, and thus wrinkles in the space-time may not be due to matter only. Furthermore, such solutions illustrate that vacuum solutions do not have to be Ricci flat but instead can correspond to a non-zero constant scalar curvature $R = 4\Lambda$. This way, one extends minimally the Einstein GR framework, in the sense that space-time can be spatially flat, homogeneous, and isotropic, but with a non-zero constant scalar curvature. The possibility of a non-flat universe has recently received observational support [8]. Thus, $\Lambda$ can be viewed as a fundamental constant which is somehow related to the energy density of the vacuum. Furthermore, since the Einstein original equation, $G_{\mu\nu} = \kappa T_{\mu\nu}$, now reads as $G_{\mu\nu} + \Lambda g_{\mu\nu} = \kappa T_{\mu\nu}$, the stress–energy–momentum tensor can be viewed as a tensor that contains a (non-classical) non-zero vacuum part:
$$T_{\mu\nu} = T^{(m)}_{\mu\nu} + T^{(\mathrm{vac})}_{\mu\nu}, \qquad T^{(\mathrm{vac})}_{\mu\nu} = \rho_{\mathrm{vac}}\, g_{\mu\nu}. \qquad (4)$$
Due to the signature of the metric $(+,-,-,-)$, such a vacuum has a very special equation of state, a relationship between pressure and energy density, $p = -\rho$, that behaves thermodynamically well upon adiabatic expansion and compression. This point of view implies that $\rho_{\mathrm{vac}}$ is constant. Thus, $\rho_{\mathrm{vac}}$ is related to the cosmological constant in a very simple way, $\rho_{\mathrm{vac}} = \Lambda c^4/(8\pi G)$ [9]. Unfortunately, the scale of the energy density that one can deduce based on the relevant fundamental constants, like $c$, $G$, and the Planck constant $\hbar$, then results in $\rho_P$ (2), which is about 120 orders of magnitude bigger than the actually measured value of $\rho_\Lambda$ (1).
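The standard relation $\rho_{\mathrm{vac}} = \Lambda c^4/(8\pi G)$ can be inverted to estimate $\Lambda$ itself from the observed dark-energy density. A minimal numerical sketch, assuming $\Omega_\Lambda \approx 0.7$ and $H_0 \approx 70$ km/s/Mpc (approximate values, not quoted from this paper):

```python
import math

c = 2.998e8                  # speed of light, m/s
G = 6.674e-11                # Newton's constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22    # Hubble constant, s^-1
Omega_L = 0.7                # dark-energy density parameter (assumed)

# Observed dark-energy density rho_vac = Omega_L * rho_c, in J/m^3
rho_vac = Omega_L * 3 * H0**2 * c**2 / (8 * math.pi * G)

# Invert rho_vac = Lambda c^4 / (8 pi G) for Lambda, in m^-2
Lam = 8 * math.pi * G * rho_vac / c**4

print(f"rho_vac ~ {rho_vac:.1e} J/m^3")
print(f"Lambda  ~ {Lam:.1e} 1/m^2")
```

The result is of order $10^{-52}\ \mathrm{m}^{-2}$, the commonly quoted magnitude of the observed cosmological constant.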
The above discussion demonstrates that "old-fashioned" dimensional reasoning provides a reasonable value for $\Lambda$ and the corresponding energy density $\rho_\Lambda$, while the quantum field theory based reasoning results in an enormously different energy-density value. The smallness of the observed cosmological constant and the related energy density is one of three puzzling facts. If the cosmological constant is really due to quantum effects, then the other puzzle is why quantum energy fluctuations are not manifesting at the scale of applicability of general relativity. The third question is related to the comparable value of $\rho_\Lambda$ to the overall matter density $\rho_m$, which is about a factor of three given that $\Omega_\Lambda/\Omega_m \approx 0.7/0.3$. It seems that the last two questions could be addressed well within the unimodular gravity approach to Einstein's equations [10], while the smallness of $\Lambda$ could be justified in a few possibly equivalent ways: via the Universe as a Black Hole idea, where the large entropy in the Universe results in a small $\Lambda$ (5), or via the quantum corrected Raychaudhuri equation [11], or as a second-order quantum correction to the Newtonian gravity [12,13], or as related to the relevant observables at the infrared stable fixed point of the 4D gravity at large distances [14]. It may not be a surprise that all of these could be equivalent, but, to the best of our knowledge, no one has been able to tie the knot on their equivalence and to derive the observed value unambiguously.
Finding an explanation of the value of $\Lambda$ has been the focus of many research papers and projects. A somewhat favorite approach nowadays is based on the weak anthropic principle, which is consistent with the above demonstrated value. Thus, the Hubble constant $H_0$ is a parameter of the system, while the fundamental constants $c$, $G$, and $\hbar$ are really fundamental. This way, the Planck length and time are "unique" once the Planck system of units is chosen, while the Hubble constant is somehow related to the details of how much stuff is in the system and what the initial conditions of the relevant processes are. Systems near equilibrium often have a characteristic scale that is a result of a few different competing factors. In physics, this is often reflected by the kinetic and potential energy of a system or the balance of various forces. In this respect, a reasonable approach that can link the Planck scale to the observed size of the Universe should then deliver a resolution to the Cosmological Constant Problem (CCP) above and may shed light on the problem of quantum gravity.
It is natural to expect that the value of the cosmological constant would be determined or justified within Quantum Cosmology (QC), or at least that QC will provide some initial understanding of why it is so small. The problem is that QC involves path integrals over 4D-geometries. While path integrals seem to make sense in physics, they are not yet well-defined mathematical objects. It seems, however, that progress in causal dynamical triangulations [15,16] may provide a method to compute such complicated path integrals over 4D-geometries.
The focus of the current paper is the construction of a partition function over a characteristic-size variable that should be equivalent to a path integral over 4D-geometries when all other characteristics of the corresponding universe are integrated out. Such research is motivated by the progress in causal dynamical triangulations [15,16] and the possibility to numerically simulate various space-time geometries for the purpose of testing quantum cosmology models. Thus far in this paper, the Cosmological Constant Problem (CCP) has been introduced, and a few of the directions that researchers are exploring as resolutions to this problem have been briefly mentioned. The next section outlines five of the main categories as discussed by Weinberg [5], with a particular focus on the Weak Anthropic Principle and Quantum Cosmology, as well as an additional category of geometry related/motivated approaches for justifying the value of $\Lambda$. Then, the mathematical structure of Quantum Cosmology is briefly discussed, along with how the cosmological constant and the matter vacuum energy can be absorbed into an effective cosmological constant. Once the Euclidean partition function over geometries has been constructed, one can map the path-integral partition function onto a one-dimensional integral over the characteristic size of each possible universe—the Multiverse Partition Function. Finally, the implications of this partition function and possible interpretations of the characteristic size of a universe are discussed.
2. The Three Main Model Groups and the Seven Principal Model Categories
The Planck scale is a well-accepted scale where something new and interesting should start showing up; however, it may not be applicable as a scale for estimating the vacuum energy–density fluctuations relevant to large-scale cosmological phenomena. It may well be for the same reason that the matter density of a material is correctly estimated as a macroscopic quantity; if one uses sub-atomic scale measurements, then the density of the material would exhibit significant variations. Thus, without a suitable procedure/model for extrapolating from one scale to another, the results may be drastically incorrect. As a result, there are many different approaches based on various ideas as explanations of the value and the origin of the non-zero cosmological constant.
In a 1989 review paper, Weinberg discussed five main categories [5]: (a) Super-field and super-potential approaches based on Supersymmetry, Supergravity, and Superstrings; (b) Anthropic considerations associated with processes in the Universe related to the observed age, mass density, and other astrophysical observations; (c) Adjustment/Tuning mechanisms based on a scalar field and its evolution over the history of the Universe; (d) Changing Einstein GR/Modifying Gravity, either by restricting the dynamical degrees of freedom associated with the metric tensor and/or the relevant symmetry transformations, or by adding new terms to the Action that result in a cosmological constant term as a constant of integration; (e) Quantum Cosmology, as pertaining to estimating the probability of observing a specific field configuration and relevant field expectation values built on appropriate effective actions.
Each of these five categories can be grouped further as either high-energy/short-distance scale phenomena (a) and (c), long-time/large-distance scale phenomena (b) and (d), or presumably all scales (c, d, e). While the models from the first two groups are self-evident and are expected to be extendable into the all-scales group, these models were originally devised, developed, and tested, or expected to hold, in their primary domain. Models like (c, d, e) are built with the aim and hope of being applicable at all scales.
Weinberg's five categories are the natural classification based on where the Occam's razor principle could lead us in terms of the minimal and sufficient change towards a resolution of the CCP. The minimal and sufficient change may come from an extension to the Einstein GR theory of gravity, category (d) above, or to the theory behind the matter fields that generate the stress–energy tensor, category (a) above; the new theory, however, should be consistent with observations about our Universe described in category (b). The successful model will probably have an effective description in terms of category (c). Finally, if the model is to be a bridge towards fusing the principles of general relativity and quantum mechanics that is applicable to the Universe at large scales, then it should in some sense be related to quantum cosmology or may define what this category (e) means.
Models that have a mechanism of connecting short- to long-scale phenomena are known in the literature, for example, T-duality in string theory, or models that are the same at any scale, i.e., fractal or conformal geometry [17,18]. Since the physics deduced from astronomical observations at large distances is expected to be somehow related to the physics that can be studied at short distances, for example in any on/near Earth laboratory, such a scale mapping procedure should play a role in models that may resolve the CC problem. In this respect, models that exhibit scale invariance, conformal symmetry, T-duality, or fractal geometry should be another (sixth) category of models—(g) geometric models [17,18,19,20,21]. Of course, there is always the possibility to consider models that don't fit in any of these six categories—the (seventh) category type—(f) further futuristic/exotic models.
Relatively recently, Weinberg's classification was discussed and extended further in a review paper by Li et al. [22] by adding more symmetry-based models or string-theory motivated models that either expand the matter sector, or modify gravity, or add further examples to the geometry related model category (g) discussed in the paragraph above; for example, Wetterich's scaling invariance via a dilatation field, or the Blackhole Self-Adjustment model that can be viewed as a variation on the theme of Blackhole Cosmology models. The Holographic principle category in Li's paper has a significant overlap with the geometry related model category (g), which can also absorb the Back-Reaction category in Li's paper. Some of the Phenomenological models (Quintessence, K-essence, and so on) could also be viewed as members of the category (c) adjustment/tuning mechanisms by Weinberg, or the (f) category of futuristic models that expand the matter field models either in the particle physics sector or in the type of fluid(s) (Chaplygin gas, viscous fluid, super-fluid condensation) involved in the relevant Friedmann equations.
The fact that researchers have been exploring various models, like string theory tuning mechanisms for bouncing brane-world scenarios, Blackhole self-adjustment, the Holographic principle, and Back-Reaction models, adds weight to the importance of the category (g) of geometry related/motivated approaches for justifying the value of $\Lambda$. If a model has a way to relate the short-scale $r$ processes to large-scale $R$ processes, i.e., as the T-duality in string theory, there will be a fundamental constant of dimension length squared, such that $rR = \ell^2$, and therefore an energy density associated with it ($\rho \sim \hbar c/\ell^4$). If this scale is the string tension, as in string theory, then one still has an enormous energy density of the string vacuum compared to the observed value. In a similar way, the Black-Hole related reasoning [22] results in an energy–density expression ($\rho = 3c^4/(8\pi G R^2)$) that has the correct numerical factor of 3, which gives support to the Blackhole Cosmology ideas where our Universe is considered to be essentially the interior of a huge Blackhole [20,21,23]. Thus, this idea could be used to explain the value of the observed energy density, if one uses the future event horizon $R$, and thus the remarkable agreement of the dimensional argument (3) with the observational result (1). If one considers Blackhole Cosmology, then there is an alternative explanation of the cosmological constant $\Lambda$ as related to the entropy of the Universe rather than the energy density of the vacuum. Such an approach can use the entropy of a de Sitter Space as derived by Gibbons and Hawking [24],
$$S_{\mathrm{dS}} = \frac{3\pi k_B}{\Lambda\,\ell_P^2}, \qquad (5)$$
and match it to the corresponding expression for a blackhole, $S_{\mathrm{BH}} = k_B A/(4\ell_P^2)$, in units where $c = \hbar = 1$. Here, $k_B$ is the Boltzmann constant, $\ell_P$ is the Planck length, and $A = 4\pi R^2$ is the area of the event horizon $R$ of the blackhole. The result is then $\Lambda = 3/R^2$, where $R$ is the characteristic size of the de Sitter spacetime with a cosmological constant $\Lambda$ [25]. This justifies the simple dimensional results discussed after (3) as related to the entropy of the Hubble horizon when employing results from path integral quantum gravity considerations [24]. This relation between $\Lambda$ and $R$ is expected to be influenced minimally by the specific details of the final quantum gravity theory, because the relations discussed concern the entropy of the de Sitter spacetime, which is the asymptotic limit of our Universe. Thus, while (5) could provide a new viewpoint on the smallness and the value of $\Lambda$, the unimodular gravity approach to Einstein's equation [10] could provide the resolution to the other two puzzles about the cosmological constant. Alternatively, the smallness of $\Lambda$ could be related to second-order quantum corrections to the equations of motion, either via the quantum corrected Raychaudhuri equation [11], or as a second-order quantum correction to the Newtonian gravity via the use of the quantum hydrodynamic approach [12,13]. Such an approach is very appealing due to its economy in terms of exotic new ingredients and as a member of the category (d) above, where the cosmological constant is related to the quantum vacuum fluctuations that induce effectively conformal effects via the non-zero trace of the corresponding energy–momentum matter tensor.
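The standard de Sitter relation $\Lambda = 3/R^2$ can be checked against the Hubble scale numerically. A brief sketch, assuming $\Lambda \approx 1.1 \times 10^{-52}\ \mathrm{m}^{-2}$ (the order of magnitude implied by observations) and $H_0 \approx 70$ km/s/Mpc:

```python
import math

c = 2.998e8                  # speed of light, m/s
H0 = 70 * 1000 / 3.086e22    # Hubble constant, s^-1
Lam = 1.1e-52                # cosmological constant, m^-2 (assumed value)

R_dS = math.sqrt(3 / Lam)    # de Sitter size from Lambda = 3 / R^2
R_H = c / H0                 # Hubble radius

print(f"R_dS ~ {R_dS:.2e} m")
print(f"R_H  ~ {R_H:.2e} m")
print(f"ratio R_dS / R_H ~ {R_dS / R_H:.2f}")
```

The two scales come out within a factor of order one of each other, which is the content of the "remarkable agreement" between (1), (3), and (5).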
The numerical agreement between (1), (3), and (5), however, is not sufficient to establish the validity of a model or the idea behind it, as outlined in the example above. In order to do so, one has to build a model that can provide a description of the vast body of observational data to a level compatible with what the $\Lambda$CDM model can provide. Recently, a model based on the idea of scale-invariance of the vacuum (SIV) and Weyl's conformal gravity idea has been in development and has been showing promise to resolve the dark-matter problem dynamically, possibly without the need for dark-matter particles [26,27]. Connecting the SIV theory to unimodular gravity may be a fruitful approach to resolving the various dark puzzles in cosmology and gravity. The argument for such a connection is further fortified by the fact that unimodular gravity can be viewed as a particular gauge fixing of the conformal factor as a constraint [28]. In this respect, the specific SIV realization of the Weyl Integrable Geometry with a suitable conformal factor is equivalent to the Einstein GR equation with traceless energy–momentum for matter and a Cosmological Constant term originating from a non-flat background characterized by a non-zero Ricci scalar, which absorbs the overall trace contribution of the matter energy–momentum tensor. Such non-flat Lorentzian backgrounds are expected to be viable stable vacuum configurations for quantum gravity [28]. The studies by Mottola and his collaborators seem to be intimately related to the understanding of the trace of the stress–energy tensor (4), and thus to the value of $\Lambda$ as induced by the "energy" of the gravitational field. The study of the trace-anomaly-induced dynamics of the conformal factor of four-dimensional (4D) quantum gravity [14] seems to be closely related to the scale-invariance of the vacuum (SIV) and the Weyl conformal gravity idea explored by the authors of the current paper in a different context [29]. Unfortunately, despite the simple relation between the field contents of these studies, we could not finalize the correspondence at the level of the equation content within the short time frame we have been studying Mottola's work for the current paper. Establishing the connection is an important step and has to be done carefully and accurately, which usually takes significant time, effort, and other relevant resources.
Models that successfully compete with the $\Lambda$CDM model effectively satisfy the Weak Anthropic Principle. That is, the parameters of the model do allow for a good description of the observed physical reality, even though part of the parameter space may not be consistent with the existence and evolution of Life as we know it. The current understanding of the inflationary Universe model suggests that there could be the possibility of a Multiverse, where various universes are perpetually created as the inflation field expands and decays locally. As a result, the laws of physics may differ in different universes, and we just happen to live in a Universe that has the observed structure. When anthropic reasoning is properly utilized, it can help us understand the correlations between various observational facts and the corresponding numerical representations of the data [5,30].
An alternative multiverse scenario is the Blackhole Cosmology idea, where each Universe is a Blackhole that is either a daughter Universe or a parent Universe. This way, the Blackhole universes are disconnected universes, but the laws of physics are close in the generationally related universes. The above multiverse options are interesting due to their implications about the overall history and fate of the Universe and are therefore more consistent with viewing the cosmological constant as a constant of integration related to the initial conditions of the system or a byproduct of a specific cancellation mechanism.
An alternative, equally radical viewpoint is based on the idea of self-similarity and scale relativity as related to fractal geometry [17], where the value of the cosmological constant is related to the scale of the classical electron radius, and is thus a byproduct of a specific phase transition mechanism. Regardless of the mechanism and the origin of the specific value of the cosmological constant, there is always a scale that should characterize a quasi-static cosmological constant value.
Quantum Cosmology (QC) is an approach aiming at understanding the Universe at large scales by using the ideas behind general relativity and quantum mechanics. It has been used previously to justify that the cosmological constant is probably zero. Some initial arguments were based on supersymmetry [4] and quantum gravity [6], while reparametrization invariance considerations were used to demonstrate that the cosmological constant can be viewed as an integral of the motion [5]. The QC approach was viewed as the most promising road back then, but the experimental observations have raised a red flag over this method. Some QC studies argued that the zero Cosmological Constant result is not so much a statement about what should be observed as the most likely value of $\Lambda$ but more of a statement about the asymptotic "far future" value of the model parameter corresponding to a model based on Baby Universes [31]. Another argument in support of the zero Cosmological Constant, as related to the asymptotic infrared stable fixed point for 4D gravity, was given very rigorously, and much earlier than [31], by Mottola and his collaborators [14,15].
Subsequent adjustments to the Multiverse approach seem to be aligned with observational support for inflation. In the next few sections, an approach based on the isolation of a characteristic scale as an integration parameter is considered for the purpose of estimating the expected average size of “typical” universe as part of a Multiverse ensemble.
3. Cosmological Constant within Quantum Cosmology
The Einstein equations with cosmological constant $\Lambda$:
$$R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \qquad (6)$$
can be derived from the Einstein–Hilbert action supplemented with the matter action $S_m$:
$$S = \frac{c^4}{16\pi G}\int \left(R - 2\Lambda\right)\sqrt{-g}\; d^4x \;+\; S_m[g_{\mu\nu}; \psi, \partial\psi]. \qquad (7)$$
The first term above is the standard Einstein–Hilbert action for metric gravity $g_{\mu\nu}$ with cosmological constant $\Lambda$, while the second term is the matter action $S_m$ that depends on the metric $g_{\mu\nu}$ but not on its derivatives; thus, it determines the stress–energy tensor $T_{\mu\nu}$ but does not contribute to the left-hand side of (6). In $S_m$, the matter fields are denoted by $\psi$ and their derivatives by $\partial\psi$. The symbol $\psi$ is a placeholder for the variety of possible matter fields, like those in the standard model of elementary particles, such as leptons, quarks, and interaction bosons. The equations obeyed by the matter fields within a given background spacetime with a metric $g_{\mu\nu}$ are obtained upon variation of the relevant fields when the appropriate functional form of $S_m$ is considered.
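For completeness, the variation of the action with respect to the inverse metric proceeds via the standard identities (sketched here; the sign in the definition of $T_{\mu\nu}$ depends on the signature conventions):

```latex
\delta\sqrt{-g} = -\tfrac12\sqrt{-g}\;g_{\mu\nu}\,\delta g^{\mu\nu},
\qquad
\delta R = R_{\mu\nu}\,\delta g^{\mu\nu} + \text{(total derivative)},
```
so that setting $\delta S/\delta g^{\mu\nu} = 0$ reproduces the field equations:
```latex
R_{\mu\nu} - \tfrac12 g_{\mu\nu}R + \Lambda g_{\mu\nu}
= \frac{8\pi G}{c^4}\,T_{\mu\nu},
\qquad
T_{\mu\nu} \equiv \frac{-2}{\sqrt{-g}}\,\frac{\delta S_m}{\delta g^{\mu\nu}}.
```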
The expectation value of an observable $\mathcal{O}$ is then defined via a partition function that is a formal Feynman path integral with a functional integration measure $\mathcal{D}[g,\psi]$ over the space of various possible and relevant metrics and matter fields:
$$\langle \mathcal{O} \rangle = \frac{1}{Z}\int \mathcal{D}[g,\psi]\; \mathcal{O}\, e^{iS[g,\psi]/\hbar}, \qquad Z = \int \mathcal{D}[g,\psi]\; e^{iS[g,\psi]/\hbar}. \qquad (8)$$
When the matter fields $\psi$ are integrated out, one expects $\langle T_{\mu\nu}\rangle$ to be proportional to the metric tensor $g_{\mu\nu}$, which leads to an effective $\Lambda_{\mathrm{eff}}$ for an empty Universe [5,32]:
$$\langle T_{\mu\nu}\rangle = \langle\rho\rangle\, g_{\mu\nu}, \qquad \Lambda_{\mathrm{eff}} = \Lambda + \kappa\,\langle\rho\rangle. \qquad (9)$$
Here, $\langle\rho\rangle$ denotes the constant of proportionality between $\langle T_{\mu\nu}\rangle$ and the metric tensor $g_{\mu\nu}$. Therefore, it is related to the effective energy density when the quantum fields $\psi$ are integrated out. If one considers the trace of the effective energy–momentum tensor, then one has $\langle T^{\mu}{}_{\mu}\rangle = 4\langle\rho\rangle$. It seems reasonable to assume that the trace operation commutes with the process of integrating out the fields $\psi$, which leads to the relationship $\langle T\rangle = 4\langle\rho\rangle$, where $T = T^{\mu}{}_{\mu}$ is the trace of the energy–momentum tensor. If one chooses a traceless formulation of the Einstein GR equations, then $\Lambda_{\mathrm{eff}}$ appears as a constant of integration related to the Ricci scalar curvature $R$ and the trace $T$ of the energy–momentum tensor [5].
In units where the speed of light is $c = 1$ and the metric tensor has time-like signature $(+,-,-,-)$, the energy–momentum tensor for an ideal fluid is given by the expression $T_{\mu\nu} = (\rho + p)\,u_\mu u_\nu - p\, g_{\mu\nu}$. Here, $\rho$ is the energy density, $p$ is the pressure, and $u_\mu$ are the components of the four-velocity of the fluid, with $u_\mu u^\mu = 1$ and $u_\mu \approx (1, 0, 0, 0)$ in the Newtonian limit of slow motion. Then, the trace of the energy–momentum tensor becomes $T = \rho - 3p$, which turns into $T = (1 - 3w)\rho$ if an equation of state $p = w\rho$ is considered.
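The trace identity $T = \rho - 3p$ can be verified mechanically. A small self-contained check in plain Python, for a fluid at rest in signature $(+,-,-,-)$ with $c = 1$ (an illustrative sketch, not code from the paper):

```python
def fluid_trace(rho, p):
    """Trace g^{mu nu} T_{mu nu} of T_{mu nu} = (rho + p) u_mu u_nu - p g_mu_nu
    for a fluid at rest (u = (1,0,0,0)) in metric signature (+,-,-,-), c = 1."""
    g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
    g_inv = g  # diag(+1,-1,-1,-1) is its own inverse
    u = [1, 0, 0, 0]
    T = [[(rho + p) * u[m] * u[n] - p * g[m][n] for n in range(4)]
         for m in range(4)]
    # contract with the (diagonal) inverse metric
    return sum(g_inv[m][m] * T[m][m] for m in range(4))

# radiation (p = rho/3) is traceless; dark energy (p = -rho) gives T = 4 rho
print(fluid_trace(3.0, 1.0))    # -> 0.0
print(fluid_trace(1.0, -1.0))   # -> 4.0
```

Both outputs agree with $T = \rho - 3p$, confirming the tracelessness of radiation and the $T = 4\rho$ value for the vacuum equation of state.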
Based on the conservation laws in the scale-invariant theory, which also reduce to the usual conservation laws [26], the overall energy–density scaling $\rho \propto r^{-3(1+w)}$ indicates five special values of $w$. The five values can be grouped as follows, three of which are particularly important: radiation ($w = 1/3$), pressureless matter ($w = 0$), and dark energy ($w = -1$). The first two, radiation and matter, are familiar non-negative pressure fluids. Negative $w$ results in negative-pressure systems that are not common laboratory fluids. The other two special cases, $w = -1/3$ (string defects) and $w = -2/3$ (domain wall defects) [33], correspond to $\rho \propto r^{-2}$ and $\rho \propto r^{-1}$ scaling, while the dark energy ($w = -1$) corresponds to a constant energy density independent of $r$; thus, it does not scale with the size $r$ of the system. This is also the equation of state of the locally Lorentz invariant vacuum.
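The five special cases can be tabulated directly, assuming the standard scaling $\rho \propto r^{-3(1+w)}$ (a small illustrative sketch; the labels follow the classification above):

```python
# Equation-of-state parameter w for the five special fluids; rho ~ r^(-3(1+w))
cases = {
    "radiation":    1 / 3,   # rho ~ r^-4
    "matter":       0.0,     # rho ~ r^-3
    "strings":     -1 / 3,   # rho ~ r^-2
    "domain walls": -2 / 3,  # rho ~ r^-1
    "dark energy": -1.0,     # rho ~ r^0 (constant)
}
exponents = {name: 3 * (1 + w) for name, w in cases.items()}
for name, n in exponents.items():
    print(f"{name:12s}  w = {cases[name]:+.2f}   rho ~ r^(-{n:.2f})")
```

The dark-energy exponent comes out zero, reflecting the constant energy density of the locally Lorentz invariant vacuum.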
It is interesting to notice that the dark-energy equation of state is implied by the assumption $\langle T_{\mu\nu}\rangle \propto g_{\mu\nu}$, which can be satisfied neither by radiation nor by pressureless dust alone. The trace of the radiation energy–momentum tensor is zero due to $w = 1/3$ for radiation; therefore, radiation cannot contribute to the value of $\langle T \rangle$. If the energy–momentum tensor is traceless, then $T = 0$, which would have implied relativistic matter only, and $\Lambda_{\mathrm{eff}}$ would be related to the Ricci scalar $R$. Effectively, this is the case of the very, very early Universe, where matter and radiation were in the state of a very hot quark–gluon plasma. However, the current Universe is far from such a state.
If one assumes that locally $0 \leq w \leq 1/3$, then integrating out the matter fields $\psi$ will result in $4\rho_{\mathrm{eff}} = \langle T\rangle = (1-3w)\langle\rho\rangle$, which implies $\rho_{\mathrm{eff}} \leq \langle\rho\rangle/4$. Thus, the effective dark energy density $\rho_{\mathrm{eff}}$, of radiation and matter, cannot be more than a quarter of the overall total energy density $\langle\rho\rangle$ when $w \geq 0$. Notice that this provides an interpretation of the dark energy as an effect arising from the integration of the matter fields and a possible cancellation mechanism for bringing the effective energy density $\rho_{\mathrm{eff}}$ arbitrarily close to zero, depending on the overall distribution of matter and the relevant local values of $w$. Unfortunately, the inequality $\rho_{\mathrm{eff}} \leq \langle\rho\rangle/4$ implies that the energy density of the dark-energy field is less than the radiation and matter energy density when $w \geq 0$, which contradicts the current state of the observed Universe. Of course, for the dark-energy field with $w = -1$, one has $T = 4\rho$ and thus $\rho_{\mathrm{eff}} = \rho$, while radiation and matter can be added on top of it as perturbations. However, if one insists on ordinary fluids but without dark energy ($w \neq -1$), then $w < 0$ is needed to have $\rho_{\mathrm{eff}} > \langle\rho\rangle/4$, but it is still not enough to agree with observations. For example, to have $\rho_{\mathrm{eff}}$ be about two to three times as much as the matter density with three species of negative-$w$ fluids, each with an equal amount of energy density, one still falls short, since $\rho_{\mathrm{eff}} = \sum_i (1-3w_i)\rho_i/4$. Note that dark matter has already been included by considering $w = 0$ matter. Given the lack of laboratory observations of dark species obeying $w < 0$, a non-zero cosmological constant $\Lambda$ must be present to explain the cosmological observations. Therefore, there must be a non-zero Ricci curvature $R$ associated with it. Such a consideration is consistent with viewing the effective cosmological constant as a constant of integration, $\Lambda_{\mathrm{eff}} = \frac{1}{4}\left(R + \kappa T\right)$ [5].
Therefore, since empty space (the vacuum) dominates the Universe, one can integrate out the matter fields and use an effective that has an implicit contribution from the matter (9). For flat space-time ( ), this would imply that the value of the effective cosmological constant should be due to the trace part of the stress energy–momentum tensor and should somehow be deducible from the matter distribution. Then, the smallness of would imply that it is a perturbative effect rather than a leading-order effect, as implied by (2). In fact, models based on estimates of the quantum fluctuations using quantum potentials and the quantum hydrodynamic formalism point to such behavior when one looks at the quantum-corrected Raychaudhuri equation [11], as well as the Classical Klein–Gordon Field [12] and the Spinor-Tensor Gravity of the Classical Dirac Field [13]. If one adopts the above viewpoint as the main source of the non-zero vacuum energy density, then one can accept the zero cosmological constant argument based on ideas from the 1980s and 1990s, and then add a perturbative effect of the sparse matter distribution on top of it.
However, from the point of view of the usual QC, the origin of the non-zero value of the effective cosmological constant is irrelevant, since flat spacetime ( ) as well as non-flat spacetime ( ) are viable stable Lorentzian background vacua for path-integral-based Quantum Gravity models [28]. Thus, in (6) and (7), one can set and , which effectively is the vacuum Einstein field equation (EFE) with the corresponding action:
Most modern vacuum cosmology models are derived from (10) for a specifically chosen metric tensor g. For a homogeneous and isotropic Universe, the Ricci scalar curvature is expected to be constant or at most time-dependent. If one considers vacuum solutions to (6), then one arrives at (10), which implies that . Quantum cosmology considers all reasonable possibilities for the metric tensor g within the appropriate path integrals [10,14,15,24].
It is usually assumed that (10) provides a satisfactory description of physical reality at cosmological scales. Therefore, the main contribution to the action (11) comes from the “classical trajectories” satisfying (10). In this respect, one can approximate the action (11) by considering an integration of (10) and its implication for (11):
One can justify the partition function Z in the familiar form ( ) by switching to imaginary time, a Euclidean metric , and assuming the separation of the four-volume integral into a time integral and a three-volume integral by using the following relationships:
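The relationships in question are not reproduced in this excerpt; in their standard form (a reconstruction of the usual Wick rotation, not necessarily the paper's exact expressions), they read:

```latex
t = -i\tau, \qquad
\sqrt{-g}\; d^4x \;\to\; \sqrt{g_E}\; d\tau\, d^3x, \qquad
e^{iS[g]} \;\to\; e^{-S_E[g_E]},
```

so that $Z$ takes the familiar Boltzmann form $Z = \int \mathcal{D}g_E\, e^{-S_E[g_E]}$, with the four-volume integral separating into an imaginary-time integral and a three-volume integral.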
In view of the above expressions, one expects that the partition function for a set of metrics g satisfying (10) is given in terms of the corresponding Euclidean metric [5]:
When evaluated using the Hartle–Hawking no-boundary condition in complexified spacetime, one gets 4-spheres of radius and obtains a probability density distribution [5,24,34]:
Based on this expression for the probability density distribution, Hawking concluded that the cosmological constant is probably zero [5,34]. Various approaches for improving the probability density distribution have been suggested, since the measured value of has turned out to be very small but non-zero. One of the most recent approaches has focused on the 4-volume cutoff measure of the multiverse [35]. The discussion in [34] also considers an antisymmetric tensor , which is one way to arrive at unimodular gravity [5,10]. The above result (16) relies on the observation that , where the characteristic size is actually the 4-sphere radius . The overall geometric dimensionless factor of proportionality is denoted by . For example, this factor is 1 for a standard Euclidean n-cube, and for the volume of a 3D sphere. Calculations of the tunneling probability from “nothing” into something with a 4D size , considering wormholes and using the Euclidean action and imaginary time, have resulted in the expression , while the use of real time and the WKB approximation results in instead [6,31]. It has been argued that the more appropriate probability expression is [36]. The sign difference can be traced to whether there is a minimum or a maximum size of the 4D manifold, as well as to how the overall path-integral measure is handled [28]. In the forthcoming discussion, we will utilize (15) and will not impose such a relationship ( ) between the characteristic size and . Our goal is to estimate the typical size of a universe within the multiverse approach. The calculations discussed in the next section show that the typical size of a universe is of the order of the Planck scale and, therefore, the Quantum Field Theory estimate of the vacuum energy density (2) is a reasonable expectation.
4. Multiverse Partition Function and the Typical Size of a General -Universe
Consider an ensemble of universes (Multiverse) where each universe is a solution to the Einstein field Equation (10) and is labeled by . Here, is an auxiliary tag, a characteristic size, attached to a given EFE solution . The characteristic size can be determined by measuring the biggest distance between two spatial points, or the age of the Universe , or its 4D volume , at a particularly important moment of its evolution, and so on. The definition of is deliberately loose and fuzzy in order to fulfill its purpose as an enumerator of the various solutions of the Einstein field equations. In order to be able to set up the integration variables in the partition function in a manageable form, one can introduce an equivalent set of fields and variables: where the second power of a is a matter of convenience; a is simply related to , since the distance between two points along an s-parametrized curve is:
Thus, . The choice is particularly important and will be discussed later.
Furthermore, notice that, if solves the Einstein field Equation (10), then would also be a solution to the Einstein field Equation (10), where and are simply related as . This follows from the fact that the affine connection is invariant under a rescaling of the metric tensor, and so are the Riemann curvature tensor and the Ricci curvature tensor , but the scalar curvature rescales as , which is effectively . One simple way to see that the affine connection is invariant under a rescaling of the metric tensor is to notice that is built by using partial derivatives of the metric tensor ; thus, , where is the inverse of and therefore scales with ; that is, . Another way to justify is to recall that is scale invariant [26], which can be guaranteed only if when , given Equations (8) and (4).
Given an ensemble of solutions , where has been determined by using a particular measurement procedure/definition, one can split the ensemble into same-scale -subset, for example members, and a-derivable members, such that . Let’s call the subset—the Planck-scale seed universes subset.
The goal now is to write a partition function Z in terms of the variables by using guidance from (15). By following the mapping outlined in (14), one has:
Since each a-derivable member is built from a particular solution , its contribution to the functional integrals is easily taken into account through the variable a. Therefore, it seems reasonable to define Z and in Planck units ( ) in the following way:
Here, is an overall geometric factor that encodes most of the geometric information related to . Usually, is of order one. For example, for Euclidean geometry , and for geometry . However, for a spherical black hole with period in the imaginary time and an event horizon at , one has . In general, one can consider classes of metrics for which has the same value. Then, the “path integral” over such a class would result in the “volume” of this class of metrics, which can be used as a new integration variable. Thus, the Jacobian for this change of variables would be the metric volume . Since, as already mentioned, is usually of order one, one can neglect, for the present study, its dependence on the metric. However, in the future, one may consider as an independent degree of freedom over which one would have to integrate.
Since the Planck-scale subset consists of Planck-scale universes, it is plausible that at such a sub-microscopic scale the quantum field theory estimate of the vacuum energy density is appropriate; that is, is of order 1. Thus, if for each solution there is a Planck-scale seed universe such that a is chosen so that , is in the range , which compensates for fixing , and ; then evaluating (21) and (22) results in an average characteristic size of this a-derivable subset that is of order one as well ( when ). Note that, according to (21) and (22), one has , while .
One can easily see this upon changing the order of the integration and by performing the following substitution, which makes the integration over very easy numerically, by considering the Planck-seed universe class that results in a universe of characteristic size . Thus, the argument of the exponential function can be rewritten from into . We will be performing the integration over while keeping and assuming and in the range , which is consistent with the Planck-seed universe-class idea, while .
where we have used , assuming we can move the actual value of into the value of without changing the range significantly but only rearranging the elements there.
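Since the explicit forms of (21) and (22) are not reproduced in this excerpt, the following toy numerical sketch only illustrates the qualitative claim: with a Euclidean weight whose exponent scales as (vacuum energy density) × (4-volume) ~ λ·a⁴ in Planck units, marginalizing over an order-one λ yields an average characteristic size ⟨a⟩ of order one. The weight exp(−λa⁴) and the integration ranges are assumptions made for illustration only.

```python
import math

def average_size(lam_lo=0.5, lam_hi=1.5, a_max=10.0, n=400):
    """Midpoint-rule estimate of <a> = Int a*w / Int w with w = exp(-lam * a^4),
    lam marginalized over [lam_lo, lam_hi] (all in Planck units)."""
    num = den = 0.0
    da = a_max / n
    dlam = (lam_hi - lam_lo) / n
    for i in range(n):
        a = (i + 0.5) * da
        for j in range(n):
            lam = lam_lo + (j + 0.5) * dlam
            w = math.exp(-lam * a ** 4)  # toy Euclidean weight
            num += a * w
            den += w
    return num / den

avg = average_size()
print(avg)  # ~ 0.5: of order one, as claimed for the Planck-scale seed ensemble
```

The uniform midpoint weights cancel in the ratio, so no explicit Jacobian factors are needed for this toy estimate.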
In this respect, any particular solution of the EFE could be viewed either as an a-derivable element connected to a particular Planck-scale seed solution, or as a Planck-scale seed solution itself. This means that the multiverse ensemble can be enumerated by Planck-scale seed universes and their a-derivable elements.
The scale of the observed Universe is clearly far larger than the Planck scale; thus, our Universe is an a-derivable universe based on some Planck-scale seed universe, so one can expect its characteristic size to be related to its present size, which in Planck units is ; this gives and therefore or in Planck units. These values are clearly much closer to the measured value .
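As a back-of-the-envelope check (the rounded input values below are assumptions, since the text's numerical symbols are not reproduced in this excerpt), taking the characteristic size a to be the Hubble radius expressed in Planck lengths and applying the Λ ~ a⁻² scaling lands within an order of magnitude of the observed Λ ≈ 3 × 10⁻¹²² in Planck units:

```python
hubble_radius_m = 1.4e26      # ~ c / H_0, rounded (assumed value)
planck_length_m = 1.616e-35   # Planck length

a = hubble_radius_m / planck_length_m  # characteristic size in Planck units
lam_estimate = 1.0 / a ** 2            # Lambda ~ a^(-2) scaling, Planck units

print(a)             # ~ 8.7e60
print(lam_estimate)  # ~ 1.3e-122, vs. observed ~ 3e-122
```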
5. Discussions and Conclusions
One may expect further improvements in by adding quantum effects via higher-order terms in the Einstein–Hilbert action that are beyond the approximation (13), which considers only classical fields satisfying (10), and by making more rigorous calculations using (15) instead of (21), or at least by understanding better the mapping (14), since one is not guaranteed that is a constant in general. This, however, would require the use of the full theory of quantum gravity, whose structure and mathematical apparatus are not yet fully understood. Nevertheless, some important results are already available pertaining to a dynamical solution of the cosmological constant problem [14], along with the possibility of numerical confirmations using causal dynamical triangulations [15,16].
The reasoning above shows that the quantum field theory estimate of the vacuum energy density is probably valid only for sub-microscopic (Planck-size) universes; for macroscopic systems, however, one needs to use the a-scaling mapping to arrive at the observed value of . An open question is the strict definition of the characteristic scale and its meaning. There seem to be many options: one can think of the characteristic scale as the size to which a given universe has inflated during the inflation era, or the size at which general relativity has become a good enough approximation that one does not need quantum gravity, or the size at which is dominated by the zero-point energy of the gravitational field and all the matter contributions are negligible, or the size of the future cosmological horizon, and so on. This issue, however, could be short-circuited if one promotes the characteristic scale a into a conformal gauge factor as part of the Integrable Weyl Geometry framework. For example, using the FLRW metric:
For a non-zero Gaussian curvature k, one can consider a small enough patch of space to be Gaussian flat, that is, . Then, by switching to conformal time using , one can identify a conformal factor such that the new metric is conformally equivalent to the Minkowski metric and is therefore unimodular. In the case of , such an interplay has been described explicitly in the discussion of the application to the empty-space solution [26].
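For the spatially flat case, the step described above takes the standard form (a sketch using the usual FLRW notation, which may differ from the paper's symbols):

```latex
ds^2 = -dt^2 + a^2(t)\,\delta_{ij}\, dx^i dx^j, \qquad dt = a\, d\eta
\quad\Longrightarrow\quad
ds^2 = a^2(\eta)\big( -d\eta^2 + \delta_{ij}\, dx^i dx^j \big),
```

which is conformally equivalent to the Minkowski metric with conformal factor $a^2(\eta)$.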
Integrating out the matter fields is an essential step in the above calculations. When applied in conjunction with the above reasoning, it leads to a value of that is very close to the measured value, perhaps due to the fact that the value of calculated in this paper is mainly due to the vacuum energy density of the classical gravitational field in an a-derivable universe. It seems that the above reasoning and results are consistent with the view that the effective cosmological constant (9) is an integral of the motion with a significant contribution from the Ricci scalar. It may even be more beneficial, and less puzzling, to consider the smallness of as being due to the large value of the entropy of the observed Universe. Furthermore, if one uses the traceless version of Einstein's equations, then the zero-point energy of the matter fields does not contribute to the vacuum energy density of the classical gravitational field. A unimodular gravity approach to Einstein's equations is well aligned with this view [10] and, if combined with the view that the cosmological constant is about the entropy of the system and not about its vacuum energy density, as suggested by Black Hole Cosmology (5), then one has a plausible resolution of the three main puzzles about the cosmological constant.
In summary, the key points of the paper are as follows. We introduce the Multiverse Partition Function for an ensemble of universes enumerated by their characteristic size a, which is also used as an integration variable in the partition function. The averaged characteristic scale of the ensemble is estimated by using only members that satisfy the Einstein field equations. The averaged characteristic scale is compatible with the Planck length when considering an ensemble built on Planck-scale seed universes with vacuum energy density of order one; that is, . For a-derivable universes with a characteristic scale of the order of the observed Universe , the cosmological constant is in the range , which is close in magnitude to the observed value .
The result that the overall averaged characteristic size of an ensemble of universes satisfying the Einstein Field Equations (10) and (11) is of the order of the Planck scale is not obvious in general. Explaining the observed value of as due to the possibility that we live in a very large a-derivable universe might suggest that this is a low-probability event. Thus, it may seem like an argument in support of the anthropic principle. However, if physical reality is such that the idea of a characteristic scale is actually an illusion of our practical models, then the overall argument would not be applicable. For example, a scale-invariant paradigm based on Weyl Integrable Geometry, like the one discussed by the authors as an alternative to , would make all homogeneous and isotropic a-derivable universes, based on a specific homogeneous and isotropic EFE solution , part of the same Universe via their time-dependent conformal factor [26]. Such an approach clearly modifies significantly the main framework (10) and (11), their equivalent expressions, and of course the final expressions (21) and (22) that have led to the result that the overall averaged characteristic size of an ensemble of universes is of the order of the Planck scale. Furthermore, utilizing the possibility of a conformal gauge factor could lead to a unimodular gravity gauge choice, where one considers the scale factor a in as an overall scaling field such that . Exploring the possible connection between the unimodular gravity gauge and Weyl Integrable Geometry by using the viewpoint on conformal perturbations described by Mottola and his collaborators [14,28] is an interesting research direction that could shed new light on the Cosmological Constant problem, as well as on the Dark Matter problem via the SIV paradigm, and probably lead to a new understanding of the SIV conformal factor within the framework of quantum gravity.