Article

On the True Significance of the Hubble Tension: A Bayesian Error Decomposition Accounting for Information Loss

by Nathalia M. N. da Rocha 1,*, Andre L. B. Ribeiro 1,* and Francisco B. S. Oliveira 2,*
1 Departamento de Ciências Exatas, Universidade Estadual de Santa Cruz, Rodovia Jorge Amado km 16, Ilhéus 454650-000, Bahia, Brazil
2 Departamento de Engenharias e Computação, Universidade Estadual de Santa Cruz, Rodovia Jorge Amado km 16, Ilhéus 454650-000, Bahia, Brazil
* Authors to whom correspondence should be addressed.
Universe 2025, 11(9), 303; https://doi.org/10.3390/universe11090303
Submission received: 11 July 2025 / Revised: 12 August 2025 / Accepted: 2 September 2025 / Published: 6 September 2025

Abstract

The Hubble tension, a persistent discrepancy between early- and late-Universe measurements of H_0, poses a significant challenge to the standard cosmological model. In this work, we present a new Bayesian hierarchical framework designed to decompose the observed tension into its constituent parts: standard measurement errors, information loss arising from parameter-space projection, and genuine physical tension. Our approach, employing Fisher matrix analysis with MCMC-estimated loss coefficients and explicitly modeling information loss via variance inflation factors (λ), is particularly important in high-precision analyses, where even seemingly small information losses can affect conclusions. We find that the real tension component (T_{real}) has a mean value of 5.94 km/s/Mpc (95% CI: [3.32, 8.64] km/s/Mpc). Quantitatively, approximately 78% of the observed tension variance is attributed to real tension, 13% to measurement error, and 9% to information loss. Accordingly, our decomposition indicates that the observed ∼6.39σ discrepancy is predominantly a real physical phenomenon, with real tension contributing ∼5.64σ. Our findings strongly suggest that the Hubble tension is robust and likely points toward new physics beyond the ΛCDM model.

1. Introduction

The Hubble constant (H_0) is one of the most fundamental parameters in cosmology, determining the expansion rate of the Universe today and serving as a key calibrator for cosmic distance measurements, e.g., [1]. It is a local quantity measured at redshift z = 0, establishing the size and age scale of the Universe and linking redshift to time and distance. During the past decade, a significant and intriguing discrepancy has emerged, commonly referred to as the “Hubble tension”. This tension lies between the predictions for H_0 derived from early-Universe probes, based on the standard Lambda Cold Dark Matter (ΛCDM) cosmological model, and those obtained directly from local late-Universe observations [2,3,4]. The discrepancy has persisted despite continuous improvements in observational techniques and has been detected at a statistical significance high enough to challenge the standard ΛCDM model [5,6,7].
In this work, we focus specifically on the persistent discrepancy between measurements of H_0 obtained from early-Universe probes, exemplified by the Cosmic Microwave Background (CMB) data from the Planck collaboration, and those derived from local late-Universe observations, primarily Type Ia Supernova (SNIa) distance ladder measurements from the SH0ES collaboration. These two leading methods provide the primary observational input for our analysis of the Hubble tension, allowing us to investigate its underlying components.
Currently, the most precise predictions for H_0 from the Cosmic Microwave Background (CMB) by the Planck collaboration, in combination with the South Pole and Atacama Cosmology Telescopes, yield H_0 = 67.24 ± 0.35 km/s/Mpc [8], while distance ladder measurements based on Type Ia supernovae (SNIa) calibrated with Cepheid variables from the SH0ES collaboration obtain H_0 = 73.17 ± 0.86 km/s/Mpc [5]. This difference of ∼5.94 km/s/Mpc corresponds to a statistical significance of approximately 6.4σ when uncertainties are taken at face value [9]. The robustness of this tension is further highlighted by various other independent probes that provide measurements across this spectrum: Baryon Acoustic Oscillations (BAO) combined with Big Bang Nucleosynthesis yield H_0 = 67.9 ± 1.1 km/s/Mpc [10], largely consistent with CMB values. Megamaser-based measurements give H_0 = 73.9 ± 3.0 km/s/Mpc [11], aligned with the distance ladder. Meanwhile, cosmic chronometers report intermediate values around H_0 = 69.8 ± 1.9 km/s/Mpc [12].
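Taking the two headline values at face value, the naive significance of the difference follows from adding the reported uncertainties in quadrature. A minimal sketch using the numbers quoted above:

```python
import math

# Reported H0 values (km/s/Mpc) quoted above
h0_cmb, sigma_cmb = 67.24, 0.35   # Planck + SPT/ACT
h0_sn, sigma_sn = 73.17, 0.86     # SH0ES distance ladder

# Difference and its face-value uncertainty (quadrature sum)
diff = h0_sn - h0_cmb
sigma_diff = math.sqrt(sigma_cmb**2 + sigma_sn**2)

# Naive significance of the tension
significance = diff / sigma_diff
print(f"{diff:.2f} km/s/Mpc -> {significance:.1f} sigma")
```

This face-value calculation reproduces the ∼6.4σ figure; the decomposition developed below asks how much of that number survives once information loss is modeled.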
The persistent nature of this tension has led to numerous investigations of potential systematic errors in early- or late-Universe measurements [13,14]. Recent analyses using alternative distance calibrators such as the Tip of the Red Giant Branch (TRGB) have yielded intermediate values of H_0 = 69.8 ± 0.8 km/s/Mpc [15], with more recent works like Jensen et al. (2025) [16] providing additional constraints from TRGB + SBF methods. These results suggest possible systematics in the Cepheid calibration. In contrast, studies exploring modifications to the CMB analysis find that standard extensions to ΛCDM do not readily resolve the tension [17]. The launch of the James Webb Space Telescope (JWST) has opened a new chapter in the measurement of extragalactic distances and H_0, offering new capabilities to test the strongest observational evidence underlying the tension; early JWST results indicate that photometric errors in Cepheid measurements along the distance ladder do not contribute significantly to the tension [18].
Beyond purely systematic considerations, the tension may be affected by what we term “information loss errors”: subtle biases arising from the simplified models used to interpret complex cosmological data. When observations are compressed into a single H_0 value, assumptions about other cosmological parameters can introduce model-dependent biases that are not fully captured in the reported uncertainties [19,20]. Such effects may artificially inflate the apparent tension between different probes. Furthermore, various theoretical models have been proposed as potential solutions, including early dark energy [21], modified gravity [22], and new physics in the neutrino sector [23]. However, many of these models struggle to simultaneously accommodate all cosmological observations without introducing new tensions in other parameters [24]. Thus, a critical question emerges: how much of the observed Hubble tension represents a genuine physical discrepancy requiring new physics, and how much might be attributed to systematic errors or information loss in the analysis pipeline? Addressing this question requires a framework that can decompose the observed tension into its constituent components while accounting for the possibility that quoted uncertainties are underestimated. The broader landscape of cosmological tensions and proposed solutions is comprehensively reviewed in the recent literature, including the CosmoVerse white paper (2025) [25], which provides valuable context for understanding the challenges posed to the standard cosmological model.
In this paper, we develop a Bayesian hierarchical model that explicitly parameterizes three contributions to the observed tension: (1) standard measurement errors, (2) information loss errors arising from model simplifications, and (3) real physical tension that might indicate new physics beyond ΛCDM. By simultaneously analyzing multiple cosmological probes (CMB, SNIa, BAO, and H(z) measurements), we can assess the robustness of the Hubble tension and quantify the probability that it represents a genuine challenge to the standard cosmological model. Our approach builds on previous Bayesian treatments of cosmological tensions [26,27,28] but extends them by explicitly modeling information loss and allowing probe-specific inflation factors that account for potential underestimation of uncertainties. This methodology enables us to assess the statistical significance of the Hubble tension in a framework that accommodates realistic assessments of both statistical and systematic uncertainties. The framework does not propose new physics solutions; instead, it serves as a diagnostic tool to quantify the nature of the discrepancy, particularly the portion that cannot be explained by known statistical and information-theoretic effects.

2. Theoretical Justification for Error Component Separation

In our decomposition model, we express the observed tension between two cosmological probes as a sum of three distinct components:
T_{obs}^{AB} = T_{real} + E_m^{AB} + E_i^{AB}
where T_{real} represents the physical tension, E_m^{AB} denotes the statistical measurement error (with E[ϵ_{AB}] = 0 and Var(ϵ_{AB}) = (E_m^{AB})^2), and E_i^{AB} is the information loss error. In the following, we provide a rigorous mathematical justification for treating E_i and E_m as separable components.

2.1. Hierarchical Measurement Process Framework

We begin by considering the complete data generation process in a hierarchical Bayesian framework. For any cosmological probe i, the measurement process can be described as:
p(D_i | θ) = N(D_i | f_i(θ), Σ_{meas,i})
where D_i represents the observational data, θ = {H_0, Ω_m, Ω_Λ, …} is the full set of cosmological parameters, f_i(θ) is the theoretical prediction function, and Σ_{meas,i} is the measurement covariance matrix that captures the statistical uncertainty inherent in the observation process.
The posterior distribution for the full parameter space is given by:
p(θ | D_i) ∝ p(D_i | θ) p(θ)

2.2. Information Loss Through Marginalization

When cosmological analyses report a constraint on the Hubble constant, they typically provide the marginalized posterior:
p(H_0 | D_i) = ∫ p(θ | D_i) dθ_{¬H_0}
where θ_{¬H_0} represents all parameters except H_0. This equation defines the marginalized posterior distribution of H_0. This is a standard procedure in Bayesian inference, where one integrates over “nuisance” parameters (θ_{¬H_0}) to obtain the distribution of a parameter of interest from a higher-dimensional joint posterior distribution p(θ | D_i). This marginalization process inevitably leads to information loss when the parameters are correlated in the full posterior. To clarify, in a Bayesian context, the variance of a parameter (its uncertainty) can be significantly reduced when strong correlations exist with other parameters. When we marginalize over these correlated parameters, we effectively integrate out the information provided by these dependencies. Statistically, the conditional variance of H_0 (its uncertainty when the other parameters θ_{¬H_0} are fixed) is always less than or equal to its marginal variance (its uncertainty when θ_{¬H_0} are integrated out): Var(H_0 | D_i, θ_{¬H_0}) ≤ Var(H_0 | D_i). The “information loss” quantifies this increase in uncertainty, which arises because we are no longer leveraging the predictive power of the correlations to constrain H_0. In complex cosmological models, where strong degeneracies between parameters like H_0 and Ω_m are common, this effect is particularly significant, leading to an inflation of the perceived uncertainty on H_0 that is not accounted for by standard measurement errors alone.
The variance of the marginalized H 0 distribution is necessarily greater than or equal to the conditional variance if other parameters were fixed:
Var(H_0 | D_i) ≥ Var(H_0 | D_i, θ_{¬H_0})
The equality holds only when H_0 is independent of all other parameters in the posterior. Indeed, the difference Var(H_0 | D_i) − Var(H_0 | D_i, θ_{¬H_0}) quantifies the increase in the variance of H_0 that arises purely from the process of marginalization. This increase is precisely what we define as the information loss: a statistical penalty, or loss of constraining power, due to not fully leveraging the correlations between H_0 and the other parameters in the full model space. We can therefore rigorously define the information loss component as follows:
E_{i,i}^2 = Var(H_0 | D_i) − Var(H_0 | D_i, θ_{¬H_0})
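For a Gaussian posterior this difference has a closed form: conditioning on a nuisance parameter subtracts the variance explained by the correlation. A small numerical illustration with a hypothetical two-parameter covariance (the numbers are not fitted quantities):

```python
import numpy as np

# Hypothetical joint posterior covariance for (H0, Omega_m)
cov = np.array([[0.60, 0.40],
                [0.40, 0.50]])

var_marginal = cov[0, 0]                                 # Var(H0 | D)
var_conditional = cov[0, 0] - cov[0, 1]**2 / cov[1, 1]   # Var(H0 | D, Omega_m)

# Information loss component: E_{i,i}^2 = marginal - conditional variance
e_i_squared = var_marginal - var_conditional
assert var_conditional <= var_marginal  # equality only if uncorrelated
print(e_i_squared)  # 0.32 for this covariance
```

The stronger the H_0–Ω_m correlation, the larger the subtracted term, and hence the larger the information loss attributed to marginalization.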

2.3. Inflation Factors as Proxies for Uncertainty Underestimation

In our model, we introduce variance inflation factors λ_i to account for possible underestimation of the reported uncertainties. It is crucial to clarify that the term “inflation” here refers to a statistical increase in variance or uncertainty and is entirely distinct from the cosmological epoch of cosmic inflation. These factors λ_i are unitless multipliers that scale the variance reported in a measurement.
σ_{total}^i = √(λ_i · σ_i^2)
This equation defines the total standard deviation σ_{total}^i by applying the variance inflation factor λ_i to the reported variance σ_i^2. The choice of applying λ_i to the variance (σ_i^2) rather than the standard deviation (σ_i) is fundamental for the propagation of statistical errors: the variances of independent error sources are additive. Therefore, if λ_i quantifies an overall inflation of uncertainty, it must operate on the variance. This means that the total variance is (σ_{total}^i)^2 = λ_i · σ_i^2. This form is particularly convenient, as it allows us to decompose the total variance into the reported variance plus an additional variance component: (σ_{total}^i)^2 = σ_i^2 + (λ_i − 1)σ_i^2. The term (λ_i − 1)σ_i^2 then explicitly represents the additional variance attributed to effects such as unmodeled systematic errors or information loss, which are not captured by the originally reported σ_i^2. As such, the variance inflation factors effectively encompass these unmodeled systematic errors and the information loss due to marginalization.
The variance decomposition follows:
(σ_{total}^i)^2 = σ_i^2 + (λ_i − 1)σ_i^2
where σ_i^2 represents the reported variance (measurement error) and (λ_i − 1)σ_i^2 represents the additional variance from underestimated uncertainties. It is important to note that our framework specifically models information loss, which implies λ ≥ 1. Scenarios where uncertainties might be overestimated (i.e., λ < 1) represent a different type of systematic effect that falls outside the scope of our current information loss modeling, which focuses on variance inflation.

2.4. Separation of Error Components

In our decomposition model, we define the observed tension between two cosmological probes as a sum of three distinct components: real physical tension (T_{real}), statistical measurement error (E_m), and information loss error (E_i). To define these components, particularly the information loss error, we consider how the uncertainties combine.
For a pair of probes A and B, the measurement error component (E_m^{AB}), representing the statistical uncertainty inherent in the observation process, is defined as the quadrature sum of their individual reported standard deviations:
E_m^{AB} = √(σ_A^2 + σ_B^2)
Consequently, the variance due to measurement error is (E_m^{AB})^2 = σ_A^2 + σ_B^2. The total uncertainty (E_{total}^{AB}), which includes the effect of the inflation factors (λ_A and λ_B) that account for potential underestimation of the reported uncertainties (including information loss), is given by:
E_{total}^{AB} = √(λ_A σ_A^2 + λ_B σ_B^2)
Thus, the total variance is (E_{total}^{AB})^2 = λ_A σ_A^2 + λ_B σ_B^2.
We can now rigorously define the information loss error (E_i^{AB}) as the component that, when added in quadrature to the measurement error, yields the total uncertainty. This implies that the variance due to information loss is the difference between the total variance and the measurement variance. This approach is consistent with the principle that independent sources of variance add linearly:
(E_i^{AB})^2 = (E_{total}^{AB})^2 − (E_m^{AB})^2 = (λ_A σ_A^2 + λ_B σ_B^2) − (σ_A^2 + σ_B^2)
= (λ_A − 1)σ_A^2 + (λ_B − 1)σ_B^2
Therefore, the standard deviation of the information loss error is:
E_i^{AB} = √((λ_A − 1)σ_A^2 + (λ_B − 1)σ_B^2)
This formulation ensures that the error components (measurement and information loss) add in quadrature to form the total uncertainty, reflecting their independent contributions to the overall variance:
(E_{total}^{AB})^2 = (E_m^{AB})^2 + (E_i^{AB})^2
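The quadrature identity can be checked numerically. The sketch below uses the reported σ values from Section 1 but purely illustrative inflation factors (the actual λ are estimated later by the MCMC):

```python
import math

# Reported standard deviations and illustrative inflation factors
sigma_a, sigma_b = 0.35, 0.86   # e.g., CMB and SNIa (km/s/Mpc)
lam_a, lam_b = 1.5, 1.3         # hypothetical lambda >= 1

e_m = math.sqrt(sigma_a**2 + sigma_b**2)                              # measurement error
e_total = math.sqrt(lam_a * sigma_a**2 + lam_b * sigma_b**2)          # inflated total
e_i = math.sqrt((lam_a - 1) * sigma_a**2 + (lam_b - 1) * sigma_b**2)  # information loss

# The two components add in quadrature to the total variance
assert abs(e_total**2 - (e_m**2 + e_i**2)) < 1e-12
print(e_m, e_i, e_total)
```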

2.5. Information-Theoretic Interpretation

From an information-theoretic perspective, the separation of E_i can be rigorously justified by considering the reduction in information content when a complex multidimensional posterior distribution is simplified. The information loss can be related to the Kullback–Leibler (KL) divergence:
E_i^2 ∝ D_{KL}( p(θ | D) ‖ p(H_0 | D) p(θ_{¬H_0} | D) )
This equation establishes an information-theoretic basis for quantifying E_i, the information loss component. The KL divergence, D_{KL}(P ‖ Q), is a standard non-symmetric measure of the difference between two probability distributions P and Q. In this context, it precisely quantifies the “information lost” (or the increase in uncertainty) when the true, potentially correlated, joint posterior distribution p(θ | D) is approximated by the factorized distribution p(H_0 | D) p(θ_{¬H_0} | D), which implicitly assumes independence between H_0 and the other marginalized parameters θ_{¬H_0}. The proportionality of E_i^2 (the variance component of information loss) to the KL divergence directly formalizes how the discrepancy arising from such an approximation translates into an effective increase in variance.
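For a bivariate Gaussian posterior, this KL divergence between the joint and the factorized product of marginals has a closed form: it equals the mutual information, −½ ln(1 − ρ²), where ρ is the correlation between H_0 and the nuisance parameter. A quick sketch of how the divergence grows with correlation:

```python
import math

def gaussian_info_loss(rho):
    """D_KL(joint || product of marginals) for a bivariate Gaussian,
    i.e., the mutual information -0.5 * ln(1 - rho^2), in nats."""
    return -0.5 * math.log(1.0 - rho**2)

for rho in (0.0, 0.5, 0.8, 0.95):
    print(rho, gaussian_info_loss(rho))
# The divergence is zero when uncorrelated and grows without bound as |rho| -> 1,
# mirroring how strong parameter degeneracies magnify the information-loss penalty.
```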
This concept is closely related to the Fisher information. Although the Fisher Information Matrix (FIM) represents the maximum possible information about the parameters contained in the data, the act of marginalization effectively reduces the “effective” information available for the parameter of interest. In a Bayesian context, this loss manifests itself as an increase in the marginal variance of H 0 compared to its conditional variance (where other parameters are fixed). This increase in variance is precisely what our information loss component E i aims to capture. The Cramer–Rao bound, which states that the variance of an unbiased estimator is bounded below by the inverse of the Fisher information, implies that any loss of information will lead to a larger achievable variance for the parameter estimate.
In cosmology, where parameters are often highly correlated (e.g., between H 0 and Ω m , or between different dark energy parameters), ignoring these correlations during the projection from a high-dimensional parameter space to a single parameter like H 0 leads to an underestimation of the true uncertainty or, more accurately, a misrepresentation of the information content. The variance inflation factors ( λ i ) introduced in our model serve as a direct proxy for this information loss, quantifying how much the variance of H 0 expands when considering the full parameter space and its correlations. Thus, E i provides a formal measure of the cost incurred by projecting complex cosmological models onto a simplified parameter space, a cost that becomes increasingly relevant in the era of precision cosmology, where even subtle biases can impact the interpretation of fundamental constants.

3. The Total Constraining Information

In statistical analysis, information loss is a critical concept that refers to the reduction in the ability to make precise inferences about parameters of interest due to various factors in the data collection, processing, or analysis stages. In cosmology, information loss is a critical concern because of the limited observational data available and the complexity of cosmological models. For instance, in the analysis of the Cosmic Microwave Background (CMB), the compression of full-sky maps into power spectra results in some information loss, particularly regarding non-Gaussian features. A concept related to information loss (IL) is total constraining information (TCI), which refers to the total amount of information available in a dataset that can be used to constrain or determine model parameters. The concept is closely related to the Fisher information matrix, with the determinant of the Fisher matrix serving as a measure of total constraining information. Mathematically, it can be understood as the volume of parameter space excluded by observations. The greater the constraining information, the smaller the allowed volume in the parameter space. The Fisher matrix represents the total information theoretically available:
F_{ij} = −∂² ln L / ∂θ_i ∂θ_j
where L is the likelihood function and θ_i are the cosmological parameters. We modify the Fisher matrix to include information loss effects:
F′_{ij} = α_{ij} · F_{ij}
This equation introduces the concept of information loss directly within the Fisher Information Matrix (FIM) framework. Here, the α_{ij} are loss coefficients, elements of a matrix α (0 < α_{ij} ≤ 1). They represent the fraction of information effectively extracted or preserved from the observations for a given pair of parameters (i, j). The choice of direct multiplication, where each element F_{ij} of the original FIM is scaled by a corresponding α_{ij}, allows for a precise, element-wise attenuation of information. This functional form directly models the idea that not all of the information contained in the theoretical FIM is necessarily realized or extracted from observational data. These loss coefficients are inversely related to the variance inflation factors (λ) introduced in Section 2.3. The reason for this inverse relationship stems from the fundamental principle that information and uncertainty (variance) are inversely proportional in statistical estimation. If α_{ij} quantifies the fraction of information retained, then a factor of 1/α_{ij} naturally represents the inflation of variance due to this information reduction. Specifically, for a single parameter, λ = 1/α, indicating that a reduction in information (smaller α) leads to a proportional increase in uncertainty (larger λ).
The TCI is a fundamental concept in our analysis, defined as the logarithm of the determinant of the Fisher matrix. With the introduction of loss coefficients, the TCI is expressed as:
TCI = ln|F′| = ln|α ∘ F|
where α is the matrix of loss coefficients and F is the original Fisher matrix. If the loss coefficients are assumed to primarily affect the diagonal elements of the Fisher matrix, or if α is effectively diagonal with elements α_{ii} applied to the corresponding diagonal elements of F, the expression can be approximated as:
TCI ≈ Σ_i ln(α_{ii}) + ln|F|
Here, ln|F| represents the “ideal” or theoretical TCI without information loss. The term Σ_i ln(α_{ii}) represents the reduction in TCI due to information loss, specifically from the diagonal components. Since 0 < α_{ii} ≤ 1, the term Σ_i ln(α_{ii}) is always non-positive, reducing the TCI relative to the ideal case. We clarify that this decomposition is strictly valid only if the scaling matrix α is diagonal, a common simplification when modeling total information, or if the off-diagonal effects on the determinant are negligible for practical purposes. Our MCMC framework directly estimates the elements of the α matrix, allowing for more general cases.
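One concrete case where the decomposition is exact is a symmetric attenuation mask, α_{ij} = √(α_{ii} α_{jj}), so that α ∘ F = D^{1/2} F D^{1/2} with D = diag(α_{ii}); the determinant then factorizes exactly. A sketch under that assumption, with toy (not fitted) values:

```python
import numpy as np

# Toy 2x2 Fisher matrix and diagonal loss coefficients (assumed values)
F = np.array([[4.0, 1.0],
              [1.0, 3.0]])
alpha_diag = np.array([0.8, 0.9])

# Symmetric attenuation mask: alpha_ij = sqrt(alpha_ii * alpha_jj)
mask = np.sqrt(np.outer(alpha_diag, alpha_diag))
F_prime = mask * F  # Hadamard (element-wise) product

tci_ideal = np.log(np.linalg.det(F))
tci_lossy = np.log(np.linalg.det(F_prime))

# ln|alpha o F| = sum_i ln(alpha_ii) + ln|F| holds exactly in this case
assert np.isclose(tci_lossy, np.log(alpha_diag).sum() + tci_ideal)
print(tci_ideal, tci_lossy)  # lossy TCI is lower, since alpha_ii <= 1
```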
Relating cosmological parameters to information loss coefficients is a crucial aspect of this analysis. The starting point is the Fisher Information Matrix (FIM). For cosmological parameters θ , the FIM elements are given by:
F_{ij} = −∂² L / ∂θ_i ∂θ_j
where L is the log-likelihood of the data given the parameters. The inverse of the FIM provides a lower bound on the covariance matrix of the parameter estimates (Cramer–Rao bound). This gives us an idea of the best possible constraints on cosmological parameters. For a combined analysis, we construct a total Fisher matrix:
F_{total} = F_{CMB} + F_{BAO} + F_{SN} + F_{H(z)} + … = Σ_i F_i,
where each F i is the Fisher matrix for the respective probe. We can now introduce loss coefficients for each probe and parameter:
F′_{total} = α_{CMB} ∘ F_{CMB} + α_{BAO} ∘ F_{BAO} + α_{SN} ∘ F_{SN} + α_{H(z)} ∘ F_{H(z)} + …
where ∘ denotes the Hadamard (element-wise) product. This element-wise multiplication is specifically chosen because the matrices α_i are designed to act as “attenuation masks” on the original Fisher matrices. Each element α_{jk} within α_i directly scales the corresponding jk-th element of the Fisher matrix F_i, allowing a granular, heterogeneous reduction of information across different parameter correlations. This contrasts with standard matrix multiplication, which would imply a linear transformation of the parameter space, and is not what our model intends to represent for information loss.
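Combining probes with element-wise masks can be sketched directly; the Fisher matrices and α values below are illustrative placeholders over a (H_0, Ω_m) parameter pair, not fitted quantities:

```python
import numpy as np

# Illustrative per-probe Fisher matrices over (H0, Omega_m)
F_cmb = np.array([[8.0, 2.0], [2.0, 6.0]])
F_sn  = np.array([[3.0, 0.5], [0.5, 1.0]])

# Per-probe attenuation masks (elements in (0, 1]); symmetric by construction
alpha_cmb = np.array([[0.90, 0.70], [0.70, 0.80]])
alpha_sn  = np.array([[0.95, 0.60], [0.60, 0.90]])

# Hadamard (element-wise) product per probe, then sum across probes
F_total = alpha_cmb * F_cmb + alpha_sn * F_sn

assert np.allclose(F_total, F_total.T)  # attenuation preserves symmetry
print(F_total)
```

Note that `*` on NumPy arrays is already the element-wise (Hadamard) product; matrix multiplication (`@`) would instead mix parameter directions, which is exactly what the text argues against.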
This approach allows us to maximize the information content from diverse observational probes while quantifying and accounting for potential information loss. By carefully modeling the interplay between different datasets and their associated systematics, we can obtain robust constraints on cosmological parameters and gain insights into the nature of information degradation in cosmology. In particular, the Hubble tension refers to the discrepancy between measurements of the Hubble constant ( H 0 ) derived from early Universe probes and those from late Universe probes. Understanding this division is crucial for analyzing the tension and applying our Fisher matrix approach with information loss coefficients.

3.1. Generation of Information Loss Coefficients

The likelihood function is modified to include the information loss coefficients, and priors for the α_{ij} are drawn from Beta distributions, Beta(a, b), where a and b are hyperparameters controlling the shape of the distribution. The coefficients α_{ij} represent the fraction of theoretical information effectively extracted from the observations. They range from 0 to 1, where 1 means no information loss and 0 means total information loss. We use a Markov chain Monte Carlo (MCMC) algorithm (e.g., Metropolis–Hastings or Hamiltonian Monte Carlo) to estimate the α_{ij} values. The MCMC explores the parameter space, including both the cosmological parameters and the α_{ij}.
The sampling process follows two basic steps. First, we start with initial values for the α_{ij} (either prior draws or arbitrary values between 0 and 1). Then, at each MCMC iteration:
  • Propose new values for α_{ij}.
  • Calculate the modified Fisher matrix F′_{ij} = α_{ij} · F_{ij}.
  • Calculate the likelihood using this modified matrix.
  • Accept or reject the new values according to the MCMC acceptance ratio.
It is important to note that the likelihood function is modified to include the α_{ij}:
L(θ, α | data) ∝ exp(−0.5 (θ − θ_{fid})^T F′ (θ − θ_{fid})) · p(α)
where θ are the cosmological parameters, θ_{fid} are fiducial values, F′ is the modified Fisher matrix, and p(α) is the prior on the α_{ij}. After the MCMC run, we analyze the posterior distribution of the α_{ij} and calculate means, medians, and confidence intervals for each α_{ij}.
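The loop described above can be sketched as a minimal random-walk Metropolis sampler for a single loss coefficient. Everything here is a placeholder chosen for illustration (a one-parameter toy Fisher matrix, arbitrary observed and fiducial H_0, and Beta(5, 2) hyperparameters), not the paper's fitted setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder ingredients: 1-parameter toy problem (H0 only)
F = np.array([[1.2]])            # "ideal" Fisher information for H0
theta_obs = np.array([70.0])     # toy observed H0
theta_fid = np.array([70.5])     # toy fiducial H0
a, b = 5.0, 2.0                  # Beta prior favoring little information loss

def log_post(alpha):
    """Log of (Gaussian likelihood with modified Fisher matrix) x (Beta prior)."""
    if not (0.0 < alpha < 1.0):
        return -np.inf                                  # outside the support
    F_mod = alpha * F                                   # modified Fisher matrix
    d = theta_obs - theta_fid
    # Gaussian log-likelihood; the log-det term keeps the alpha-dependent normalization
    log_like = -0.5 * d @ F_mod @ d + 0.5 * np.log(np.linalg.det(F_mod))
    log_prior = (a - 1) * np.log(alpha) + (b - 1) * np.log1p(-alpha)
    return log_like + log_prior

# Random-walk Metropolis over alpha
alpha, chain = 0.5, []
for _ in range(5000):
    prop = alpha + 0.1 * rng.standard_normal()          # propose new value
    if np.log(rng.random()) < log_post(prop) - log_post(alpha):
        alpha = prop                                    # accept
    chain.append(alpha)

samples = np.array(chain[1000:])  # discard burn-in
print(samples.mean())             # posterior mean of alpha
```

A production analysis would sample all α_{ij} and the cosmological parameters jointly (e.g., with Hamiltonian Monte Carlo), but the accept/reject structure is the same.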

3.2. Separating Measurement Error and Information Loss in Hubble Tension

We propose a Bayesian hierarchical model to separate the contributions of measurement error and information loss to the observed Hubble tension, as defined in Equation (1). For the purpose of illustrating our methodology, in this work we focus on the well-known tension between the CMB and SNIa measurements of H 0 . Future work will extend this framework to incorporate a broader range of cosmological datasets.
The variance inflation factors λ_{CMB} and λ_{SNIa} are defined as:
λ_{CMB} = Var(H_{0,CMB})_{total} / Var(H_{0,CMB})_{marginal}
λ_{SNIa} = Var(H_{0,SNIa})_{total} / Var(H_{0,SNIa})_{marginal}
These equations rigorously define the variance inflation factors for the CMB and SNIa measurements. The functional form, a simple ratio of variances, directly quantifies the extent to which the total uncertainty (Var(H_0)_{total}) exceeds the uncertainty attributed solely to conventional measurement errors (Var(H_0)_{marginal}). This empirical definition captures any additional variance, including that arising from information loss due to parameter marginalization or unmodeled systematic effects, that is not captured by the reported measurement uncertainty. In other words, the variance inflation factors quantify how much the variance of H_0 increases when we account for the full parameter space compared to considering H_0 in isolation.
The information loss component (E_i) is calculated as the standard deviation of the additional variance not accounted for by measurement errors, consistent with the rigorous separation in Section 2.4:
E_i = √((λ_{CMB} − 1) σ_{CMB,obs}^2 + (λ_{SNIa} − 1) σ_{SNIa,obs}^2)
This equation gives the standard deviation of the information loss component E_i. Its functional form is derived directly from the variance decomposition presented in Section 2.4. The term (λ − 1)σ_{obs}^2 represents the additional variance, beyond the reported measurement error, attributable to information loss for each probe. By summing these additional variance contributions for the CMB and SNIa and taking the square root, we obtain the combined standard deviation of the information loss. This quadrature sum is appropriate because it assumes that the information loss contributions from the two distinct probes are independent sources of variance, adding linearly in terms of squared standard deviations.
The real tension T_{real} is then estimated as a parameter within the Bayesian framework, representing the true underlying discrepancy. Consistent with the observational data outlined in Section 1, our analysis uses the following key measurements of the Hubble constant:
  • H_{0,CMB} = 67.24 ± 0.35 km/s/Mpc from Planck 2018 data.
  • H_{0,SNIa} = 73.17 ± 0.86 km/s/Mpc from the SH0ES (Supernova H_0 for the Equation of State) team.
The combination of these values yields an observed tension of T_{observed} = 5.94 km/s/Mpc. Our aim is to quantify how much measurement uncertainty and information loss contribute to this value.

3.3. Model Specification

Bayesian analysis provides a natural framework for decomposing the Hubble tension, allowing uncertainties to be incorporated explicitly at multiple levels of the problem and the relative contributions from different sources of variance to be quantified directly. Our Bayesian hierarchical model formally characterizes the observed H_0 tension between the CMB and SNIa datasets as arising from three fundamental components: the real physical tension (T_{real}), measurement error (E_m), and information loss due to projection from multidimensional parameter spaces (E_i). The hierarchical structure of the model recognizes that these components are not directly observed but emerge from a generative process involving variance inflation factors for each method (λ_{CMB} and λ_{SNIa}).
At the top level of the hierarchy, we model the observed difference between the H_0 estimates as a quantity arising from an underlying generative process. At the intermediate level, this difference is decomposed into components with distinct physical and statistical meanings. At the lower level, the variance inflation factors, which capture the covariance structure of the full parameter spaces, are treated as parameters to be estimated from the data rather than as fixed values. A crucial advantage of this approach is that uncertainty in the estimation of the variance inflation factors is automatically propagated to our conclusions about the magnitude of the real tension. Additionally, the Bayesian model allows a direct interpretation of the confidence intervals for the real tension and provides a foundation for formal model comparison should different structures for the tension decomposition be considered.
The complete mathematical formulation of the model begins with the specification of prior distributions for all parameters. These priors represent our knowledge or beliefs about the parameters before observing the specific data in this analysis. The choice of these prior distributions is guided by both theoretical considerations and previous empirical results related to the structure of cosmological parameter spaces.

3.3.1. Priors

The choice of prior distributions is a critical step in Bayesian analysis, as they encode our prior knowledge or assumptions about the parameters before observing the data. We carefully selected our priors to balance weak informativeness with physical consistency:
For the true values of the Hubble constant, H 0 , CMB , true and H 0 , SN Ia , true , we employ Normal distributions (N):
H0,CMB,true ∼ N(μCMB, σCMB)
H0,SNIa,true ∼ N(μSNIa, σSNIa)
This choice is standard for continuous parameters that are expected to be centered on a specific value and possess a quantifiable uncertainty. The Normal distribution is the maximum-entropy distribution for a given mean and variance, making it a flexible default choice. We set these as weakly informative priors by centering them on the observed values of H 0 (e.g., μ CMB = 67.24 km/s/Mpc) but assigning sufficiently wide standard deviations (e.g., σ CMB much larger than the observed uncertainty) to ensure that the data, rather than the prior, primarily drive the posterior inference. This approach allows the MCMC sampler to efficiently explore the parameter space while maintaining physical realism.
  • For the information loss coefficients, α CMB and α SN Ia , we use Beta distributions:
αCMB ∼ Beta(aCMB, bCMB)
αSNIa ∼ Beta(aSNIa, bSNIa)
The Beta distribution is a continuous probability distribution defined on the interval [ 0 , 1 ] , which makes it the natural and most appropriate choice for parameters that represent proportions, fractions, or probabilities. Since the coefficients α represent the “fraction of theoretical information effectively extracted from observations” (i.e., α [ 0 , 1 ] ), the Beta distribution is perfectly suited. Its two positive shape parameters, a and b, provide significant flexibility to model various prior beliefs about the distribution of information loss, ranging from uniform (e.g., Beta (1,1)) to skewed toward higher (e.g., Beta (5,1)) or lower (e.g., Beta (1,5)) values. This flexibility allows us to specify informative but relatively wide priors, as discussed in detail in the Bayesian implementation section (Section 3.5), allowing the data to primarily inform the magnitude of information loss.
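As an illustrative sketch of these prior choices (written in Python with SciPy rather than the authors' Stan code; the numerical hyperparameters here are placeholders, not the values used in the paper):

```python
from scipy import stats

# Weakly informative Normal priors for the true H0 values, centered on the
# observed estimates but with deliberately wide standard deviations
# (the widths below are illustrative placeholders).
prior_H0_cmb_true = stats.norm(loc=67.24, scale=5.0)
prior_H0_snia_true = stats.norm(loc=73.17, scale=5.0)

# Beta priors for the information-loss coefficients alpha in [0, 1].
# Beta(1, 1) is uniform; Beta(5, 1) skews toward alpha near 1 (little loss);
# Beta(1, 5) would skew toward alpha near 0 (heavy loss).
prior_alpha_uniform = stats.beta(1, 1)
prior_alpha_high = stats.beta(5, 1)

# The Beta(a, b) mean is a / (a + b), so Beta(5, 1) concentrates mass near 1.
print(prior_alpha_high.mean())     # 5/6 ~ 0.833
print(prior_alpha_uniform.mean())  # 0.5
```

Because the Beta support is exactly [0, 1], no additional truncation is needed to keep the sampled α within its physical bounds.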

3.3.2. Likelihood

The likelihood function is given by:
L(Htrue,CMB, Htrue,SNIa, αCMB, αSNIa | Hobs,CMB, Hobs,SNIa) = p(Hobs,CMB, Hobs,SNIa | Htrue,CMB, Htrue,SNIa, αCMB, αSNIa)
with the expressions below representing the generative model for our observations, that is, how the observed measurements of H 0 are generated as a function of the true values and their respective uncertainties.
H0,CMB,obs ∼ N(H0,CMB,true, σ²CMB,obs / αCMB)
H0,SNIa,obs ∼ N(H0,SNIa,true, σ²SNIa,obs / αSNIa)
These equations specify the generative model for the observed CMB and SNIa measurements of H 0 . The use of a Normal (Gaussian) distribution is a standard assumption for modeling observational errors, often justified by the Central Limit Theorem when multiple small error sources contribute to the total uncertainty. The important aspect of its functional form lies in the variance term which explicitly incorporates the information loss coefficients α CMB and α SNIa into the precision of the observed data. Since α is a fraction ( 0 < α 1 ), dividing by it effectively inflates the reported variances σ CMB , obs 2 and σ SNIa , obs 2 . A smaller α (indicating greater information loss) leads to a larger variance, implying a less precise observed measurement. This directly reflects how the “loss of information” manifests in the observational data, making the observed H 0 value effectively less constrained.
Expanding these expressions using probabilistic models, we have the following:
L = [1/√(2π σ²CMB,obs/αCMB)] exp[−(Hobs,CMB − Htrue,CMB)² / (2 σ²CMB,obs/αCMB)] × [1/√(2π σ²SNIa,obs/αSNIa)] exp[−(Hobs,SNIa − Htrue,SNIa)² / (2 σ²SNIa,obs/αSNIa)]
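The role of α as a variance inflation mechanism can be checked numerically. The following is a hedged sketch (our own illustrative code, not the published pipeline), using the quoted Planck-like central value and uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observed_H0(H0_true, sigma_obs, alpha, n=100_000, rng=rng):
    """Draw observed H0 values from the generative model
    H0_obs ~ Normal(H0_true, sigma_obs^2 / alpha)."""
    return rng.normal(H0_true, sigma_obs / np.sqrt(alpha), size=n)

# Full information extraction (alpha = 1) vs. 50% information loss.
full = simulate_observed_H0(67.24, 0.35, alpha=1.0)
half = simulate_observed_H0(67.24, 0.35, alpha=0.5)

# Halving alpha doubles the effective variance, so the standard
# deviation grows by a factor of sqrt(2).
print(full.std())  # ~0.35
print(half.std())  # ~0.35 * sqrt(2) ~ 0.495
```

This makes the text's point concrete: a smaller α (greater information loss) yields a less precise effective measurement without shifting its central value.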

3.4. Derived Quantities

Tobserved = H0,SNIa,obs − H0,CMB,obs
Em = σ²CMB,obs + σ²SNIa,obs
Ei = (λSNIa − 1) σ²SNIa,obs + (λCMB − 1) σ²CMB,obs
Treal = H0,SNIa,true − H0,CMB,true
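For a single posterior draw, these definitions translate directly into code. The following Python sketch (our notation, not the paper's Stan source) evaluates them at posterior-mean-like values quoted elsewhere in the text:

```python
def decompose_tension(H0_snia_true, H0_cmb_true,
                      sigma_cmb_obs, sigma_snia_obs,
                      lam_cmb, lam_snia):
    """Evaluate the derived quantities of Section 3.4 for one posterior draw."""
    T_real = H0_snia_true - H0_cmb_true                  # real tension
    E_m = sigma_cmb_obs**2 + sigma_snia_obs**2           # measurement error
    E_i = ((lam_snia - 1) * sigma_snia_obs**2
           + (lam_cmb - 1) * sigma_cmb_obs**2)           # information loss
    return T_real, E_m, E_i

# Illustrative inputs: the rounded central values and the posterior-mean
# inflation factors (lambda ~ 1.45 and 1.50) reported in Section 4.
T_real, E_m, E_i = decompose_tension(73.17, 67.24, 0.35, 0.86, 1.45, 1.50)
print(T_real, E_m, E_i)
```

In the actual analysis these quantities are computed for every MCMC draw, giving a full posterior distribution for each component rather than a single point value.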

3.5. Bayesian Implementation

The Bayesian analysis was implemented using the Stan probabilistic programming language, which employs a No-U-Turn Sampler (NUTS), a highly efficient variant of Hamiltonian Monte Carlo (HMC). This choice is particularly advantageous for complex, high-dimensional models as it navigates the parameter space effectively, reducing issues like random walk behavior and improving sampling efficiency.
We ran 4 independent MCMC chains, each initialized from different random starting points to ensure robust exploration of the posterior distribution and to facilitate convergence diagnostics. Each chain consisted of 5000 iterations. The first 2000 iterations of each chain were designated as warm-up (or burn-in) and discarded. This warm-up phase allows the sampler to adapt its parameters and converge to the target posterior distribution, ensuring that the subsequent samples are representative. This configuration resulted in a total of 12,000 retained posterior samples (4 chains × (5000 − 2000) samples/chain) used for subsequent inference.
The prior distributions for all the model parameters were carefully specified. For the true Hubble constant values ( H 0 , C M B , t r u e and H 0 , S N I a , t r u e ), we used weakly informative normal priors, centered on the observed values, but with sufficiently wide standard deviations to allow the data to primarily drive the inference. For the information loss coefficients ( α C M B and α S N I a ), Beta priors were used. These were set to be informative but relatively wide, reflecting the expectation that some information loss might occur, but avoiding overly strong assumptions about its magnitude. This choice aligns with the goal of allowing the data to inform the extent of information loss, while respecting the physical bounds of the parameters α (0 to 1). The prior for the real tension component ( T r e a l ) was chosen to be non-informative (e.g., a very wide normal distribution or a flat prior), ensuring that the data primarily drive its posterior distribution.
The convergence of the MCMC chains was rigorously assessed to ensure that the samples accurately represent the target posterior distribution. We monitored the R-hat statistic for all parameters, aiming for values close to 1.0 (typically below 1.01–1.05), which indicates good mixing and convergence across chains. Furthermore, the effective sample size (ESS) was checked to ensure a sufficient number of independent samples for reliable inference, generally targeting ESS values greater than 400–1000 for each parameter. These diagnostics confirmed that the chains had adequately explored the parameter space.
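For reference, a minimal NumPy sketch of the split-R-hat statistic is shown below. This is a simplified textbook version of the diagnostic; Stan's production implementation additionally applies rank normalization:

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat convergence diagnostic (simplified).
    `chains` has shape (n_chains, n_draws); each chain is split in half,
    and between-half variance is compared with within-half variance."""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    splits = chains[:, : 2 * half].reshape(2 * n_chains, half)
    n = splits.shape[1]
    chain_means = splits.mean(axis=1)
    W = splits.var(axis=1, ddof=1).mean()    # within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
good = rng.normal(0.0, 1.0, size=(4, 3000))  # 4 well-mixed synthetic chains
print(split_rhat(good))                      # close to 1.0 for mixed chains
```

Chains whose means disagree inflate the between-chain term B, pushing R-hat well above the ~1.01 threshold used in the text.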
From the converged posterior samples, summary statistics (mean, median, standard deviation) were calculated for all model parameters and derived quantities (such as E m , E i , and T r e a l ). Posterior confidence intervals (e.g., 95% confidence intervals) were calculated directly from the percentiles of the MCMC samples, providing a robust measure of uncertainty for each parameter. The full Stan model code and data used for this analysis are available upon request/in a supplementary repository to ensure reproducibility.

4. Results and Interpretation

Our Bayesian analysis, employing the framework detailed in Section 2 and Section 3, provides a robust decomposition of the observed Hubble tension. The primary objective is to quantify the contributions of the real physical discrepancy, measurement errors, and loss of information to the overall tension. The results strongly indicate that the Hubble tension remains a statistically significant phenomenon even after accounting for these factors.

4.1. Posterior Estimates

  • Real Tension ( T r e a l ): The posterior distribution of T r e a l shows a mean value of 5.94 km/s/Mpc and a median of 5.92 km/s/Mpc. The 95% confidence interval is [3.32, 8.64] km/s/Mpc. As illustrated in Figure 1, this interval clearly excludes zero, indicating that the observed discrepancy is not merely a statistical artifact, but reflects a genuine physical phenomenon. The Bayesian significance further supports this, with 100% of posterior samples for T r e a l greater than zero.
  • Variance Inflation Factors ( λ ): The estimated variance inflation factors are λ C M B = 1.45 (95% CI: [0.81, 2.05]) and λ S N I a = 1.50 (95% CI: [0.85, 2.12]). These values, also visualized in Figure 2, are significantly greater than 1, confirming the presence of additional variance beyond the reported measurement uncertainties. The similarity in the values of λ C M B and λ S N I a suggests that both the CMB and SNIa measurements are affected by comparable levels of information loss or unmodeled systematic uncertainties.
Figure 1. Posterior distribution of T r e a l . The vertical dashed line indicates the mean value, while the dotted vertical lines indicate the 95% confidence interval.
Figure 2. Posterior distribution of λ C M B and λ S N I a . Dashed lines indicate the respective mean values.

4.2. Decomposition of Observed Tension

A critical aspect of our analysis is the decomposition of the observed Hubble tension into its constituent components. This decomposition is based on the posterior estimates of our model’s fundamental parameters and their derived quantities. We clarify that the components—Real Tension ( T real ), Measurement Error ( E m ), and Information Loss ( E i )—are derived quantities from the MCMC posterior samples, not directly sampled independent parameters. For each MCMC sample drawn from the joint posterior distribution of our primary parameters ( H 0 , CMB , true , H 0 , SN Ia , true , α CMB , and α SN Ia ), we compute the corresponding values for T real , E m , and E i using the definitions in Section 3.4. This process generates full posterior distributions for each of these derived quantities.
To quantify their relative contributions to the total observed tension variance, we calculate the variance of the posterior distribution of each derived component: Var ( T real ) , Var ( E m ) , and Var ( E i ) . The total variance used for this decomposition is then taken as the sum of these component variances, implicitly treating their contributions as orthogonal for the purpose of this analysis: Var ( T observed ) Var ( T real ) + Var ( E m ) + Var ( E i ) . This decomposition is presented in Figure 3.
  • Real Tension: Approximately 77.78% of the variance of the observed tension is attributed to the real physical discrepancy ( T r e a l ). This is the dominant component, reinforcing the conclusion that the Hubble tension is primarily a genuine astrophysical puzzle.
  • Measurement Error: In total, 12.98% of the variance of the observed tension is accounted for by standard measurement errors ( E m ).
  • Information Loss: The remaining 9.24% of the variance of the observed tension is attributed to information loss ( E i ), arising from model simplifications and projection of the parameter space. This component, while smaller than the real tension, is non-negligible and highlights the importance of accounting for such effects in cosmological analyses.
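The decomposition procedure above can be sketched as follows; the synthetic draws stand in for the real MCMC samples, and their scales are chosen only to roughly mimic the published shares:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the 12,000 posterior draws of the three derived
# components (illustrative scales only; the real samples come from MCMC).
components = {
    "real tension": rng.normal(5.94, 1.35, size=12_000),
    "measurement error": rng.normal(0.86, 0.55, size=12_000),
    "information loss": rng.normal(0.43, 0.47, size=12_000),
}

# Treat the component contributions as orthogonal: the total variance is
# the sum of the component variances, and each share is its fraction.
total_var = sum(s.var() for s in components.values())
for name, samples in components.items():
    share = 100 * samples.var() / total_var
    print(f"{name}: {share:.1f}% of the observed-tension variance")
```

By construction the shares sum to 100%, which is exactly the orthogonality assumption stated above; any covariance between components would redistribute these percentages.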
These findings suggest that the Hubble tension persists even when accounting for measurement errors and information loss. The discrepancy between the CMB and SNIa measurements of H 0 appears to be largely real and cannot be explained solely by measurement uncertainties or information loss. This implies that the Hubble tension may indeed point to new physics beyond the standard cosmological model or to unidentified systematic errors in one or both measurement methods.
This result is complementary to those shown previously. In Figure 1 we present the posterior distribution of real tension ( T r e a l ), clearly showing that T r e a l is significantly non-zero. In Figure 2 we show the posterior distributions of the variance inflation factors ( λ C M B and λ S N I a ). This allows for a visual comparison of the estimated information loss for each probe and confirms that these factors are greater than 1. In Figure 3, we present a pie chart that illustrates the decomposition of observed tension variance, with the percentage contributions of real tension, measurement error, and information loss to the total variance of observed tension.
To gain a deeper understanding of the interplay between these components and the underlying parameters driving this decomposition, we present the corner plot of the simulated posterior distributions in Figure 4. This visualization allows for a qualitative assessment of the marginal distributions of each parameter ( T r e a l , λ C M B , λ S N I a , E i , H 0 , C M B , t r u e , H 0 , S N I a , t r u e ) as well as their joint dependencies.
The diagonal panels of Figure 4 show the marginal probability density functions. For T r e a l , the distribution is clearly centered at a non-zero value, reinforcing the conclusion from Figure 3 that a significant portion of the observed tension stems from a genuine physical phenomenon. In particular, the posterior distributions for the variance inflation factors, λ C M B and λ S N I a , exhibit a distinct left-truncation at 1. This characteristic is consistent with our model’s theoretical premise, where λ 1 implies variance inflation due to information loss and not an overestimation of uncertainties, as discussed in Section 3.2.
The off-diagonal panels in the upper triangle of Figure 4 illustrate the bivariate correlations between the parameters. The use of hexagonal binning effectively visualizes the density of the simulated samples, providing clear insight into regions of high probability density. We observe strong positive correlations between parameters whose relationships are directly defined within the model. Specifically, there is a clear positive correlation between T r e a l and H 0 , S N I a , t r u e , consistent with T r e a l representing the difference between H 0 , S N I a , t r u e and H 0 , C M B , t r u e . Similarly, a strong positive correlation is evident between the variance inflation factors ( λ C M B and λ S N I a ) and the derived information loss component ( E i ), a direct consequence of the mathematical formulation of E i as a function of these inflation factors; this interdependency provides distributional evidence for the 9.24% contribution of information loss highlighted in Figure 3.
In addition, we quantify the contributions of each component to the observed Hubble tension in terms of standard deviations ( σ ). The observed tension between the H 0 values of Planck CMB (67.24 ± 0.35 km/s/Mpc) and SH0ES SNIa (73.17 ± 0.86 km/s/Mpc) is T o b s e r v e d = 5.94 km/s/Mpc. The combined uncertainty of this observed tension is √(0.35² + 0.86²) ≈ 0.93 km/s/Mpc. Therefore, the observed Hubble tension corresponds to approximately 6.39 σ .
Building upon the variance decomposition presented previously, we can translate these contributions into their respective magnitudes in terms of sigma, noting that a component's σ contribution scales as the square root of its variance fraction:
  • Real Tension: Accounting for 77.78% of the variance, this component corresponds to √0.7778 × 6.39 σ ≈ 5.64 σ of the observed tension.
  • Measurement Error: Contributing 12.98% of the variance, standard measurement errors account for approximately √0.1298 × 6.39 σ ≈ 2.30 σ of the observed tension.
  • Information Loss: With 9.24% of the variance attributed to information loss, this component contributes approximately √0.0924 × 6.39 σ ≈ 1.94 σ to the observed tension.
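Since a standard deviation scales as the square root of a variance, each component's σ contribution follows from the square root of its variance fraction. A quick arithmetic check (our own sketch):

```python
import numpy as np

# Observed significance and variance fractions from the decomposition.
sigma_obs = 6.39
fractions = {
    "real tension": 0.7778,
    "measurement error": 0.1298,
    "information loss": 0.0924,
}

# sigma contribution = sqrt(variance fraction) * observed significance.
for name, f in fractions.items():
    print(f"{name}: {np.sqrt(f) * sigma_obs:.2f} sigma")
```

The dominant real-tension term reproduces the ~5.64 σ figure quoted in the abstract.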
This sigma-based quantification reinforces our conclusion that the Hubble tension is predominantly a real physical phenomenon, with a substantial portion of its magnitude stemming from a genuine discrepancy that cannot be fully explained by statistical or information loss effects alone.

5. Discussion

Our analysis provides compelling evidence that the Hubble tension remains significant even after accounting for both measurement errors and information loss due to parameter-space projection. The posterior distribution of the real tension component ( T r e a l ) shows a mean value of 5.94 km/s/Mpc with a 95% confidence interval of [3.32, 8.64] km/s/Mpc, clearly excluding zero from the range of plausible values. It is important to note that T real represents the discrepancy that persists after rigorously accounting for the uncertainties of the statistical measurement and the inherent loss of information from the marginalization of the parameters within the standard cosmological model framework. Therefore, a significant T real strongly suggests that the observed discrepancy between CMB and SNIa measurements likely reflects a genuine physical phenomenon beyond what can be explained by these statistical and information-processing effects alone. This finding does not preclude the possibility that new physics beyond ΛCDM could ultimately resolve the tension; rather, it provides robust statistical evidence that such a resolution would indeed require going beyond the standard model’s current observational and data analysis interpretations. This finding aligns with the prevailing consensus in the cosmological community that the Hubble tension is a robust discrepancy, often reported at a significance level exceeding 4–5 σ in various independent analyses [5,6,29,30]. Our work reinforces this by demonstrating that even when explicitly modeling and quantifying potential sources of uncertainty and information degradation, the core tension persists.
A key distinguishing feature of our approach, relative to previous Bayesian treatments of cosmological tensions [26,27,28], is the explicit parameterization and decomposition of the observed tension into three distinct components: standard measurement errors, information loss errors arising from model simplifications and parameter-space projection, and the real physical tension. For example, while Feeney et al. [26] applied a Bayesian hierarchical model to clarify tension within the local distance ladder, our framework extends this by specifically isolating and quantifying the contribution of information loss ( E i ) through the introduction of variance inflation factors ( λ ). Our results indicate that approximately 78% of the variance of the observed tension is attributed to the real tension, with measurement error accounting for 13% and information loss for the remaining 9%. The estimated variance inflation factors ( λ C M B ≈ 1.45 and λ S N I a ≈ 1.50 ), being significantly greater than unity, further underscore the importance of accounting for these effects, as they suggest that reported uncertainties might be underestimated or that parameter correlations lead to non-trivial information loss when marginalizing. The estimated real tension, therefore, represents the magnitude of the discrepancy remaining after explicitly modeling identifiable error sources; while strongly suggestive of new physics, it is formally an upper bound on direct evidence for such physics, as it inherently absorbs any unquantified systematic effects not accounted for by our λ parameters. This decomposition provides a more nuanced understanding of the tension’s origins, allowing us to confidently assert that the majority of the discrepancy is not an artifact of our analytical pipeline.
Future studies should extend our variance decomposition framework to incorporate a broader observational landscape. Particularly valuable would be the inclusion of probes that sample intermediate redshifts between the recombination epoch probed by the CMB and the relatively local universe explored by SNIa. The addition of such complementary datasets is expected to have a significant impact on the information loss component. By breaking existing parameter degeneracies, these new data sources should reduce the correlations between H 0 and other cosmological parameters. Consequently, our methodology would predict a decrease in the information loss component ( E i ) and a convergence of the variance inflation factors ( λ ) closer to unity. This reduction in information loss would further strengthen the robustness of the real tension, should it persist, by demonstrating that it is not an artifact of unresolved correlations or projection effects. Conversely, if the information loss component were to remain substantial or even increase with the inclusion of more data, it would point toward more complex or yet unidentified systematic issues in the combined dataset analysis. Furthermore, a multiprobe analysis would allow cross-validation of the estimated variance inflation factors. If similar values of λ emerge from independent dataset combinations, this would strengthen confidence in our quantification of information loss. In contrast, significant variations in these factors across different probe combinations might indicate probe-specific systematics or modeling assumptions that warrant further investigation.
Although our current results indicate that the Hubble tension remains robust even after accounting for information loss, we cannot exclude the possibility that more complex projection effects, perhaps involving higher-order moments of parameter distributions or nonlinear parameter degeneracies, might emerge in a more diverse dataset combination. The Bayesian hierarchical framework we have developed is well-suited for such extensions, as it naturally accommodates additional complexity through appropriate prior specifications and model comparison tools.
In conclusion, while our analysis provides important insights into the nature of the Hubble tension using two foundational cosmological probes, a definitive assessment of the contribution of information loss to this tension will require a more comprehensive observational foundation. This represents a promising direction for future research that could further illuminate whether the Hubble tension ultimately demands new physics beyond the standard cosmological model, as discussed in recent works [3,4,7].

Author Contributions

A.L.B.R. was specifically responsible for the Conceptualization, Resources, and Supervision of the work. N.M.N.d.R. undertook the tasks of Methodology, Project administration, and Visualization. F.B.S.O.’s contributions include Software development and Validation. Furthermore, all listed authors collectively engaged in critical aspects such as Data curation, Formal analysis, Writing—original draft, and Writing—review and editing for the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CNPq, grant number 316317/2021-7.

Data Availability Statement

This research did not use direct observational data; instead, it employed cosmological parameter values as reported in the published literature.

Acknowledgments

We thank the anonymous reviewers for their helpful contributions to this paper. NMNR thanks the support of PROBOL-UESC.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
H 0: The Hubble constant
ΛCDM: Lambda Cold Dark Matter
CMB: Cosmic Microwave Background
SNIa: Type Ia supernovae
BAO: Baryon Acoustic Oscillations
TRGB: Tip of the Red Giant Branch
JWST: The James Webb Space Telescope
KL: Kullback–Leibler
FIM: Fisher Information Matrix
IL: Information Loss
TCI: Total Constraining Information
MCMC: Markov Chain Monte Carlo
N: Normal distribution
NUTS: No-U-Turn Sampler
HMC: Hamiltonian Monte Carlo

References

  1. Freedman, W.L.; Madore, B.F. Progress in direct measurements of the Hubble constant. J. Cosmol. Astropart. Phys. 2023, 11, 050. [Google Scholar] [CrossRef]
  2. Verde, L.; Schöneberg, N.; Gil-Marín, H. A Tale of Many H0. Annu. Rev. Astron. Astrophys. 2024, 62, 287–331. [Google Scholar] [CrossRef]
  3. Hu, J.P.; Wang, F.Y. Hubble tension: The evidence of new physics. Universe 2023, 9, 94. [Google Scholar] [CrossRef]
  4. Perivolaropoulos, L.; Skara, F. Challenges for ΛCDM: An update. New Astron. Rev. 2022, 95, 101659. [Google Scholar]
  5. Riess, A.G.; Yuan, W.; Macri, L.M.; Scolnic, D.; Brout, D.; Casertano, S.; Zheng, W. A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km s−1 Mpc−1 Uncertainty from the Hubble Space Telescope and the SH0ES Team. Astrophys. J. Lett. 2022, 934, L7. [Google Scholar] [CrossRef]
  6. Wang, B.; López-Corredoira, M.; Wei, J. The Hubble tension survey: A statistical analysis of the 2012–2022 measurements. Mon. Not. R. Astron. Soc. 2024, 527, 7692–7700. [Google Scholar] [CrossRef]
  7. Abdalla, E.; Abellán, G.F.; Aboubrahim, A.; Agnello, A.; Akarsu, Ö.; Akrami, Y.; Pettorino, V. Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies. J. High Energy Astrophys. 2022, 34, 49. [Google Scholar] [CrossRef]
  8. Aghanim, N. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 2020, 641, A6. [Google Scholar]
  9. Di Valentino, E.; Mena, O.; Pan, S.; Visinelli, L.; Yang, W.; Melchiorri, A.; Mota, D.F.; Riess, A.G.; Silk, J. In the realm of the Hubble tension—A review of solutions. Class. Quantum Grav. 2021, 38, 153001. [Google Scholar] [CrossRef]
  10. Ivanov, M.M.; Ali-Haïmoud, Y.; Lesgourgues, J. H0 tension or T0 tension? Phys. Rev. D. 2020, 102, 063515. [Google Scholar] [CrossRef]
  11. Pesce, D.W.; Braatz, J.A.; Reid, M.J.; Riess, A.G.; Scolnic, D.; Condon, J.J.; Gao, F.; Henkel, C.; Impellizzeri, C.M.V.; Kuo, C.Y. The Megamaser Cosmology Project. XIII. Combined Hubble Constant Constraints. Astrophys. J. Lett. 2020, 891, L1. [Google Scholar] [CrossRef]
  12. Moresco, M.; Jimenez, R.; Verde, L.; Cimatti, A.; Pozzetti, L.; Maraston, C.; Thomas, D. Constraining the time evolution of dark energy, curvature and neutrino properties with cosmic chronometers. J. Cosmol. Astropart. Phys. 2016, 12, 039. [Google Scholar] [CrossRef]
  13. Foidl, H.; Rindler-Daller, T. A proposal to improve the accuracy of cosmological observables and address the Hubble tension problem. Astron. Astrophys. 2024, 686, A210. [Google Scholar] [CrossRef]
  14. Freedman, W.L. Measurements of the Hubble Constant: Tensions in Perspective. Astrophys. J. 2021, 919, 16. [Google Scholar] [CrossRef]
  15. Freedman, W.L.; Madore, B.F.; Hatt, D.; Hoyt, T.J.; Jang, I.S.; Beaton, R.L.; Burns, C.R.; Lee, M.G.; Monson, A.J.; Neeley, J.R. The Carnegie-Chicago Hubble Program. VIII. An Independent Determination of the Hubble Constant Based on the Tip of the Red Giant Branch. Astrophys. J. 2019, 882, 34. [Google Scholar] [CrossRef]
  16. Jensen, J.B.; Blakeslee, J.P.; Cantiello, M.; Cowles, M.; An, G.S.; Tully, R.B.; Raimondo, G. The TRGB− SBF Project. III. Refining the HST Surface Brightness Fluctuation Distance Scale Calibration with JWST. Astrophys. J. 2019, 987, 87. [Google Scholar] [CrossRef]
  17. Knox, L.; Millea, M. Hubble constant hunter’s guide. Phys. Rev. D 2020, 101, 043533. [Google Scholar] [CrossRef]
  18. Riess, A.G.; Anand, G.S.; Yuan, W.; Casertano, S.; Dolphin, A.; Macri, L.M.; Breuval, L.; Scolnic, D.; Perrin, M.; Anderson, R.I. JWST Observations Reject Unrecognized Crowding of Cepheid Photometry as an Explanation for the Hubble Tension at 8σ Confidence. Astrophys. J. Lett. 2024, 962, L17. [Google Scholar] [CrossRef]
  19. Bernal, J.L.; Verde, L.; Riess, A.G. The trouble with H0. J. Cosmol. Astropart. Phys. 2016, 10, 019. [Google Scholar] [CrossRef]
  20. Jia, J.; Niu, J.; Qiang, D.; Wei, H. Alleviating the Hubble Tension with a Local Void and Transitions of the Absolute Magnitude. Phys. Rev. D 2025, 112, 043507. [Google Scholar] [CrossRef]
  21. Poulin, V.; Smith, T.L.; Karwal, T.; Kamionkowski, M. Early Dark Energy can Resolve the Hubble Tension. Phys. Rev. Lett. 2019, 122, 221301. [Google Scholar] [CrossRef]
  22. Desmond, H.; Jain, B.; Sakstein, J. A local resolution of the Hubble tension: The impact of screened fifth forces on the cosmic distance ladder. Phys. Rev. D 2019, 100, 043537. [Google Scholar] [CrossRef]
  23. Kreisch, C.D.; Cyr-Racine, F.; Doré, O. Neutrino puzzle: Anomalies, interactions, and cosmological tensions. Phys. Rev. D 2020, 101, 123505. [Google Scholar] [CrossRef]
  24. Schöneberg, N.; Murgia, R.; Gariazzo, S.; Nesseris, S.; Nunes, R.C.; Renzi, A.; Vagnozzi, S.; Di Valentino, E. The Hubble tension: A global fit to cosmological data. Phys. Rev. D 2022, 105, 103511. [Google Scholar]
  25. Di Valentino, E.; Said, J.L.; Riess, A.; Pollo, A.; Poulin, V.; Gómez-Valent, A.; Valls-Gabaud, D. The CosmoVerse White Paper: Addressing observational tensions in cosmology with systematics and fundamental physics. Phys. Dark Universe 2025, 49, 101965. [Google Scholar] [CrossRef]
  26. Feeney, S.M.; Mortlock, D.J.; Dalmasso, N. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder. Mon. Not. R. Astron. Soc. 2018, 476, 3861–3882. [Google Scholar] [CrossRef]
  27. Lemos, P.; Lee, E.; Efstathiou, G.; Gratton, S. Model independent H(z) reconstruction using the cosmic inverse distance ladder. Mon. Not. R. Astron. Soc. 2019, 483, 4803–4810. [Google Scholar] [CrossRef]
  28. Handley, W. Curvature tension: Evidence for a closed universe. Phys. Rev. D 2021, 103, L041301. [Google Scholar] [CrossRef]
  29. Liu, G.; Wang, Y.; Zhao, W. Testing the consistency of early and late cosmological parameters with BAO and CMB data. Phys. Lett. B 2024, 854, 138717. [Google Scholar] [CrossRef]
  30. Han, T.; Jin, S.; Zhang, J.; Zhang, X. A comprehensive forecast for cosmological parameter estimation using joint observations of gravitational waves and short γ-ray bursts. Eur. Phys. J. 2024, 84, 663. [Google Scholar] [CrossRef]
Figure 3. Percentage contributions of real tension, measurement error, and information loss to the total variance of the observed tension.
Figure 4. Corner plot depicting the approximate joint and marginal posterior distributions of the key parameters in our simulated Bayesian hierarchical model for the Hubble tension decomposition.