A Systematic Literature Review of Machine Learning Techniques for Observational Constraints in Cosmology
Abstract
1. Introduction
2. Theoretical Background
2.1. Cosmology
- Type Ia Supernovae (SNe Ia): Supernovae (SNe) are highly energetic explosions of certain stars and play an important role in astrophysics and cosmology because they have been used as cosmic distance indicators. In particular, SNe Ia are considered standard candles for measuring the geometry and late-time dynamics of the Universe [18]. In fact, between 1998 and 1999, the independent High-z Supernova Search Team [12] and Supernova Cosmology Project [13] presented results suggesting an acceleration of the Universe's expansion using SNe Ia data. This behavior is now confirmed by several cosmological observations, establishing that the Universe is currently undergoing an accelerated expansion, which began recently in cosmic terms [19]; SNe Ia data are therefore widely used to test the capability of alternatives to the ΛCDM model in describing the cosmological background. The sample used by the High-z Supernova Search Team consisted of 50 SNe Ia data points, while the sample of the Supernova Cosmology Project consisted of 60 SNe Ia data points. Nowadays, the samples of SNe Ia observations have grown in both size and redshift range, with the most recent being the Pantheon sample [20], consisting of 1048 SNe Ia data points, and the Pantheon+ sample [21], with 1701 SNe Ia data points.
- Observational Hubble Parameter Data (OHD): Even though SNe Ia data provide consistent evidence for a transition epoch in cosmic history where the expansion rate of the Universe changes, it is important to highlight that this conclusion is obtained in a model-dependent way [19]. The expansion rate of the Universe can be studied in a model-independent way through direct observations of the Hubble parameter. To date, the most complete OHD sample was compiled by Magaña et al. [22] and consists of 51 data points, of which 31 are obtained using the Differential Age method [23], while the remaining 20 come from baryon acoustic oscillation measurements [22].
- Baryon Acoustic Oscillations (BAOs): BAOs are the footprints of the interactions between baryons and the relativistic plasma in the epoch before recombination (the epoch in the early Universe when electrons and protons combined to form neutral hydrogen) [24]. There is a significant fraction of baryons in the Universe, and the cosmological theory predicts acoustic oscillations in the plasma that left “imprints” at the current time in the power spectrum of non-relativistic matter [25,26]. Many collaborations have provided BAO measurements, like 6dFGS [27], SDSS-MGS [28], BOSS-DR12 [29], and the Dark Energy Spectroscopic Instrument (DESI) [30] to mention a few.
- Cosmic Microwave Background (CMB): Since the discovery of the CMB in 1965 by Penzias and Wilson [31], the different acoustic peaks in the anisotropy power spectrum have become the most robust observational evidence for testing cosmological models. In this sense, the different acoustic peaks provide information about the matter content and curvature of the Universe [32,33], and they have been measured by different satellites like the Wilkinson Microwave Anisotropy Probe (WMAP) [34] and Planck [35].
- Large-Scale Structure (LSS): LSS is the study of the distribution of galaxies in the Universe at large scales (larger than the scale of a galaxy group) [36]. At small scales, gravity concentrates matter to form gas clouds, then stars, and finally galaxies. At large scales, galaxies also cluster in patterns called “the cosmic web”, which is seeded by fluctuations in the early Universe. This distribution has been quantified by various surveys, such as the 2-degree Field Galaxy Redshift Survey (2dFGRS) [37] and the Sloan Digital Sky Survey (SDSS) [38].
- Gravitational Lensing (GL): When a background object (the source) is lensed by the gravitational field of an intervening massive body (the lens), multiple images are generated. The light rays emitted from the source therefore take different paths through space–time at the different image positions, arriving at the observer at different times. This time delay depends on the mass distribution in the lens and along the line of sight, as well as on the cosmological parameters. For this kind of data, we can highlight the strong lensing measurements of the H0 Lenses in COSMOGRAIL’s Wellspring (H0LiCOW) collaboration [39], which consist of six gravitationally lensed quasars with measured time delays.
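To make the role of these probes concrete, the following minimal Python sketch shows how the Hubble parameter (constrained by OHD) and the distance modulus (constrained by SNe Ia) both derive from the same background expansion. It assumes flat ΛCDM with illustrative parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3), not values taken from any survey discussed here:

```python
import math

def hubble(z, h0=70.0, om=0.3):
    """Hubble parameter H(z) in km/s/Mpc for flat LambdaCDM (illustrative values)."""
    return h0 * math.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def luminosity_distance(z, h0=70.0, om=0.3, steps=1000):
    """Luminosity distance in Mpc via trapezoidal integration of c / H(z')."""
    c = 299792.458  # speed of light, km/s
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        integral += 0.5 * dz * (1.0 / hubble(z1, h0, om) + 1.0 / hubble(z2, h0, om))
    return (1.0 + z) * c * integral

def distance_modulus(z, **kw):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * math.log10(luminosity_distance(z, **kw)) + 25.0
```

Fitting a cosmological model to OHD or SNe Ia data amounts to comparing `hubble(z)` or `distance_modulus(z)` against the measured points while varying the free parameters.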
2.2. Machine Learning
3. Related Works
4. Research Methodology
1. Planning the Review: This initial stage involves defining the research questions and objectives that steer the review, setting a clear framework for the study.
2. Executing the Review: Here, the identification and selection of pertinent studies occur, along with the application of filters to ensure data quality and relevance. Information extraction is also pivotal during this phase.
3. Reporting the Review: Finally, the findings amassed throughout the review process are synthesized and presented cohesively, culminating in a succinct and organized presentation of the research outcomes.
4.1. Planning the Review
- RQ1: What ML approaches are most frequently used in the field of cosmology to adjust the free parameters of cosmological models to observational data?
- RQ2: To what extent does ML contribute to the field of fitting cosmological models to observational data, particularly in enhancing the efficiency of Bayesian inference algorithms and other fitting techniques?
- RQ3: What are the existing research gaps in the utilization of ML for fitting cosmological models, and what are the opportunities for future research to address these gaps and enhance our understanding of observational cosmology?
- RQ4: What types of training data are commonly used in ML approaches applied to fitting cosmological models to observational data, and what methods are employed to obtain and process this data?
- IC1: Research articles written in English; articles in other languages may be considered if relevant (with access to translation resources).
- IC2: Research works published from 2014 to 2024, selected based on relevance to the research topic.
- IC3: Articles published in conference/workshop proceedings, academic journals, and as thesis dissertations to encompass diverse scholarly sources.
- IC4: Complete (full-text) research articles to ensure comprehensive review.
- EC1: Exclude duplicate articles, ensuring data integrity.
- EC2: Exclude articles that are not focused on ML techniques applied to improve parameter estimation in cosmology.
- EC3: Exclude articles that are not aligned with the goals of the SLR, such as those describing ML techniques that do not improve Bayesian inference for parameter estimation.
- This SLR followed a pre-specified protocol aligned with PRISMA 2020; the full protocol is publicly archived on Zenodo [72].
4.2. Executing the Review
4.2.1. Exploration and Concluding Selection of Reviewed Materials
4.2.2. Data Extraction Strategy
- Topical Relationship: This theme explores the thematic coherence among the reviewed articles. It is evaluated by the recurrence and relevance of keywords and the thematic correlation of titles, reflecting their collective contribution to the field of cosmology. This theme encompasses the following questions: (i) How often do keywords appear across different articles? (ii) Are the titles indicative of a common thematic focus?
- Databases: This theme investigates the datasets utilized in ML for cosmological model fitting. It examines the types of training data, the methodologies for data collection, and the techniques for processing these data to enhance model accuracy and reliability. This theme encompasses the following questions: (i) What training datasets are prevalent in ML studies for cosmology? (ii) What are the common methods for data acquisition and preprocessing?
- Machine Learning Models: This section delves into the specific models and approaches that the field currently prioritizes, looking for patterns or trends in model selection and application. This theme encompasses the following questions: (i) Which ML models are most commonly referenced in the literature? (ii) Can we identify trends or preferences in the use of certain ML models for cosmological studies?
- Research Objectives: The focus here is on understanding the primary goals of the research articles and how these align with the broader objectives of the field. It also assesses the structure and clarity with which these objectives are presented. This theme encompasses the following questions: (i) What are the primary objectives outlined in the articles? (ii) How are the articles’ methods and results situated within the broader context of cosmological research? (iii) Is there a comparative analysis between the presented research and other studies within the field?
- Year and Type of Publication: This theme catalogs the articles based on their publication year and the medium of publication, which provides insight into the evolution of the field and the dissemination of findings. This theme encompasses the following questions: (i) When were the key articles in the domain published? (ii) Are the articles predominantly from journals, conferences, workshops, or academic theses?
4.3. Reporting the Review
5. Results
5.1. Topical Relationship
5.2. Databases
5.3. Machine Learning Models
5.4. Research Objectives
1. Some improvements for cosmological parameter estimation are studied in [77], reporting a 20-times speedup in reproducing the correct contours compared to the MCMC case. Regarding the DL model, a densely connected neural network with three hidden layers, each consisting of 1024 neurons and using ReLU activation functions, was used. This architecture results in a number of trainable parameters on the order of millions due to the full connectivity between layers.
2. In Ref. [94], a substantial reduction in computing time is reported in comparison with classical methods for parameter estimation, which accelerates performance but is less precise than the standard MCMC method. In the study, a BNN with the Visual Geometry Group (VGG) architecture was used with a customized calibration method.
3. In Ref. [85], the authors show a reduction in computing times, producing excellent performance in parameter estimation compared with the MCMC for the ΛCDM model. A detailed explanation of the hyperparameters and the steps used to train the model is also given. In particular, an NN with three hidden layers (reducing the number of neurons in each layer), together with the ReLU activation function, is used.
4. The number of executions of the Einstein–Boltzmann solvers for the CMB data is reduced in Ref. [78] in comparison with the standard procedure, which saves computational resources, translating into faster computations and avoiding the bottleneck in the solvers for the ΛCDM model extended with massive neutrinos. From an ML point of view, three NNs are used, made up of a combination of densely connected layers and convolutional layers. Convolutional layers are generally used in image classification tasks, but in this particular case, they are used to reduce the number of neurons instead of densely connecting the whole network. Along with the ReLU activation function, Leaky ReLU is used, a variant of ReLU that outputs small negative values instead of zero for negative inputs.
5. The authors of Ref. [79] report that parameter estimation with the MCMC is more efficient with the solutions provided by an ANN, improving numerical integration in the ΛCDM model; the Chevallier–Polarski–Linder parametric dark energy model; a quintessence model with exponential potential; and the Hu–Sawicki model, estimating an error of 1% in the region of the parameter space corresponding to 95% confidence for all models.
6. A new method for parameter estimation that is up to 8 times faster than the standard procedure is presented in Ref. [84].
7. In Ref. [76], using NN techniques, the authors accelerate the estimation of cosmological parameters, taking 10 h compared with the 5 months required by the standard Boltzmann codes. Interestingly, the inferred parameter values are similar to those of the standard computation for the ΛCDM model.
8. Similarly, in Ref. [91], the authors achieve high precision, with only a small difference compared with the results obtained by the Cosmic Linear Anisotropy Solving System (CLASS), while being up to 2 times faster than this standard procedure.
9. Finally, in Ref. [83], the authors quantify the deviations between the MCMC method and the ML technique for each free parameter of the ΛCDM model and an extension of it.
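The claim above that a fully connected network with three 1024-neuron hidden layers has millions of trainable parameters is easy to verify with a short sketch. The input and output dimensions below are hypothetical; only the three hidden layers follow the architecture described for Ref. [77]:

```python
def dense_params(layer_sizes):
    """Trainable parameters of a fully connected network: weights plus biases per layer."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Hypothetical dimensions: 6 cosmological parameters in, three hidden layers of
# 1024 neurons, and 1000 output values (e.g., an emulated spectrum).
arch = [6, 1024, 1024, 1024, 1000]
n = dense_params(arch)  # on the order of millions, dominated by the 1024x1024 blocks
```

Even with a tiny input layer, the two 1024-to-1024 blocks alone contribute roughly 2.1 million weights, confirming the "order of millions" figure.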
1. Improved parameter estimation with ML techniques was applied to address the H0 tension in Ref. [96]. In particular, through a BML method, the authors studied a Universe dominated by one fluid with a generalized equation of state.
2. Similarly, the authors of Ref. [97] apply a BML method to a model with a cosmological constant, baryonic matter, and barotropic dark matter, and to a model with barotropic dark energy, baryonic matter, and barotropic dark matter.
3. In Ref. [75], the authors show that the H0 tension can be alleviated using a BNN in f(R)-modified gravity, specifically in an exponential f(R) model.
4. Finally, in Ref. [98], the authors probe the opacity of the Universe through a BNN in the ΛCDM and xCDM models, showing that the Universe is not completely transparent, which also impacts the H0 tension. Regarding the implementation of the ML techniques, all scenarios use PyMC3, a probabilistic programming framework in Python that provides the necessary tools for applying the ML approach to probabilistic tasks.
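Since every entry above benchmarks against MCMC, a minimal Metropolis–Hastings baseline may help fix ideas. This is an illustrative sketch, not any reviewed implementation: the mock H(z)-like data points are invented (generated around H0 = 70 in flat ΛCDM), and only a single parameter is sampled:

```python
import math
import random

def log_like(h0, data, om=0.3):
    """Gaussian log-likelihood of mock H(z) measurements for flat LambdaCDM."""
    ll = 0.0
    for z, hz, sigma in data:
        model = h0 * math.sqrt(om * (1 + z) ** 3 + (1 - om))
        ll += -0.5 * ((hz - model) / sigma) ** 2
    return ll

def metropolis(data, n_steps=20000, start=60.0, step=1.0, seed=1):
    """Plain 1-D Metropolis sampler with a Gaussian proposal."""
    rng = random.Random(seed)
    chain, h0, ll = [], start, log_like(start, data)
    for _ in range(n_steps):
        prop = h0 + rng.gauss(0.0, step)
        ll_prop = log_like(prop, data)
        if math.log(rng.random()) < ll_prop - ll:  # accept/reject step
            h0, ll = prop, ll_prop
        chain.append(h0)
    return chain

# Mock OHD-like points (z, H, sigma), invented around h0 = 70 for illustration.
mock = [(0.1, 73.0, 3.0), (0.5, 92.0, 4.0), (1.0, 123.0, 5.0), (1.5, 162.0, 6.0)]
chain = metropolis(mock)
h0_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in, then average
```

The ML approaches reviewed above either replace the expensive `log_like` evaluation with a cheap surrogate or supply better priors/proposals to a sampler of exactly this kind.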
5.5. Year and Type of Publication
6. Review Findings and Future Research Directions
6.1. Main Outcomes
- Topical Relationship: As was presented in Section 5.1, the analysis of titles and keywords confirms a clear thematic alignment across the selected studies. The recurring presence of terms such as “cosmology,” “astrophysics,” “parameters,” “neural,” and “Bayesian” illustrates the strong focus on applying machine learning techniques to fundamental cosmological challenges. These challenges predominantly involve the estimation of free parameters and the analysis of nongalactic observational data. Rather than revealing broad interdisciplinary diffusion, the thematic patterns suggest that the application of ML in cosmology remains largely grounded in the physics and astrophysics domains. This concentration highlights both the relevance and the early stage of this interdisciplinary field, where ML methods are still being explored and adapted to address domain-specific problems. The prominence of terms related to data analysis and inference further reinforces the conclusion that ML is primarily being employed to enhance the efficiency and accuracy of traditional model-fitting techniques within established cosmological frameworks.
- Databases: Following Section 5.2, the main datasets used in the reviewed articles are SNe Ia, OHD, BAO, CMB, LSS, GL, and GCD. For the twenty-seven articles considered in the review, the most used databases are CMB, followed by BAO, and then SNe Ia and OHD, while GL, LSS, and GCD are used far less often. It is interesting to note that CMB is the most used dataset, since it involves the most computationally expensive data. From the data samples, it can be seen that Planck 2018, Pantheon, and Cosmic Chronometers are the most used for CMB, SNe Ia, and OHD, respectively.
- Machine Learning Models: In general, for both ML and DL models, it was found in Section 5.3 that the majority of the reviewed articles use NNs, while none of the other encountered models comes close in usage. Moreover, when considering the technique type alone, DL significantly surpasses traditional ML. This trend is probably due to the large amount of data and the number of parameters to be processed, for which DL techniques are often better suited thanks to their capacity to learn patterns at greater depth. This observation is further reinforced by the fact that CMB, a rather large and complex database, is primarily handled using NNs.
- Research Objectives: A remarkable result of our SLR is presented in Section 5.4: most of the papers focus on improving parameter estimation, while the rest focus on applications of improved cosmological constraints through ML techniques to solve cosmological problems. In this line, the most studied problem is the H0 tension in different cosmological models. Among the improvements, there are varied results, such as enhancements in convergence, accelerations in the performance of inferences, and more efficient solutions of equations using ML techniques, but these always come with less precision compared to the MCMC method: slightly lower precision is reported in Ref. [91] for ΛCDM, and small deviations were observed for some cosmological parameters in Ref. [83]. Concerning the focus on different databases, most of the CMB-related articles address the improvement of parameter estimation, which is expected, as CMB is the most computationally expensive data; the same holds for most of the OHD, SNe Ia, and BAO articles. On the other hand, it is interesting that NN is used only in the improvement of cosmological constraints, while BML is used only in applications. In this line, NN is widely used in the reviewed papers across all databases (CMB, BAO, OHD, GCD, and LSS), with the different catalogs mentioned before being used 31 times in total. Meanwhile, the second most used technique is GP, which was applied only 6 times across all databases.
- Year and Type of Publication: Finally, from Section 5.5, we can see that recent years show an increase in the number of available articles using ML techniques to improve parameter estimation in cosmology or applying them to cosmological problems, with most of the reviewed papers published in 2022 and 2023. Most of the articles are published in prestigious journals such as JCAP and PRD, which are Q1 journals in the area of physics with a high impact factor. In this line, it is important to note that the majority of the reviewed papers are available in the online arXiv repository, giving insights into the usefulness of this repository in the area of cosmology. We also report the time (in months) between first availability on arXiv and publication in a peer-reviewed venue as a proxy for the pace at which results are vetted; this helps contextualize the corpus's reliance on preprints, the maturity of the literature, and the appropriate level of caution when interpreting findings. The analysis reports mean times for JCAP, PRD, and MNRAS, with ApJS showing a mean time comparable to JCAP, while EPJC and Galaxies take 7 and 5 months, respectively. An outlier in the dataset is Symmetry, where one article took 6 years to transition from arXiv to formal publication, likely due to substantial post-submission modifications and topic shifts rather than delays inherent to the journal itself.
6.2. Research Gaps and Recommendations
- (M1)
- Improvement vs. Precision for ML Techniques: Various ML techniques, including BNN, GP, and DL models, have been applied to cosmological data analysis, showing their potential to improve parameter estimation and to handle large, complex datasets. However, a central tension emerges between computational acceleration and the fidelity/calibration of the resulting constraints. In our corpus, methods that deliver the largest speedups sometimes exhibit wider credible regions or miscalibrated posteriors relative to classical baselines, highlighting that efficiency gains do not automatically guarantee precision. The cosmological consequences of using faster but less precise surrogates are concrete: posteriors may be biased, credible regions under- or over-estimated, and thus key scientific claims distorted, e.g., tensions (such as the H0 tension) artificially inflated or masked, Bayes factors and model selection misreported, and cross-probe consistency (CMB/BAO/SNe/LSS) mischaracterized. There are, nevertheless, regimes where prioritizing speed is justified: when repeated forward evaluations dominate wall-time (e.g., emulating Einstein–Boltzmann pipelines inside MCMC), during rapid exploratory scans in high-dimensional spaces, or for triage/operational tasks. By contrast, precision must take precedence for final parameter estimations intended for publication, tension quantification across probes, and model comparison, that is, settings where coverage and bias directly affect scientific validity. In practice, speed-first surrogates (often NN-based) are valuable for amortizing computation, provided that they are accompanied by explicit uncertainty calibration and validation against classical pipelines. BNN/GP approaches, while costlier, tend to offer stronger uncertainty calibration when their assumptions hold. Recommendations: We recommend reporting both efficiency (speedup factor, wall-time, ESS/s) and calibration/accuracy (coverage, bias, posterior width, and posterior predictive checks/PIT) to make the trade-off explicit; validating surrogates against Einstein–Boltzmann solvers or exact likelihoods at checkpoints; adopting physics-aware inductive biases where possible (e.g., spherical/equivariant layers, operator-learning surrogates); and using hybrid pipelines that combine speed-first emulation with precision-first verification (e.g., periodic exact re-evaluation, proposal preconditioning, stress tests under distribution shift). These practices mitigate the risk that acceleration comes at the expense of reliable cosmological constraints.
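A minimal version of the coverage check recommended above can be sketched as follows. The synthetic Gaussian posteriors are purely illustrative, standing in for posteriors produced by an ML surrogate; a well-calibrated method should cover the truth at roughly the nominal rate:

```python
import random

def empirical_coverage(true_values, posterior_samples, level=0.68):
    """Fraction of cases whose central credible interval contains the true value."""
    hits = 0
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    for truth, samples in zip(true_values, posterior_samples):
        s = sorted(samples)
        lo = s[int(lo_q * len(s))]
        hi = s[int(hi_q * len(s)) - 1]
        hits += lo <= truth <= hi
    return hits / len(true_values)

# Synthetic check: calibrated Gaussian posteriors should cover ~68% of truths.
rng = random.Random(0)
truths, posts = [], []
for _ in range(500):
    mu = rng.uniform(-5, 5)
    truths.append(mu)
    center = mu + rng.gauss(0, 1)          # noisy point estimate of the truth
    posts.append([center + rng.gauss(0, 1) for _ in range(400)])  # matching width
cov = empirical_coverage(truths, posts)    # expected to be close to 0.68
```

A surrogate whose reported coverage falls well below the nominal level is overconfident, which is precisely the failure mode that can artificially inflate or mask tensions.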
- (M2)
- Hybrid Approaches: Combining Machine Learning and MCMC Methods: Hybrid methodologies emerge from the reviewed papers that combine ML techniques with the traditional MCMC method. One combination is the use of ML techniques to solve differential equations. For example, in Refs. [79,84], NNs are employed to solve the cosmological background equations, while in Ref. [89], NNs are used to solve the Einstein–Boltzmann equations. In both cases, the solutions are used as input for the MCMC method, accelerating the computation while maintaining precision. These examples demonstrate the versatility of neural networks in efficiently handling the computationally expensive components of cosmological modeling, paving the way for their integration into traditional inference workflows. This highlights the importance of not only improving the MCMC method itself but also investing resources in optimizing the most time-consuming aspects of the procedure involving CMB data, i.e., the development of more efficient Einstein–Boltzmann solvers. Another combination is using the results of ML techniques as a prior, as in Refs. [94,95]. In these works, the results obtained from BNNs are used as prior information/input for the MCMC method, accelerating the computations with similar precision as in the classical implementation of MCMC. This approach is particularly effective in reducing the dimensionality of the parameter space, allowing MCMC methods to focus on fine-tuning within a more constrained region. Such integration not only reduces computational overhead but also enhances the stability of the inference process in high-dimensional scenarios. Despite these advances, the adoption of hybrid methodologies is not without challenges. Ensuring compatibility between ML-generated outputs and MCMC implementations requires careful validation, especially when physical constraints must be preserved. Additionally, the interpretability of ML-based priors remains an area of concern, as they can obscure the underlying assumptions driving the parameter inference. Addressing these challenges will be critical to ensuring the robustness and reliability of hybrid approaches. Based on the above, our recommendation is to explore these hybrid methodologies, since they accelerate the computations while yielding uncertainties similar to the traditional MCMC method. Future research should focus on standardizing frameworks for integrating ML techniques with MCMC, establishing benchmarks to compare hybrid and traditional methods, and exploring the potential of emerging ML techniques, such as physics-informed neural networks (PINNs), to further optimize cosmological computations. These hybrid methods, which balance precision and efficiency, complement the strengths of DL discussed in (M4), particularly in analyzing high-dimensional datasets and addressing complex cosmological problems; by integrating the scalability and adaptability of DL models into hybrid pipelines, researchers could achieve significant gains in both computational performance and accuracy. These efforts will help realize the full potential of hybrid methods in advancing precision cosmology.
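The emulation strategy behind these hybrid pipelines can be sketched in miniature: an expensive solver call is replaced by a surrogate trained on a coarse grid, with periodic exact re-evaluation (as recommended in (M1)) to bound the surrogate error. The `GridEmulator` class and the closed-form stand-in for the solver below are our own illustrative constructions, not code from the reviewed works:

```python
import math

def expensive_hubble(z, h0=70.0, om=0.3):
    """Stand-in for an expensive solver call (here just a cheap closed form)."""
    return h0 * math.sqrt(om * (1 + z) ** 3 + (1 - om))

class GridEmulator:
    """Linear-interpolation surrogate built from a coarse grid of solver calls."""
    def __init__(self, func, z_max=2.0, n=41):
        self.dz = z_max / (n - 1)
        self.grid = [func(i * self.dz) for i in range(n)]  # "training" evaluations

    def __call__(self, z):
        i = min(int(z / self.dz), len(self.grid) - 2)
        t = z / self.dz - i
        return (1 - t) * self.grid[i] + t * self.grid[i + 1]

emu = GridEmulator(expensive_hubble)
# Periodic exact re-evaluation quantifies the surrogate error across the range.
max_err = max(abs(emu(z / 100) - expensive_hubble(z / 100)) for z in range(200))
```

Inside an MCMC loop, each likelihood evaluation would call `emu` instead of the solver; the measured `max_err` is what justifies (or forbids) trusting the accelerated posterior.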
- (M3)
- Inconsistent Reporting Standards in Model Training: The evidence gathered from the training phases of different models lacks a consistent standard across studies, leading to variability in reporting. Some papers focus on the theoretical aspects, detailing architectural choices and modifications [81,82,83,90], while others provide comprehensive descriptions of experimental setups, such as the libraries used, programming languages, and environmental contexts [94,96,97,98,99]. A subset of studies presents detailed visualizations of model architectures, including layer interconnections and input–output flows [77,78,88,92], whereas others offer step-by-step guidelines for training procedures [85]. Despite these contributions, there is no unified guideline for documenting the training process, leading to substantial heterogeneity in the level of detail provided. Significant gaps remain in the reporting of critical elements such as dataset distribution (how data are divided into subsets for training, validation, and testing), training duration (the total time spent training a model), computing environments (e.g., the hardware or cloud infrastructure used), and hyperparameters (configurations that control the training process, such as the learning rate, which determines how much the model adjusts its parameters in each iteration, and activation functions, which define how signals are passed between layers in neural networks). Equally crucial are other factors that enhance comparability, such as the precise evaluation metrics used, a thorough description of the training pipeline, and the rationale behind specific parameter choices. The absence of these elements creates significant obstacles for meta-analysis, as variability in reporting undermines the comparability of results across studies and complicates efforts to identify the most effective models or hyperparameter settings. Furthermore, the lack of comprehensive context makes it challenging to replicate experiments under identical conditions. For example, some studies fail to describe the computing hardware used (e.g., GPUs, CPUs, cloud infrastructure), which significantly impacts training performance and cost-effectiveness. In fields like cosmology, where data sizes and computational demands are substantial, these details are critical for assessing the feasibility of deploying similar models in real-world scenarios; without them, it is also difficult to evaluate the representativeness and generalizability of the models. This variability raises questions about the reliability and reproducibility of findings, ultimately limiting their utility in advancing the field. As a result, this lack of standardization highlights the urgent need for a more structured approach to documenting the training phases of machine learning models. Developing and adopting reporting frameworks that ensure transparency and completeness, akin to the PRISMA guidelines for systematic reviews, would greatly enhance reproducibility, reliability, and the cumulative progress of machine learning in cosmology.
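As a concrete, purely suggestive illustration of what a standardized training report might contain, the following sketch serializes the elements listed above into a machine-readable record. All field names and values are hypothetical (including the parameter count and wall time); this is our proposal, not an established standard:

```python
import json

# Hypothetical training report covering the elements whose absence is noted above:
# data splits, hyperparameters, hardware, duration, and evaluation metrics.
training_report = {
    "model": {"architecture": "MLP", "hidden_layers": [1024, 1024, 1024],
              "activation": "ReLU", "trainable_parameters": 3131368},
    "data": {"dataset": "mock CMB spectra",
             "split": {"train": 0.8, "val": 0.1, "test": 0.1},
             "preprocessing": "standardized inputs (zero mean, unit variance)"},
    "training": {"optimizer": "Adam", "learning_rate": 1e-3, "epochs": 200,
                 "batch_size": 256, "wall_time_hours": 10.0},
    "environment": {"hardware": "1x GPU (model unspecified)",
                    "framework": "any DL library"},
    "evaluation": {"metrics": ["posterior coverage", "bias", "speedup_vs_mcmc"]},
}
serialized = json.dumps(training_report, indent=2)  # archivable alongside the paper
```

Publishing such a record with each study would make the meta-analyses discussed here straightforward rather than a reconstruction exercise.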
- (M4)
- Deep Learning vs. Traditional Machine Learning in Cosmological Applications: Our review confirms that DL models generally outperform traditional ML methods, particularly when processing large datasets with numerous variables. NNs excel in learning intricate patterns through their deep architectures, offering extensive customization to suit various cosmological applications. The findings reveal a marked preference for DL algorithms across the reviewed studies, suggesting that traditional ML models, while useful for preliminary analysis or feature selection, may struggle with the complexities of cosmological data. Specifically, DL methods such as CNNs and RNNs demonstrate superior performance in tasks requiring spatial and temporal pattern recognition, such as analyzing sky surveys and time-series data from telescopes. These strengths make DL particularly advantageous for applications like detecting gravitational waves, mapping dark matter distributions, and estimating cosmological parameters. Additionally, the flexibility of NNs facilitates essential adaptations for cosmological inference, enabling precise parameter estimation and more effective handling of high-dimensional observational data. For example, custom loss functions tailored to cosmological objectives (e.g., minimizing deviations in parameter estimation) and the integration of physical constraints within NN architectures allow for more accurate modeling of astrophysical phenomena. Moreover, the scalability of DL models enables them to handle the exponential growth of data from next-generation surveys, such as the Vera Rubin Observatory and the Euclid mission, where traditional ML methods often falter. Across the surveyed studies, architectural choices are largely adapted from computer science (e.g., CNN/U-Net variants for map-like data, MLPs for emulators), with only limited explicit encoding of cosmological inductive biases. Notable exceptions include spherical/equivariant convolutions for CMB/weak-lensing maps and operator-learning surrogates for Einstein–Boltzmann pipelines. This pattern helps explain why the largest speedups sometimes coexist with calibration issues: acceleration is prioritized, while symmetry constraints and uncertainty modeling are not always built in. A practical direction is to combine physics-aware architectures with explicit calibration checks (coverage/PIT) and comparisons to classical baselines so that efficiency gains do not compromise constraint reliability. However, the increased computational demands and the risk of overfitting in DL models highlight the importance of establishing clear evaluation metrics and benchmarks to assess their efficiency and reliability in cosmological contexts. Studies should also explore hybrid approaches that combine the strengths of ML and DL (for instance, using ML for feature extraction and DL for parameter inference) to optimize resource utilization while maintaining accuracy. Future research could benefit from clearly defined guidelines for when ML methods are appropriate versus when the added complexity of DL is justified. These guidelines should consider not only the size and complexity of the datasets but also factors such as computational resources, interpretability needs, and the specific objectives of the study. Developing such a framework would enable researchers to make informed decisions, maximizing the scientific impact of their analyses while minimizing resource expenditure.
- (M5)
- Interpretability and Physical Faithfulness (XAI): A recurring concern in precision cosmology is trust in “black-box” models. Beyond aggregate performance, we need evidence that models learn physics-relevant structure rather than survey- or instrument-specific artifacts. In the reviewed literature, XAI is unevenly reported. Useful practices include the following: (i) attribution and saliency analyses on map-like inputs (e.g., integrated gradients, Grad-CAM), with checks that highlighted regions align with physically meaningful features; (ii) stability tests of explanations under small input perturbations and across instruments/surveys to detect spurious correlations; (iii) counterfactual “injection” and ablation tests using simulations (turning specific effects on and off) to verify causal sensitivity to the intended signal; (iv) enforcing inductive biases via symmetry-aware architectures (e.g., spherical/equivariant layers) and operator-learning surrogates; and (v) posterior diagnostics (coverage, posterior predictive checks, and simulation-based calibration) to ensure that uncertainty is not only reported but also calibrated. Recommendations: report at least one attribution method with stability checks; include cross-survey/domain-shift tests; provide physics “unit tests” via controlled simulation injections; prefer architectures that encode known symmetries; and release code to reproduce explanations and diagnostics alongside classical-baseline comparisons.
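Practice (ii), stability of explanations under small input perturbations, can be illustrated with a minimal numpy sketch. The "model" here is a toy nonlinear scorer standing in for a trained network, saliency is a finite-difference gradient, and stability is measured as the correlation between saliency maps before and after perturbing the input; none of this reflects any specific reviewed implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": a fixed random nonlinear scorer on an 8x8 "map",
# standing in for a trained network.
W = rng.normal(size=(8, 8))

def model(x):
    return float(np.sum(W * np.tanh(x)))

def saliency(x, eps=1e-4):
    # Finite-difference gradient of the score w.r.t. each pixel.
    g = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            d = np.zeros_like(x)
            d[i, j] = eps
            g[i, j] = (model(x + d) - model(x - d)) / (2 * eps)
    return g

x = rng.normal(size=(8, 8))
s0 = saliency(x)

# Stability test: explanations should barely change under small input
# perturbations; a large drop in correlation flags fragile attributions.
corrs = []
for _ in range(20):
    s1 = saliency(x + 0.01 * rng.normal(size=x.shape))
    corrs.append(np.corrcoef(s0.ravel(), s1.ravel())[0, 1])
print(f"min saliency correlation under perturbation: {min(corrs):.3f}")
```

The same correlation-under-perturbation diagnostic applies unchanged to gradient-based attributions of a real network; reporting its minimum over many perturbations is a cheap, reproducible stability summary.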
- (A1)
- ML Techniques Applied to Cosmological Problems: The reviewed papers considered two scenarios: improvement of the algorithm and application to a cosmological problem. In the first case, the exploration of the algorithm focused on refining the parameter estimation process through ML techniques. In the second case, the ML technique was applied directly to the H0 tension. This tension refers to the discrepancy between the H0 value inferred from early-Universe observations (such as the CMB) and local measurements of the Hubble constant. The problem was studied in the context of different cosmological scenarios alternative to ΛCDM, such as a Universe dominated by a single fluid with a general barotropic equation of state [96] or a Universe with barotropic dark energy and dark matter [97]. Along the same line, the problem was studied to probe the opacity of the Universe with an xCDM cosmological model, with implications for the H0 tension [98]. While the reviewed papers focus primarily on this specific problem, there is a significant gap in exploring other cosmological issues that could benefit from these ML techniques. For instance, the cosmological constant problem [100], concerning the large discrepancy between the theoretical value of the cosmological constant and the observed value (which is related to dark energy), could be treated with the ML techniques used in the reviewed articles, which could help to test alternative models to ΛCDM that might solve this problem. On the other hand, the coincidence problem [16], related to the fact that the energy densities of dark matter and dark energy are of the same order of magnitude at the current time (which can be seen as a fine-tuning problem), could also be explored with ML techniques to test alternative models that explain this issue. Another cosmological issue is the possibility of a warm dark matter component in the Universe [101], an alternative to the cold dark matter considered in ΛCDM; this could leave imprints in the LSS and the CMB that could be analyzed with ML techniques and tested in different cosmological scenarios. Finally, to mention a few, phantom dark energy [102], which accelerates the expansion of the Universe and has an unknown origin, could also be tested in different cosmological scenarios with ML techniques. It is important to emphasize that all the aforementioned problems could benefit from ML-based parameter estimation, thereby helping to alleviate the tensions within the ΛCDM model; other problems may likewise benefit from these parameter estimation results.
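The coincidence problem mentioned above has a simple quantitative face in the ΛCDM background: the dark matter to dark energy density ratio scales as (Ω_m0/Ω_Λ0)(1+z)^3, so it is of order unity only near the present epoch. A short sketch with Planck-like density parameters (values assumed here purely for illustration):

```python
import numpy as np

# Planck-like background density parameters (assumed for illustration).
Omega_m0, Omega_L0 = 0.315, 0.685

def density_ratio(z):
    # Matter dilutes as (1+z)^3 while the cosmological constant does not,
    # so their ratio sweeps many orders of magnitude over cosmic history.
    return (Omega_m0 / Omega_L0) * (1.0 + z) ** 3

for z in (0.0, 1.0, 10.0, 1000.0):
    print(f"z = {z:>6}: rho_m / rho_Lambda = {density_ratio(z):.3g}")
```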
- (D1)
- Cosmological Data Used in ML Techniques: Traditional cosmological probes, such as SNe Ia, BAO, and CMB data, continue to be fundamental tools in cosmology. They provide insights into the expansion history, the energy densities, and the parameters of a specific cosmological model. The incorporation of ML techniques has enhanced their robustness and efficiency through the ability to analyze large and complex datasets, with the potential to optimize parameter estimation for cosmological models alternative to ΛCDM. Nevertheless, we identify that a large number of the reviewed works do not use the majority of the available databases and focus their studies on a relatively narrow subset of the available data. For instance, several studies use particular datasets, such as CMB alone, or combine SNe Ia with other datasets such as OHD, without incorporating the full available data. Using only some of these datasets yields valuable results for cosmology but limits the scope of the analysis and of the parameter estimation for certain cosmological models. Moreover, it may introduce biases, because each dataset has its own uncertainties and observational limitations. In this sense, we identify a gap: any alternative cosmological model, to be considered viable, must describe the full background cosmological data as well as, or better than, the ΛCDM model. ΛCDM has been extensively tested and validated with a wide range of observational data, and any alternative model must be consistent, at least at the same level, with current observations. This involves not only using a single dataset but also combining all of them in a full joint analysis. Therefore, we recommend that future studies use larger numbers of datasets, such as gravitational waves, CMB, LSS, OHD, and SNe Ia, among others, in order to be fully consistent in the analysis of alternative cosmological models. Nevertheless, we emphasize that this is not a straightforward task, owing to the complexity of the cosmological models and the specificities of their analyses.
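In practice, joining independent probes means summing their χ² (or log-likelihood) contributions, each with its own covariance. The toy sketch below combines invented OHD-like points with an invented local H0 determination under a flat ΛCDM H(z); all numbers are illustrative and not drawn from the reviewed datasets.

```python
import numpy as np

def chi2(data, model, cov):
    # Generic Gaussian chi^2 term with a full covariance matrix.
    r = data - model
    return float(r @ np.linalg.solve(cov, r))

# Flat-LCDM expansion rate with Omega_m fixed; H0 is the free parameter.
Omega_m = 0.3
def H(z, H0):
    return H0 * np.sqrt(Omega_m * (1 + z) ** 3 + 1 - Omega_m)

# Invented OHD-like points plus an invented local H0 determination.
z_ohd = np.array([0.1, 0.5, 1.0])
H_obs = np.array([72.0, 91.0, 122.0])       # km/s/Mpc
cov_ohd = np.diag([4.0, 5.0, 8.0]) ** 2
H0_local, sigma_local = 70.0, 2.0

def chi2_total(H0):
    # Independent probes: their chi^2 contributions simply add.
    return (chi2(H_obs, H(z_ohd, H0), cov_ohd)
            + ((H0_local - H0) / sigma_local) ** 2)

grid = np.linspace(60.0, 80.0, 2001)
best = grid[np.argmin([chi2_total(h) for h in grid])]
print(f"joint best-fit H0 = {best:.2f} km/s/Mpc")
```

A real joint analysis also has to handle cross-probe correlations, nuisance parameters, and differing systematics, which is part of why combining all available data is not a straightforward task.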
6.3. Technique-Level Comparison and Benchmarking Considerations
6.3.1. Why Cross-Paper Benchmarking Is Not Methodologically Sound
6.3.2. When Each Family of Techniques Tends to Be Preferable
- NN-based accelerators (emulators/surrogates, accelerated profile-likelihood computations) are preferable when wall-time and throughput dominate (e.g., repeated model evaluations), provided that calibration is checked against standard pipelines.
- BNN/GP approaches tend to be preferable when calibrated posteriors and well-characterized uncertainty are the priority, accepting higher computational cost when their assumptions hold.
- Hybrid schemes (e.g., NN emulators within MCMC, or normalizing flows in simulation-based inference) are attractive when both speed and calibration matter, using ML to accelerate expensive steps while retaining principled posterior checks.
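The hybrid pattern in the last item can be sketched in a few lines: train a cheap surrogate of the expensive observable once, then run a standard Metropolis-Hastings loop that only ever calls the surrogate. Here the "expensive" model is a toy E(z) = H(z)/H0 in flat ΛCDM, and the surrogate is a polynomial fit standing in for a trained NN emulator; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_E(Om, z=1.0):
    # Stand-in for a costly pipeline: dimensionless expansion rate
    # E(z) = H(z)/H0 in flat LCDM.
    return np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

# "Train" a cheap surrogate once (a polynomial fit standing in for a
# neural-network emulator); sampling below uses only surrogate calls.
grid = np.linspace(0.0, 1.0, 50)
coeffs = np.polyfit(grid, expensive_E(grid), deg=6)
surrogate = lambda Om: np.polyval(coeffs, Om)

E_obs, sigma = expensive_E(0.3), 0.02      # noiseless mock + error bar

def log_post(Om):
    if not 0.0 < Om < 1.0:                 # flat prior on (0, 1)
        return -np.inf
    return -0.5 * ((E_obs - surrogate(Om)) / sigma) ** 2

# Plain Metropolis-Hastings; the expensive model is never called here.
Om, lp = 0.5, log_post(0.5)
chain = []
for _ in range(20000):
    prop = Om + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        Om, lp = prop, lp_prop
    chain.append(Om)
chain = np.array(chain[5000:])             # discard burn-in
print(f"posterior mean Omega_m = {chain.mean():.3f}")
```

In a real pipeline the surrogate's emulation error should be validated against the full computation (and bounded or propagated) before trusting the resulting posterior, which is exactly the calibration concern raised above.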
6.3.3. Minimal Conditions for a Future Community Benchmark
6.3.4. Practical Implication for This Review
7. Threats to Validity
7.1. External Validity
7.2. Construct Validity
7.3. Internal Validity
7.4. Conclusion Validity
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
MCMC | Markov chain Monte Carlo
ML | Machine Learning
SLR | Systematic Literature Review
DL | Deep Learning
CDM | Cold Dark Matter
SNe Ia | Type Ia Supernovae
SNe | Supernovae
OHD | Observational Hubble Parameter Data
BAOs | Baryon Acoustic Oscillations
CMB | Cosmic Microwave Background
WMAP | Wilkinson Microwave Anisotropy Probe
LSS | Large Scale Structure
2dFGRS | 2-degree Field Galaxy Redshift Survey
SDSS | Sloan Digital Sky Survey
GL | Gravitational Lensing
H0LiCOW | H0 Lenses in COSMOGRAIL’s Wellspring
LSST | Legacy Survey of Space and Time
SPHEREx | Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer
NGRST | Nancy G. Roman Space Telescope
DESI | Dark Energy Spectroscopic Instrument
PFS | Prime Focus Spectrograph
NN | Neural Network
RNN | Recurrent Neural Network
BML | Bayesian Machine Learning
GP | Gaussian Processes
BDTs | Bayesian Decision Trees
BNN | Bayesian Neural Network
CNN | Convolutional Neural Network
GCD | Galaxy Clustering Data
WFIRST | Wide Field Infrared Survey Telescope
JCAP | Journal of Cosmology and Astroparticle Physics
PRD | Physical Review D
MNRAS | Monthly Notices of the Royal Astronomical Society
ApJS | The Astrophysical Journal Supplement Series
EPJC | The European Physical Journal C
PINN | Physics-Informed Neural Network
Appendix A. List of Selected Papers for Review and Extracted Data
Title 1 | Author(s) | Databases | Catalogs | ML Models | Research Aim | Ref. | Year |
Accelerated Bayesian inference using deep learning | Moss, Adam | CMB | Planck 2015 | NN | Improvement | [92] | 2020 |
Accelerating cosmological inference with Gaussian processes and neural networks—an application to LSST Y1 weak lensing and galaxy clustering | Boruah, Supranta S; Eifler, Tim; Miranda, Vivian; Krishanth, P M Sai | Weak Lensing | LSST | NN | Improvement | [77] | 2022 |
Accelerating MCMC algorithms through Bayesian Deep Networks | Hortua, Hector J.; Volpi, Riccardo; Marinelli, Dimitri; Malago, Luigi | CMB | Simulated | BNN | Improvement | [94] | 2020 |
An analysis of the H0 tension problem in the Universe with viscous dark fluid | Elizalde, Emilio; Khurshudyan, Martiros; Odintsov, Sergei D.; Myrzakulov, Ratbay | Generated | — | BML | Application | [96] | 2020
An approach to cold dark matter deviation and the tension problem by using machine learning | Elizalde, Emilio; Gluza, Janusz; Khurshudyan, Martiros | Generated | — | BML | Application | [97] | 2021 |
Faster Bayesian inference with neural network bundles and new results for models | Chantada, Augusto T. and Landau, Susana J. and Protopapas, Pavlos and Scóccola, Claudia G. and Garraffo, Cecilia | SNe Ia, OHD | WMAP, N/S, N/S, SDSS, 2dF survey | NN | Improvement | [84] | 2023 |
CONNECT: a neural network based framework for emulating cosmological observables and cosmological parameter inference | Nygaard, Andreas; Holm, Emil Brinch; Hannestad, Steen; Tram, Thomas | CMB | Planck 2018 | NN | Improvement | [90] | 2023 |
Constraints on Cosmic Opacity from Bayesian Machine Learning: The hidden side of the tension problem | Elizalde, Emilio; Khurshudyan, Martiros | Generated | — | BML | Application | [98] | 2020 |
CosmicNet. Part I. Physics-driven implementation of neural networks within Einstein-Boltzmann Solvers | Albers, Jasper; Fidler, Christian; Lesgourgues, Julien; Schöneberg, Nils; Torrado, Jesus | CMB, Lensing, BAO | Planck 2018, N/S, N/S | NN | Improvement | [78] | 2019 |
Cosmology-informed neural networks to solve the background dynamics of the Universe | Chantada, Augusto T.; Landau, Susana J.; Protopapas, Pavlos; Scóccola, Claudia G.; Garraffo, Cecilia | SNe Ia, OHD, BAO | Pantheon, Cosmic Chronometers, N/S | NN | Improvement | [79] | 2022 |
Constraints on prospective deviations from the cold dark matter model using a Gaussian process | Khurshudyan, Martiros; Elizalde, Emilio | OHD | WMAP, SDSS, 2dF survey | GP | Improvement | [86] | 2024 |
ECoPANN: A Framework for Estimating Cosmological Parameters Using Artificial Neural Networks | Wang, Guo-Jian; Li, Si-Yao; Xia, Jun-Qing | CMB, SNe Ia, BAO | Simulated, Simulated, Simulated | NN | Improvement | [85] | 2020 |
KiDS-1000 cosmology: machine learning—accelerated constraints on interacting dark energy with CosmoPower | Spurio Mancini, A; Pourtsidou, A | CMB, Weak Lensing | Planck 2018, KiDS-1000 | NN | Improvement | [76] | 2022 |
Late Time Attractors of Some Varying Chaplygin Gas Cosmological Models | Khurshudyan, Martiros; Myrzakulov, Ratbay | Generated | — | BML | Application | [99] | 2021 |
Learn-as-you-go acceleration of cosmological parameter estimates | Aslanyan, Grigor; Easther, Richard; Price, Layne C. | CMB | Planck, WMAP | BDT | Improvement | [93] | 2015 |
Likelihood-free Cosmological Constraints with Artificial Neural Networks: An Application on Hubble Parameters and SNe Ia | Wang, Yu-Chen; Xie, Yuan-Bo; Zhang, Tong-Jie; Huang, Hui-Chao; Zhang, Tingting; Liu, Kun | SNe Ia, OHD | Pantheon, N/S | NN | Improvement | [80] | 2021 |
LINNA: Likelihood Inference Neural Network Accelerator | To, Chun-Hao; Rozo, Eduardo; Krause, Elisabeth; Wu, Hao-Yi; Wechsler, Risa H.; Salcedo, Andrés N. | Dark Energy Survey (DES) | DES (year 1) | NN | Improvement | [88] | 2023 |
Parameter estimation for the cosmic microwave background with Bayesian neural networks | Hortúa, Héctor J.; Volpi, Riccardo; Marinelli, Dimitri; Malagò, Luigi | CMB | Simulated | BNN | Improvement | [95] | 2020 |
Solving the H0 tension in f(T) gravity through Bayesian machine learning | Aljaf, Muhsin; Elizalde, Emilio; Khurshudyan, Martiros; Myrzakulov, Kairat; Zhadyranova, Aliya | Strong Lensing, OHD | H0LiCOW, Cosmic Chronometers | BNN | Application | [75] | 2022
A semi-model-independent approach to describe a cosmological database | Mehrabi, Ahmad | SNe Ia, OHD, BAO | Pantheon, Cosmic Chronometers, N/S | NN | Improvement | [81] | 2023 |
A thorough investigation of the prospects of eLISA in addressing the Hubble tension: Fisher forecast, MCMC and Machine Learning | Shah, Rahul; Bhaumik, Arko; Mukherjee, Purba; Pal, Supratik | CMB, BAO, SNe Ia | Planck 2018, 6dFGS, SDSS MGS, BOSS DR12, Pantheon | GP | Application | [82] | 2023 |
CoLFI: Cosmological Likelihood-free Inference with Neural Density Estimators | Wang, Guo-Jian; Cheng, Cheng; Ma, Yin-Zhe; Xia, Jun-Qing; Abebe, Amare; Beesham, Aroonkumar | CMB, SNe Ia | Planck 2015, Pantheon | NN | Improvement | [83] | 2023 |
Fast and effortless computation of profile likelihoods using CONNECT | Nygaard, Andreas; Holm, Emil Brinch; Hannestad, Steen; Tram, Thomas | CMB | Planck 2018 | NN | Improvement | [91] | 2023 |
High-accuracy emulators for observables in CDM, , , and w cosmologies | Bolliet, Boris; Mancini, Alessio Spurio; Hill, J. Colin; Madhavacheril, Mathew; Jense, Hidde T.; Calabrese, Erminia; Dunkley, Jo | CMB, LSS, BAO | Planck 2018, DES (year 1), BOSS DR12 | NN | Improvement | [89] | 2023 |
NAUTILUS: boosting Bayesian importance nested sampling with deep learning | Lange, Johannes U. | Galaxy Clustering data | Halo Connection | NN | Improvement | [73] | 2023 |
Test of artificial neural networks in likelihood-free cosmological constraints: A comparison of information maximizing neural networks and denoising autoencoder | Chen, Jie-Feng; Wang, Yu-Chen; Zhang, Tingting; Zhang, Tong-Jie | OHD | N/S | NN | Improvement | [87] | 2023 |
Estimating Cosmological Constraints from Galaxy Cluster Abundance using Simulation-Based Inference | Reza, Moonzarin; Zhang, Yuanyuan; Nord, Brian; Poh, Jason; Ciprijanovic, Aleksandra; Strigari, Louis | Galaxy Clustering data | Pantheon+, Cosmic Chronometers | NN | Improvement | [74] | 2022 |
1 All studies included in this review adhered to the predefined inclusion criteria. |
Title | Performance 1 |
Accelerated Bayesian inference using deep learning | The model accelerates MCMC convergence, achieving independent samples in just likelihood evaluations. |
Accelerating cosmological inference with Gaussian processes and neural networks—an application to LSST Y1 weak lensing and galaxy clustering | The emulator achieves full MCMC-level accuracy while reducing inference time by over two orders of magnitude. |
Accelerating MCMC algorithms through Bayesian Deep Networks | The Bayesian neural network accelerates MCMC inference by at the cost of slightly increased uncertainties. |
An analysis of the H0 tension problem in the Universe with viscous dark fluid | The Bayesian model puts very tight constraints on the parameters () and successfully resolves the cosmological H0 tension. |
An approach to cold dark matter deviation and the tension problem by using machine learning | Bayesian ML achieved precision on estimates and detected dark matter deviations with significance. |
Faster Bayesian inference with neural network bundles and new results for models | The neural network bundle delivers under error while accelerating Bayesian inference by up to . |
CONNECT: a neural network based framework for emulating cosmological observables and cosmological parameter inference | CONNECT emulates CLASS with neural networks, achieving model evaluations in milliseconds and parameter deviations below . |
Constraints on Cosmic Opacity from Bayesian Machine Learning: The hidden side of the tension problem | Bayesian ML analysis yields tight cosmic opacity constraints—with uncertainties as low as for —shedding light on the tension. |
CosmicNet. Part I. Physics-driven implementation of neural networks within Einstein-Boltzmann Solvers | The neural-network-accelerated perturbation module in CLASS achieves comparable cosmological accuracy while speeding up computations by nearly . |
Cosmology-informed neural networks to solve the background dynamics of the Universe | The cosmology-informed neural network achieves sub-percent error () across the parameter space with high evaluation speeds. |
Constraints on prospective deviations from the cold dark matter model using a Gaussian process | The GP model estimates the Hubble constant at with about uncertainty. |
ECoPANN: A Framework for Estimating Cosmological Parameters Using Artificial Neural Networks | ECoPANN delivers cosmological parameter estimates as accurately as MCMC but in seconds instead of hours. |
KiDS-1000 cosmology: machine learning—accelerated constraints on interacting dark energy with CosmoPower | The CosmoPower neural emulator delivers CLASS-level accuracy while running faster. |
Late Time Attractors of Some Varying Chaplygin Gas Cosmological Models | The Bayesian machine learning model infers cosmological parameters with uncertainty in but fails to resolve the tension or fit high-redshift data. |
Learn-as-you-go acceleration of cosmological parameter estimates | Learn-as-you-go emulation speeds up cosmological parameter estimation by about without sacrificing accuracy. |
Likelihood-free Cosmological Constraints with Artificial Neural Networks: An Application on Hubble Parameters and SNe Ia | The likelihood-free DAE+MAF method estimates cosmological parameters as accurately as traditional MCMC without needing an explicit likelihood. |
LINNA: Likelihood Inference Neural Network Accelerator | LINNA achieves cosmological parameter inference with bias and speedup over brute-force methods. |
Parameter estimation for the cosmic microwave background with Bayesian neural networks | VGG with Flipout delivers the most accurate and fastest CMB parameter estimation. |
Solving the H0 tension in f(T) gravity through Bayesian machine learning | Bayesian machine learning precisely constrained cosmological parameters and demonstrated that exponential f(T) models resolve the H0 tension.
A semi-model-independent approach to describe a cosmological database | Reduce el estadístico en puntos para datos de y en puntos para el conjunto Pantheon de SNIa. |
A thorough investigation of the prospects of eLISA in addressing the Hubble tension: Fisher forecast, MCMC and Machine Learning | The machine learning approach (GP) significantly outperforms Fisher and MCMC methods, reducing the Hubble tension by an additional down to . |
CoLFI: Cosmological Likelihood-free Inference with Neural Density Estimators | CoLFI’s Mixture Neural Network matches MCMC precision with fewer simulations. |
Fast and effortless computation of profile likelihoods using CONNECT | CONNECT achieves sub- error in emulation and speeds up profile likelihood computations by –. |
High-accuracy emulators for observables in CDM, , , and w cosmologies | The emulators deliver sub-percent accuracy across cosmological observables with a speedup over traditional Boltzmann codes. |
NAUTILUS: boosting Bayesian importance nested sampling with deep learning | Nautilus cuts likelihood evaluations by up to while maintaining over accuracy in Bayesian evidence estimates. |
Test of artificial neural networks in likelihood-free cosmological constraints: A comparison of information maximizing neural networks and denoising autoencoder | MAF-DAE yields tighter cosmological constraints with minimal information loss compared to MAF-IMNN. |
Estimating Cosmological Constraints from Galaxy Cluster Abundance using Simulation-Based Inference | The SBI inference accurately recovers cosmological parameters with uncertainties comparable to MCMC. |
1 The “Performance” values reported in this table are drawn solely from the conclusions and results presented in each paper. Since each study employs different metrics, evaluation methodologies, and experimental conditions, these figures should not be interpreted as directly comparable; they merely reflect what each author was able to quantify and emphasize in their work. |
1 Taxonomy Note: Rather than a normative “traditional ML vs. deep learning” dichotomy, we report results via the model family (GP, BML, BDT, NN, and BNN) and—where relevant—via uncertainty handling (deterministic vs. probabilistic), which aligns with our corpus and the speed–calibration comparisons.
References
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Rubin, D.B. Bayesian Data Analysis; Chapman and Hall/CRC: Boca Raton, FL, USA, 1995. [Google Scholar]
- Lewis, A.; Bridle, S. Cosmological parameters from CMB and other data: A Monte Carlo approach. Phys. Rev. D 2002, 66, 103511. [Google Scholar] [CrossRef]
- Ntampaka, M.; Trac, H.; Sutherland, D.J.; Battaglia, N.; Póczos, B.; Schneider, J. A machine learning approach for dynamical mass measurements of galaxy clusters. Astrophys. J. 2015, 803, 50. [Google Scholar] [CrossRef]
- Higgins, J.P.T. Cochrane Handbook for Systematic Reviews of Interventions. 2008. Available online: http://www.cochrane-handbook.org (accessed on 10 March 2025).
- López-Sánchez, M.; Hernández-Ocaña, B.; Chávez-Bosquez, O.; Hernández-Torruco, J. Supervised Deep Learning Techniques for Image Description: A Systematic Review. Entropy 2023, 25, 553. [Google Scholar] [CrossRef]
- Bozkurt, A.; Sharma, R.C. Emergency remote teaching in a time of global crisis due to CoronaVirus pandemic. Asian J. Distance Educ. 2020, 15, i–vi. [Google Scholar]
- Lochner, M.; McEwen, J.D.; Peiris, H.V.; Lahav, O.; Winter, M.K. Photometric supernova classification with machine learning. Astrophys. J. Suppl. Ser. 2016, 225, 31. [Google Scholar] [CrossRef]
- Dieleman, S.; Willett, K.W.; Dambre, J. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Mon. Not. R. Astron. Soc. 2015, 450, 1441–1459. [Google Scholar] [CrossRef]
- Mukhanov, V. Physical Foundations of Cosmology; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar] [CrossRef]
- Bertone, G.; Hooper, D. History of dark matter. Rev. Mod. Phys. 2018, 90, 045002. [Google Scholar] [CrossRef]
- Salucci, P. Dark Matter in Galaxies: Evidences and challenges. Found. Phys. 2018, 48, 1517–1537. [Google Scholar] [CrossRef]
- Riess, A.G.; Filippenko, A.V.; Challis, P.; Clocchiatti, A.; Diercks, A.; Garnavich, P.M. Observational evidence from supernovae for an accelerating universe and a cosmological constant. Astron. J. 1998, 116, 1009–1038. [Google Scholar] [CrossRef]
- Perlmutter, S.; Aldering, G.; Goldhaber, G.; Knop, R.A.; Nugent, P.; Castro, P.G.; Deustua, S.; Fabbro, S.; Goobar, A.; Groom, D.E.; et al. Measurements of Ω and Λ from 42 High Redshift Supernovae. Astrophys. J. 1999, 517, 565–586. [Google Scholar] [CrossRef]
- Fischer, A.E. Friedmann’s equation and the creation of the universe. Int. J. Mod. Phys. D 2018, 27, 1847013. [Google Scholar] [CrossRef]
- Workman, R.L.; Burkert, V.D.; Crede, V.; Klempt, E.; Thoma, U.; Tiator, L.; Rabbertz, K. Review of Particle Physics. PTEP 2022, 2022, 083C01. [Google Scholar] [CrossRef]
- Velten, H.E.S.; vom Marttens, R.F.; Zimdahl, W. Aspects of the cosmological “coincidence problem”. Eur. Phys. J. C 2014, 74, 3160. [Google Scholar] [CrossRef]
- Aghanim, N. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 2020, 641, A6, Erratum in Astron. Astrophys. 2021, 652, C4. [Google Scholar] [CrossRef]
- Liu, Z.W.; Roepke, F.K.; Han, Z. Type Ia Supernova Explosions in Binary Systems: A Review. Res. Astron. Astrophys. 2023, 23, 082001. [Google Scholar] [CrossRef]
- Moresco, M.; Pozzetti, L.; Cimatti, A.; Jimenez, R.; Maraston, C.; Verde, L.; Thomas, D.; Citro, A.; Tojeiro, R.; Wilkinson, D. A 6% measurement of the Hubble parameter at z∼0.45: Direct evidence of the epoch of cosmic re-acceleration. JCAP 2016, 05, 014. [Google Scholar] [CrossRef]
- Scolnic, D.M.; Jones, D.O.; Rest, A.; Pan, Y.C.; Chornock, R.; Foley, R.J.; Huber, M.E.; Kessler, R.; Narayan, G.; Riess, A.G.; et al. The Complete Light-curve Sample of Spectroscopically Confirmed SNe Ia from Pan-STARRS1 and Cosmological Constraints from the Combined Pantheon Sample. Astrophys. J. 2018, 859, 101. [Google Scholar] [CrossRef]
- Brout, D.; Scolnic, D.; Popovic, B.; Riess, A.G.; Carr, A.; Zuntz, J.; Wiseman, P. The Pantheon+ Analysis: Cosmological Constraints. Astrophys. J. 2022, 938, 110. [Google Scholar] [CrossRef]
- Magana, J.; Amante, M.H.; Garcia-Aspeitia, M.A.; Motta, V. The Cardassian expansion revisited: Constraints from updated Hubble parameter measurements and type Ia supernova data. Mon. Not. Roy. Astron. Soc. 2018, 476, 1036–1049. [Google Scholar] [CrossRef]
- Jimenez, R.; Loeb, A. Constraining cosmological parameters based on relative galaxy ages. Astrophys. J. 2002, 573, 37–42. [Google Scholar] [CrossRef]
- DESI Collaboration; Abdul-Karim, M.; Aguilar, J.; Ahlen, S.; Alam, S.; Allen, L.; Allende Prieto, C.; Alves, O.; An, A.; Andrade, U.; et al. DESI DR2 Results II: Measurements of Baryon Acoustic Oscillations and Cosmological Constraints. arXiv 2025, arXiv:2503.14738. [Google Scholar] [CrossRef]
- Peebles, P.J.E.; Yu, J.T. Primeval adiabatic perturbation in an expanding universe. Astrophys. J. 1970, 162, 815–836. [Google Scholar] [CrossRef]
- Eisenstein, D.J.; Hu, W. Baryonic features in the matter transfer function. Astrophys. J. 1998, 496, 605. [Google Scholar] [CrossRef]
- Beutler, F.; Blake, C.; Colless, M.; Jones, D.H.; Staveley-Smith, L.; Campbell, L.; Parker, Q.; Saunders, W.; Watson, F. The 6dF Galaxy Survey: Baryon acoustic oscillations and the local Hubble constant: 6dFGS: BAOs and the local Hubble constant. Mon. Not. Roy. Astron. Soc. 2011, 416, 3017–3032. [Google Scholar] [CrossRef]
- Ross, A.J.; Samushia, L.; Howlett, C.; Percival, W.J.; Burden, A.; Manera, M. The clustering of the SDSS DR7 main Galaxy sample—I. A 4 per cent distance measure at z = 0.15. Mon. Not. Roy. Astron. Soc. 2015, 449, 835–847. [Google Scholar] [CrossRef]
- Alam, S.; Ata, M.; Bailey, S.; Beutler, F.; Bizyaev, D.; Blazek, J.A.; Bolton, A.S.; Brownstein, J.R.; Burden, A.; Chuang, C.; et al. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Cosmological analysis of the DR12 galaxy sample. Mon. Not. Roy. Astron. Soc. 2017, 470, 2617–2652. [Google Scholar] [CrossRef]
- Aghamousa, A.; Aguilar, J.; Ahlen, S.; Alam, S.; Allen, L.E.; Prieto, C.A.; Lang, D. The DESI Experiment Part I: Science, Targeting, and Survey Design. arXiv 2016, arXiv:1611.00036. [Google Scholar] [CrossRef]
- Penzias, A.A.; Wilson, R.W. A Measurement of Excess Antenna Temperature at 4080 Mc/s. Astrophys. J. 1965, 142, 419–421. [Google Scholar] [CrossRef]
- Tegmark, M.; Strauss, M.A.; Blanton, M.R.; Abazajian, K.; Dodelson, S.; Sandvik, H. Cosmological parameters from SDSS and WMAP. Phys. Rev. D 2004, 69, 103501. [Google Scholar] [CrossRef]
- Wang, Y.; Mukherjee, P. Robust dark energy constraints from supernovae, galaxy clustering, and three-year wilkinson microwave anisotropy probe observations. Astrophys. J. 2006, 650, 1–6. [Google Scholar] [CrossRef]
- Hinshaw, G.; Larson, D.; Komatsu, E.; Spergel, D.N.; Bennett, C.; Dunkley, J.; Nolta, M.R.; Halpern, M.; Hill, R.S.; Odegard, N.; et al. Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. Astrophys. J. Suppl. Ser. 2013, 208, 19. [Google Scholar] [CrossRef]
- Aghanim, N.; Akrami, Y.; Arroja, F.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Pettorino, V. Planck 2018 results. I. Overview and the cosmological legacy of Planck. Astron. Astrophys. 2020, 641, A1. [Google Scholar] [CrossRef]
- Springel, V.; Frenk, C.S.; White, S.D.M. The large-scale structure of the Universe. Nature 2006, 440, 1137. [Google Scholar] [CrossRef] [PubMed]
- Colless, M.; Dalton, G.; Maddox, S.; Sutherl, W.; Norberg, P.; Cole, S.; Taylor, K. The 2dF Galaxy Redshift Survey: Spectra and redshifts. Mon. Not. Roy. Astron. Soc. 2001, 328, 1039. [Google Scholar] [CrossRef]
- York, D.G.; Adelman, J.; Anderson, J.E., Jr.; Anderson, S.F.; Annis, J.; Bahcall, N.A.; Bakken, J.A.; Barkhouser, R.; Bastian, S.; Berman, E.; et al. The Sloan Digital Sky Survey: Technical Summary. Astron. J. 2000, 120, 1579–1587. [Google Scholar] [CrossRef]
- Wong, K.C.; Suyu, S.H.; Chen, G.C.-F.; Rusu, C.E.; Millon, M.; Sluse, D.; Bonvin, V.; Fassnacht, C.D.; Taubenberger, S.; Auger, M.W.; et al. H0LiCOW – XIII. A 2.4 per cent measurement of H0 from lensed quasars: 5.3σ tension between early- and late-Universe probes. Mon. Not. Roy. Astron. Soc. 2020, 498, 1420–1439. [Google Scholar] [CrossRef]
- Turner, M.S. The Road to Precision Cosmology. Annu. Rev. Nucl. Part. Sci. 2022, 72, 1–35. [Google Scholar] [CrossRef]
- Abdalla, E.; Abellán, G.F.; Aboubrahim, A.; Agnello, A.; Akarsu, Ö.; Akrami, Y.; Pettorino, V. Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies. J. High Energy Astrophys. 2022, 34, 49–211. [Google Scholar] [CrossRef]
- Riess, A.G.; Yuan, W.; Macri, L.M.; Scolnic, D.; Brout, D.; Casertano, S.; Zheng, W. A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km s−1 Mpc−1 Uncertainty from the Hubble Space Telescope and the SH0ES Team. Astrophys. J. Lett. 2022, 934, L7. [Google Scholar] [CrossRef]
- Hogg, D.W.; Foreman-Mackey, D. Data analysis recipes: Using Markov Chain Monte Carlo. Astrophys. J. Suppl. 2018, 236, 11. [Google Scholar] [CrossRef]
- Goodman, J.; Weare, J. Ensemble samplers with affine invariance. Commun. Appl. Math. Comput. Sci. 2010, 5, 65–80. [Google Scholar] [CrossRef]
- Foreman-Mackey, D.; Hogg, D.W.; Lang, D.; Goodman, J. emcee: The MCMC Hammer. Publ. Astron. Soc. Pac. 2013, 125, 306–312. [Google Scholar] [CrossRef]
- Hajian, A. Efficient Cosmological Parameter Estimation with Hamiltonian Monte Carlo. Phys. Rev. D 2007, 75, 083525. [Google Scholar] [CrossRef]
- Ivezić, Ž.; Kahn, S.M.; Tyson, J.A.; Abel, B.; Acosta, E.; Allsman, R.; Johnson, M.W. LSST: From Science Drivers to Reference Design and Anticipated Data Products. Astrophys. J. 2019, 873, 111. [Google Scholar] [CrossRef]
- Laureijs, R.; Amiaux, J.; Arduini, S.; Auguères, J.-L.; Brinchmann, J.; Cole, R.; Cropper, M.; Dabin, C.; Duvet, L.; Ealet, A.; et al. Euclid Definition Study Report. arXiv 2011, arXiv:1110.3193. [Google Scholar] [CrossRef]
- Doré, O.; Bock, J.; Ashby, M.; Capak, P.; Cooray, A.; de Putter, R.; Eifler, T.; Flagey, N.; Gong, Y.; Habib, S.; et al. Cosmology with the SPHEREX All-Sky Spectral Survey. arXiv 2014, arXiv:1412.4872. [Google Scholar]
- Spergel, D.; Gehrels, N.; Baltay, C.; Bennett, D.; Breckinridge, J.; Donahue, M.; Dressler, A.; Gaudi, B.S.; Greene, T.; Guyon, O.; et al. Wide-Field InfrarRed Survey Telescope-Astrophysics Focused Telescope Assets WFIRST-AFTA 2015 Report. arXiv 2015, arXiv:1503.03757. [Google Scholar]
- Takada, M.; Ellis, R.S.; Chiba, M.; Greene, J.E.; Aihara, H.; Arimoto, N.; Wyse, R. Extragalactic science, cosmology, and Galactic archaeology with the Subaru Prime Focus Spectrograph. Publ. Astron. Soc. Jpn. 2014, 66, R1. [Google Scholar] [CrossRef]
- Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
- Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef] [PubMed]
- Minsky, M.; Papert, S. Perceptrons: An Introduction to Computational Geometry; MIT Press: Cambridge, MA, USA, 1969. [Google Scholar]
- Chollet, F. Deep Learning with Python; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
- Aggarwal, C.C. An Introduction to Neural Networks. In Neural Networks and Deep Learning: A Textbook; Springer International Publishing: Berlin/Heidelberg, Germany, 2018. [Google Scholar] [CrossRef]
- Bharadiya, J.P. A Review of Bayesian Machine Learning Principles, Methods, and Applications. Int. J. Innov. Sci. Res. Technol. 2023, 8, 2033–2038. [Google Scholar]
- Seeger, M. Gaussian processes for machine learning. Int. J. Neural Syst. 2004, 14, 69–106. [Google Scholar] [CrossRef] [PubMed]
- Nuti, G.; Rugama, L.A.J.; Cross, A.I. A Bayesian Decision Tree Algorithm. arXiv 2019, arXiv:1901.03214. [Google Scholar] [CrossRef]
- Denison, D.G.T.; Mallick, B.K.; Smith, A.F.M. A Bayesian CART algorithm. Biometrika 1998, 85, 363–377. [Google Scholar] [CrossRef]
- Blundell, C.; Cornebise, J.; Kavukcuoglu, K.; Wierstra, D. Weight Uncertainty in Neural Networks. arXiv 2015, arXiv:1505.05424. [Google Scholar] [CrossRef]
- de Dios Rojas Olvera, J.; Gómez-Vargas, I.; Vázquez, J.A. Observational cosmology with artificial neural networks. Universe 2022, 8, 120. [Google Scholar] [CrossRef]
- Moriwaki, K.; Nishimichi, T.; Yoshida, N. Machine learning for observational cosmology. Rep. Prog. Phys. 2023, 86, 076901. [Google Scholar] [CrossRef]
- Lahav, O. Deep Machine Learning in Cosmology: Evolution or Revolution? arXiv 2023, arXiv:2302.04324. [Google Scholar] [CrossRef]
- Dvorkin, C.; Mishra-Sharma, S.; Nord, B.; Villar, V.A.; Avestruz, C.; Bechtol, K.; Ćiprijanović, A.; Connolly, A.J.; Garrison, L.H.; Narayan, G.; et al. Machine learning and cosmology. arXiv 2022, arXiv:2203.08056. [Google Scholar] [CrossRef]
- Han, B.; Ding, H.; Zhang, Y.; Zhao, Y. Improving accuracy of Quasars’ photometric redshift estimation by integration of KNN and SVM. Proc. Int. Astron. Union 2015, 11, 209. [Google Scholar] [CrossRef]
- Di Valentino, E.; Levi Said, J.; Riess, A.G.; Pollo, A.; Poulin, V.; CosmoVerse Network. The CosmoVerse White Paper: Addressing observational tensions in cosmology with systematics and fundamental physics. Phys. Dark Universe 2025, 49, 101965. [Google Scholar] [CrossRef]
- Spurio Mancini, A.; Piras, D.; Alsing, J.; Joachimi, B.; Hobson, M.P. CosmoPower: Emulating cosmological power spectra for accelerated Bayesian inference from next-generation surveys. Mon. Not. R. Astron. Soc. 2022, 511, 1771–1788. [Google Scholar] [CrossRef]
- Mootoovaloo, A.; García-García, C.; Alonso, D.; Ruiz-Zapatero, J. emuflow: Normalizing flows for joint cosmological analysis. Mon. Not. R. Astron. Soc. 2025, 536, 190–202. [Google Scholar] [CrossRef]
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE-2007-01; Keele University: Keele, UK, 2007. Available online: https://docs.opendeved.net/lib/7RP54LK8 (accessed on 10 March 2025).
- Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University: Keele, UK, 2004; Volume 33, pp. 1–26. [Google Scholar]
- Rojas, L.; Espinoza, S.; González, E.; Maldonado, C.; Luo, F. Protocol for the Systematic Literature Review (PRISMA 2020): ML for Observational Constraints in Cosmology. 2025. Available online: https://zenodo.org/records/16899506 (accessed on 10 March 2025). [CrossRef]
- Lange, J.U. Nautilus: Boosting Bayesian importance nested sampling with deep learning. Mon. Not. R. Astron. Soc. 2023, 525, 3181–3194. [Google Scholar] [CrossRef]
- Reza, M.; Zhang, Y.; Nord, B.; Poh, J.; Ciprijanovic, A.; Strigari, L. Estimating Cosmological Constraints from Galaxy Cluster Abundance using Simulation-Based Inference. arXiv 2022, arXiv:2208.00134. [Google Scholar] [CrossRef]
- Aljaf, M.; Elizalde, E.; Khurshudyan, M.; Myrzakulov, K.; Zhadyranova, A. Solving the H0 tension in f(T) gravity through Bayesian machine learning. Eur. Phys. J. C 2022, 82, 1130. [Google Scholar] [CrossRef]
- Spurio Mancini, A.; Pourtsidou, A. KiDS-1000 cosmology: Machine learning-accelerated constraints on interacting dark energy with CosmoPower. Mon. Not. R. Astron. Soc. 2022, 512, L44–L48. [Google Scholar] [CrossRef]
- Boruah, S.S.; Eifler, T.; Miranda, V.; Sai Krishanth, P.M. Accelerating cosmological inference with Gaussian processes and neural networks – an application to LSST Y1 weak lensing and galaxy clustering. Mon. Not. R. Astron. Soc. 2022, 518, 4818–4831. [Google Scholar] [CrossRef]
- Albers, J.; Fidler, C.; Lesgourgues, J.; Schöneberg, N.; Torrado, J. CosmicNet. Part I. Physics-driven implementation of neural networks within Einstein-Boltzmann Solvers. JCAP 2019, 09, 028. [Google Scholar] [CrossRef]
- Chantada, A.T.; Landau, S.J.; Protopapas, P.; Scóccola, C.G.; Garraffo, C. Cosmology-informed neural networks to solve the background dynamics of the Universe. Phys. Rev. D 2023, 107, 063523. [Google Scholar] [CrossRef]
- Wang, Y.C.; Xie, Y.B.; Zhang, T.J.; Huang, H.C.; Zhang, T.; Liu, K. Likelihood-free Cosmological Constraints with Artificial Neural Networks: An Application on Hubble Parameters and SNe Ia. Astrophys. J. Suppl. 2021, 254, 43. [Google Scholar] [CrossRef]
- Mehrabi, A. A semi-model-independent approach to describe a cosmological database. arXiv 2023, arXiv:2301.07369. [Google Scholar]
- Shah, R.; Bhaumik, A.; Mukherjee, P.; Pal, S. A thorough investigation of the prospects of eLISA in addressing the Hubble tension: Fisher forecast, MCMC and Machine Learning. JCAP 2023, 06, 038. [Google Scholar] [CrossRef]
- Wang, G.J.; Cheng, C.; Ma, Y.Z.; Xia, J.Q.; Abebe, A.; Beesham, A. CoLFI: Cosmological Likelihood-free Inference with Neural Density Estimators. Astrophys. J. Suppl. 2023, 268, 7. [Google Scholar] [CrossRef]
- Chantada, A.T.; Landau, S.J.; Protopapas, P.; Scóccola, C.G.; Garraffo, C. Faster Bayesian inference with neural network bundles and new results for f(R) models. Phys. Rev. D 2024, 109, 123514. [Google Scholar] [CrossRef]
- Wang, G.J.; Li, S.Y.; Xia, J.Q. ECoPANN: A Framework for Estimating Cosmological Parameters using Artificial Neural Networks. Astrophys. J. Suppl. 2020, 249, 25. [Google Scholar] [CrossRef]
- Khurshudyan, M.; Elizalde, E. Constraints on Prospective Deviations from the Cold Dark Matter Model Using a Gaussian Process. Galaxies 2024, 12, 31. [Google Scholar] [CrossRef]
- Chen, J.F.; Wang, Y.C.; Zhang, T.; Zhang, T.J. Test of artificial neural networks in likelihood-free cosmological constraints: A comparison of information maximizing neural networks and denoising autoencoder. Phys. Rev. D 2023, 107, 063517. [Google Scholar] [CrossRef]
- To, C.H.; Rozo, E.; Krause, E.; Wu, H.Y.; Wechsler, R.H.; Salcedo, A.N. LINNA: Likelihood Inference Neural Network Accelerator. JCAP 2023, 01, 016. [Google Scholar] [CrossRef]
- Bolliet, B.; Spurio Mancini, A.; Hill, J.C.; Madhavacheril, M.; Jense, H.T.; Calabrese, E.; Dunkley, J. High-accuracy emulators for observables in ΛCDM, Neff, Σmν, and w cosmologies. Mon. Not. R. Astron. Soc. 2024, 531, 1351–1370. [Google Scholar] [CrossRef]
- Nygaard, A.; Holm, E.B.; Hannestad, S.; Tram, T. CONNECT: A neural network based framework for emulating cosmological observables and cosmological parameter inference. JCAP 2023, 05, 025. [Google Scholar] [CrossRef]
- Nygaard, A.; Holm, E.B.; Hannestad, S.; Tram, T. Fast and effortless computation of profile likelihoods using CONNECT. JCAP 2023, 11, 064. [Google Scholar] [CrossRef]
- Moss, A. Accelerated Bayesian inference using deep learning. Mon. Not. R. Astron. Soc. 2020, 496, 328–338. [Google Scholar] [CrossRef]
- Aslanyan, G.; Easther, R.; Price, L.C. Learn-as-you-go acceleration of cosmological parameter estimates. JCAP 2015, 09, 005. [Google Scholar] [CrossRef]
- Hortua, H.J.; Volpi, R.; Marinelli, D.; Malago, L. Accelerating MCMC algorithms through Bayesian Deep Networks. arXiv 2020, arXiv:2011.14276. [Google Scholar] [CrossRef]
- Hortua, H.J.; Volpi, R.; Marinelli, D.; Malagò, L. Parameter estimation for the cosmic microwave background with Bayesian neural networks. Phys. Rev. D 2020, 102, 103509. [Google Scholar] [CrossRef]
- Elizalde, E.; Khurshudyan, M.; Odintsov, S.D.; Myrzakulov, R. Analysis of the H0 tension problem in the Universe with viscous dark fluid. Phys. Rev. D 2020, 102, 123501. [Google Scholar] [CrossRef]
- Elizalde, E.; Gluza, J.; Khurshudyan, M. An approach to cold dark matter deviation and the H0 tension problem by using machine learning. arXiv 2021, arXiv:2104.01077. [Google Scholar]
- Elizalde, E.; Khurshudyan, M. Constraints on cosmic opacity from Bayesian machine learning: The hidden side of the H0 tension problem. Phys. Dark Univ. 2022, 37, 101114. [Google Scholar] [CrossRef]
- Khurshudyan, M.; Myrzakulov, R. Late time attractors of some varying Chaplygin gas cosmological models. Symmetry 2021, 13, 769. [Google Scholar] [CrossRef]
- Weinberg, S. The Cosmological Constant Problem. Rev. Mod. Phys. 1989, 61, 1–23. [Google Scholar] [CrossRef]
- Newton, O.; Leo, M.; Cautun, M.; Jenkins, A.; Frenk, C.S.; Lovell, M.R.; Helly, J.C.; Benson, A.J.; Cole, S. Constraints on the properties of warm dark matter using the satellite galaxies of the Milky Way. JCAP 2021, 08, 062. [Google Scholar] [CrossRef]
- Rest, A.; Scolnic, D.; Foley, R.J.; Huber, M.E.; Chornock, R.; Narayan, G.; Tonry, J.L.; Berger, E.; Soderberg, A.M.; Stubbs, C.W.; et al. Cosmological Constraints from Measurements of Type Ia Supernovae discovered during the first 1.5 yr of the Pan-STARRS1 Survey. Astrophys. J. 2014, 795, 44. [Google Scholar] [CrossRef]
Source | Search Syntax/String |
---|---|
arXiv | Query: order: -announced_date_first; size: 200; include_cross_list: True; terms: AND abstract=COSMOLOGY OR “DARK ENERGY” OR “COSMOLOGICAL CONSTRAINTS” OR “OBSERVATIONAL CONSTRAINTS”; AND abstract=“MACHINE LEARNING” OR “ARTIFICIAL INTELLIGENCE” OR “DEEP LEARNING” OR “NEURAL NETWORKS” |
ScienceDirect | Title, abstract, keywords: (COSMOLOGY OR “DARK ENERGY” OR “COSMOLOGICAL CONSTRAINTS” OR “OBSERVATIONAL CONSTRAINTS”) AND (“MACHINE LEARNING” OR “ARTIFICIAL INTELLIGENCE” OR “DEEP LEARNING” OR “NEURAL NETWORKS”) |
ACM Digital Library | [[Abstract: cosmology] OR [Abstract: “dark energy”] OR [Abstract: “cosmological constraints”] OR [Abstract: “observational constraints”]] AND [[Abstract: “machine learning”] OR [Abstract: “artificial intelligence”] OR [Abstract: “deep learning”] OR [Abstract: “neural networks”]] |
Scopus | (COSMOLOGY OR “DARK ENERGY” OR “COSMOLOGICAL CONSTRAINTS” OR “OBSERVATIONAL CONSTRAINTS”) AND (“MACHINE LEARNING” OR “ARTIFICIAL INTELLIGENCE” OR “DEEP LEARNING” OR “NEURAL NETWORKS”) |
Inspirehep | t(COSMOLOGY OR “DARK ENERGY” OR “COSMOLOGICAL CONSTRAINTS”) AND t(“MACHINE LEARNING” OR “ARTIFICIAL INTELLIGENCE” OR “DEEP LEARNING” OR “NEURAL NETWORKS”) |
Journal | Number of Papers | Refs |
---|---|---|
Journal of Cosmology and Astroparticle Physics | 6 | [78,82,88,90,91,93] |
Physical Review D | 5 | [79,84,87,95,96] |
Monthly Notices of the Royal Astronomical Society | 5 | [73,76,77,89,92] |
Preprint | 4 | [74,81,94,97] |
The Astrophysical Journal Supplement Series | 3 | [80,83,85] |
Galaxies | 1 | [86] |
Symmetry | 1 | [99] |
The European Physical Journal C | 1 | [75] |
Science Direct | 1 | [98] |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rojas, L.; Espinoza, S.; González, E.; Maldonado, C.; Luo, F. A Systematic Literature Review of Machine Learning Techniques for Observational Constraints in Cosmology. Galaxies 2025, 13, 114. https://doi.org/10.3390/galaxies13050114