Search Results (689)

Search Parameters:
Keywords = Bayesian posterior

23 pages, 3120 KiB  
Article
Bee Swarm Metropolis–Hastings Sampling for Bayesian Inference in the Ginzburg–Landau Equation
by Shucan Xia and Lipu Zhang
Algorithms 2025, 18(8), 476; https://doi.org/10.3390/a18080476 - 2 Aug 2025
Abstract
To improve the sampling efficiency of Markov Chain Monte Carlo in complex parameter spaces, this paper proposes an adaptive sampling method, the BeeSwarm-MH algorithm, that integrates a swarm intelligence mechanism. The method combines global exploration by scout bees with local exploitation by worker bees, employing multi-stage perturbation intensities and adaptive step-size tuning to enable efficient posterior sampling. Focusing on Bayesian inference for parameter estimation in the soliton solutions of the two-dimensional complex Ginzburg–Landau equation, we design a dedicated inference framework to systematically compare the performance of BeeSwarm-MH with the classical Metropolis–Hastings algorithm. Experimental results demonstrate that BeeSwarm-MH achieves comparable estimation accuracy while significantly reducing the number of iterations and total computation time required for convergence. Moreover, it exhibits superior global search capability and adaptivity, offering a practical approach for efficient Bayesian inference in complex physical models.
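
The abstract does not reproduce the full algorithm, but its core idea, mixing large "scout" and small "worker" proposal scales inside a standard Metropolis–Hastings accept/reject loop, can be sketched as follows. The toy bimodal target, the 20% scout fraction, and the step sizes are illustrative assumptions, not the authors' implementation; because the proposal is a state-independent mixture of symmetric kernels, the plain Metropolis acceptance rule remains valid.

```python
import numpy as np

def log_post(theta):
    # Toy bimodal target standing in for the Ginzburg-Landau posterior:
    # an equal mixture of Gaussians centered at +2 and -2 (up to a constant).
    return np.logaddexp(-0.5 * np.sum((theta - 2.0) ** 2),
                        -0.5 * np.sum((theta + 2.0) ** 2))

rng = np.random.default_rng(0)
theta, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for _ in range(5000):
    # "Scout" iterations take large exploratory steps, "worker" iterations
    # take small exploitative ones; the mixture of symmetric proposals is
    # itself symmetric, so the usual acceptance ratio applies.
    step = 2.5 if rng.random() < 0.2 else 0.3   # assumed 20% scout fraction
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis-Hastings accept test
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))  # near 0 if both modes are visited
```
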
18 pages, 2724 KiB  
Article
Uncertainty-Aware Earthquake Forecasting Using a Bayesian Neural Network with Elastic Weight Consolidation
by Changchun Liu, Yuting Li, Huijuan Gao, Lin Feng and Xinqian Wu
Buildings 2025, 15(15), 2718; https://doi.org/10.3390/buildings15152718 - 1 Aug 2025
Abstract
Effective earthquake early warning (EEW) is essential for disaster prevention in the built environment, enabling rapid structural response, system shutdown, and occupant evacuation to mitigate damage and casualties. However, most current EEW systems lack rigorous reliability analyses of their predictive outcomes, limiting their effectiveness in real-world scenarios, especially for on-site warnings, where data are limited and time is critical. To address these challenges, we propose a Bayesian neural network (BNN) framework based on Stein variational gradient descent (SVGD). By performing Bayesian inference, we estimate the posterior distribution of the network parameters and thus provide a reliability analysis of the prediction results. In addition, we incorporate a continual learning mechanism based on elastic weight consolidation, allowing the system to adapt quickly without full retraining. Our experiments demonstrate that the SVGD-BNN model significantly outperforms traditional peak displacement (Pd)-based approaches: in a 3 s time window, the Pearson correlation coefficient R increases by 9.2% and the residual standard deviation SD decreases by 24.4% compared to a variational inference (VI)-based BNN. Furthermore, the prediction variance generated by the model effectively reflects the uncertainty of the prediction results, and the continual learning strategy reduces training time by 133–194 s, enhancing the system's responsiveness. These features make the proposed framework a promising tool for real-time, reliable, and adaptive EEW, supporting disaster-resilient building design and operation.
(This article belongs to the Section Building Structures)
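
For readers unfamiliar with SVGD, the update that transports a set of particles toward the posterior is compact enough to sketch. The standard-normal stand-in target and the step size below are assumptions; the paper's actual posterior comes from its earthquake-warning network's likelihood.

```python
import numpy as np

def grad_log_p(x):
    # Stand-in target: standard normal. The paper's target is the posterior
    # over BNN weights given seismic features; only this gradient would change.
    return -x

def svgd_step(x, eps=0.1):
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]              # pairwise differences x_i - x_j
    d2 = np.sum(diff ** 2, axis=-1)
    h = np.median(d2) / np.log(n + 1) + 1e-8          # median-heuristic bandwidth
    k = np.exp(-d2 / h)                               # RBF kernel matrix
    # Driving term: kernel-smoothed gradients pull particles toward high density.
    attract = k @ grad_log_p(x)
    # Repulsion term: kernel gradients push particles apart, preserving spread.
    repulse = 2.0 / h * (diff * k[:, :, None]).sum(axis=1)
    return x + eps * (attract + repulse) / n

rng = np.random.default_rng(1)
particles = rng.standard_normal((50, 2)) * 3.0 + 5.0  # start far from the target
for _ in range(500):
    particles = svgd_step(particles)
print(particles.mean(axis=0), particles.std(axis=0))  # roughly N(0, 1) per axis
```

The repulsion term is what lets a finite ensemble represent posterior spread, and hence prediction uncertainty, rather than collapsing to a single point estimate.
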
Show Figures

Figure 1

25 pages, 3545 KiB  
Article
Combined Effects of PFAS, Social, and Behavioral Factors on Liver Health
by Akua Marfo and Emmanuel Obeng-Gyasi
Med. Sci. 2025, 13(3), 99; https://doi.org/10.3390/medsci13030099 - 28 Jul 2025
Abstract
Background: Environmental exposures such as per- and polyfluoroalkyl substances (PFAS), in conjunction with social and behavioral factors, can significantly impact liver health. This research investigates the combined effects of PFAS (perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS)), alcohol consumption, smoking, income, and education on liver function in the U.S. population, using data from the 2017–2018 National Health and Nutrition Examination Survey (NHANES). Methods: PFAS concentrations in blood samples were analyzed using online solid-phase extraction combined with liquid chromatography–tandem mass spectrometry (LC-MS/MS), a highly sensitive and specific method for detecting PFAS levels. Liver function was evaluated using biomarkers such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), gamma-glutamyltransferase (GGT), total bilirubin, and the fatty liver index (FLI). Descriptive statistics and multivariable linear regression analyses were employed to assess the associations between exposures and liver outcomes. Bayesian Kernel Machine Regression (BKMR) was utilized to explore the nonlinear and interactive effects of these exposures, and posterior inclusion probabilities (PIPs) were calculated to determine the relative influence of each factor on liver health. Results: Linear regression analyses indicated that income and education were inversely associated with several liver injury biomarkers, while alcohol use and smoking demonstrated stronger and more consistent associations. BKMR further highlighted alcohol and smoking as the most influential predictors, particularly for GGT and total bilirubin, with PIPs close to 1.0. In contrast, PFAS showed weaker associations: regression coefficients were small and largely non-significant, and PIPs were comparatively lower across most liver outcomes. Notably, education had a higher PIP for ALT and GGT than PFAS, suggesting a more protective role in liver health; people with higher education levels tend to live healthier lifestyles, have better access to healthcare, and are generally more aware of health risks, all of which can help reduce the risk of liver problems. Overall mixture effects demonstrated nonlinear trends, including U-shaped relationships for ALT and GGT, and inverse associations for AST, FLI, and ALP. Conclusions: These findings underscore the importance of considering both environmental and social–behavioral determinants of liver health. While PFAS exposures remain a long-term concern, modifiable lifestyle and structural factors, particularly alcohol, smoking, income, and education, exert more immediate and pronounced effects on hepatic biomarkers in the general population.
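
BKMR itself is typically run through its R package, so the following is only a minimal illustration of what a posterior inclusion probability is: the fraction of MCMC draws in which a variable's inclusion indicator is on. The trace below is simulated, with made-up inclusion rates.

```python
import numpy as np

rng = np.random.default_rng(2)
exposures = ["PFOA", "PFOS", "alcohol", "smoking", "income", "education"]
assumed_rates = [0.20, 0.25, 0.95, 0.90, 0.50, 0.60]   # made-up, for illustration

# Fake MCMC trace of binary inclusion indicators delta[s, k]: 1 when exposure k
# is in the kernel at draw s. A real BKMR run would supply this trace.
delta = rng.random((4000, len(exposures))) < np.array(assumed_rates)

pips = delta.mean(axis=0)   # PIP = share of posterior draws including the variable
for name, p in zip(exposures, pips):
    print(f"PIP({name}) = {p:.2f}")
```
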

28 pages, 835 KiB  
Article
Progressive First-Failure Censoring in Reliability Analysis: Inference for a New Weibull–Pareto Distribution
by Rashad M. EL-Sagheer and Mahmoud M. Ramadan
Mathematics 2025, 13(15), 2377; https://doi.org/10.3390/math13152377 - 24 Jul 2025
Abstract
This paper explores statistical techniques for estimating unknown lifetime parameters using data from a progressive first-failure censoring scheme. The failure times are modeled with a new Weibull–Pareto distribution. Maximum likelihood estimators are derived for the model parameters, as well as for the survival and hazard rate functions, although these estimators do not have explicit closed-form solutions. The Newton–Raphson algorithm is employed for the numerical computation of these estimates. Confidence intervals for the parameters are approximated based on the asymptotic normality of the maximum likelihood estimators. The Fisher information matrix is calculated using the missing information principle, and the delta technique is applied to approximate confidence intervals for the survival and hazard rate functions. Bayesian estimators are developed under squared error, linear exponential, and general entropy loss functions, assuming independent gamma priors. Markov chain Monte Carlo sampling is used to obtain Bayesian point estimates and the highest posterior density credible intervals for the parameters and reliability measures. Finally, the proposed methods are demonstrated through the analysis of a real dataset.
(This article belongs to the Section D1: Probability and Statistics)
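
A highest posterior density (HPD) credible interval of the kind reported here can be read off MCMC draws as the shortest interval containing the desired posterior mass. A minimal sketch, with a toy right-skewed posterior standing in for the paper's Weibull–Pareto parameters:

```python
import numpy as np

def hpd_interval(draws, cred=0.95):
    # Shortest interval containing a `cred` fraction of the sorted draws.
    x = np.sort(np.asarray(draws))
    n = len(x)
    m = int(np.ceil(cred * n))
    widths = x[m - 1:] - x[:n - m + 1]   # widths of every interval holding m draws
    i = int(np.argmin(widths))
    return x[i], x[i + m - 1]

# Toy right-skewed posterior, e.g. for a hazard-rate-like parameter.
draws = np.random.default_rng(3).gamma(shape=2.0, scale=1.5, size=20000)
print(hpd_interval(draws))   # shorter than the equal-tailed interval when skewed
```
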

35 pages, 11039 KiB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Abstract
A novel unified progressive hybrid censoring scheme is introduced that combines progressive and hybrid censoring plans, allowing flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced under such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference with informative prior information. Two real-world applications to rare minerals, drawn from gold and diamond durability studies, demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science and their practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Finally, by applying four different optimality criteria to multiple competing plans, the progressive censoring strategies that yield the best performance are identified.
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
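
The delta method step mentioned above propagates the MLE covariance (the inverse Fisher information) through a function such as the survival or hazard rate. A generic sketch with a numerical gradient; the plain-Weibull survival function, MLEs, and covariance below are placeholders rather than the paper's inverted Nadarajah–Haghighi quantities:

```python
import numpy as np

def delta_method_ci(theta_hat, cov, g, eps=1e-6, z=1.96):
    # Var[g(theta_hat)] ~ grad(g)' Cov grad(g), with a central-difference gradient.
    theta_hat = np.asarray(theta_hat, dtype=float)
    grad = np.array([(g(theta_hat + eps * e) - g(theta_hat - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta_hat))])
    se = float(np.sqrt(grad @ cov @ grad))
    val = g(theta_hat)
    return val - z * se, val + z * se

# Placeholder: Weibull survival S(t) = exp(-(t/b)^a) at t = 2, with assumed MLEs
# (a, b) and an assumed inverse-Fisher covariance matrix.
survival = lambda th: np.exp(-(2.0 / th[1]) ** th[0])
cov = np.array([[0.02, 0.001],
                [0.001, 0.05]])
print(delta_method_ci([1.4, 3.0], cov, survival))
```
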

11 pages, 961 KiB  
Article
Viscous Cosmology in f(Q, L_m) Gravity: Insights from CC, BAO, and GRB Data
by Dheeraj Singh Rana, Sai Swagat Mishra, Aaqid Bhat and Pradyumn Kumar Sahoo
Universe 2025, 11(8), 242; https://doi.org/10.3390/universe11080242 - 23 Jul 2025
Abstract
In this article, we investigate the influence of viscosity on the evolution of the cosmos within the framework of the newly proposed f(Q, L_m) gravity. We consider a linear functional form f(Q, L_m) = αQ + βL_m with a bulk viscous coefficient ζ = ζ_0 + ζ_1 H and obtain exact solutions to the field equations associated with a flat FLRW metric. In addition, we utilize Cosmic Chronometer (CC), CC + BAO, CC + BAO + GRB, and GRB data samples to constrain the independent parameters in the derived exact solution, combining the likelihood function with Markov Chain Monte Carlo (MCMC) sampling to obtain the posterior probability by Bayesian statistical methods. Furthermore, by comparing our results with the standard cosmological model, we find that the considered model supports late-time acceleration of the universe.
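
The constraining step amounts to defining a log-posterior over the model parameters given H(z)-type data and handing it to an MCMC sampler. A sketch under stated assumptions: the data points are illustrative values of roughly the right magnitude, and a flat-ΛCDM expansion rate stands in for the paper's viscous f(Q, L_m) solution. An ensemble sampler such as the one sketched under the next entry could then draw from this posterior.

```python
import numpy as np

# Illustrative H(z) measurements of roughly cosmic-chronometer magnitude
# (z, H in km/s/Mpc, sigma); not the paper's compiled datasets.
z_obs = np.array([0.17, 0.40, 0.90, 1.30, 1.75])
H_obs = np.array([83.0, 95.0, 117.0, 168.0, 202.0])
sig = np.array([8.0, 17.0, 23.0, 17.0, 40.0])

def H_model(z, H0, Om):
    # Flat-LCDM stand-in; the paper's exact viscous f(Q, L_m) solution
    # would replace this expansion history.
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def log_posterior(params):
    H0, Om = params
    if not (50.0 < H0 < 100.0 and 0.0 < Om < 1.0):   # flat priors via bounds
        return -np.inf
    chi2 = np.sum(((H_obs - H_model(z_obs, H0, Om)) / sig) ** 2)
    return -0.5 * chi2                                # Gaussian likelihood, up to a const.

print(log_posterior(np.array([67.0, 0.3])))
```
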

18 pages, 774 KiB  
Article
Bayesian Inertia Estimation via Parallel MCMC Hammer in Power Systems
by Weidong Zhong, Chun Li, Minghua Chu, Yuanhong Che, Shuyang Zhou, Zhi Wu and Kai Liu
Energies 2025, 18(15), 3905; https://doi.org/10.3390/en18153905 - 22 Jul 2025
Abstract
The stability of modern power systems has become critically dependent on precise inertia estimation of synchronous generators, particularly as renewable energy integration fundamentally transforms grid dynamics. Increasing penetration of converter-interfaced renewable resources reduces system inertia, heightening the grid's susceptibility to transient disturbances and creating significant technical challenges in maintaining operational reliability. This paper addresses these challenges through a novel Bayesian inference framework that integrates PMU data with an advanced MCMC sampling technique, the Affine-Invariant Ensemble Sampler. The proposed methodology establishes a probabilistic estimation paradigm that systematically combines prior engineering knowledge with real-time measurements, while the ensemble sampler overcomes high-dimensional computational barriers through its exploration strategy of stretch moves and parallel walker coordination. The framework's ability to provide full posterior distributions of inertia parameters, rather than single-point estimates, supports stability assessment in renewable-dominated grids. Simulation results on the IEEE 39-bus and 68-bus benchmark systems validate the effectiveness and scalability of the proposed method, with inertia estimation errors consistently maintained below 1% across all generators. Moreover, the parallelized implementation significantly outperforms the conventional Metropolis–Hastings method in computational efficiency, reducing execution time by approximately 52% in the 39-bus system and 57% in the 68-bus system and demonstrating its suitability for real-time and large-scale power system applications.
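
The Affine-Invariant Ensemble Sampler is the algorithm behind the widely used emcee package (the MCMC "hammer" of the title), so a toy run is easy to show; the two-parameter Gaussian pseudo-posterior below stands in for the paper's PMU-based likelihood.

```python
import numpy as np
import emcee  # reference implementation of the affine-invariant "MCMC hammer"

def log_prob(m):
    # Toy pseudo-posterior over two generator inertia constants (seconds),
    # standing in for the paper's PMU-measurement likelihood.
    return -0.5 * np.sum(((m - np.array([4.2, 3.1])) / 0.05) ** 2)

ndim, nwalkers = 2, 32
rng = np.random.default_rng(4)
p0 = np.array([4.0, 3.0]) + 0.1 * rng.standard_normal((nwalkers, ndim))

# Each walker proposes stretch moves along lines through other walkers, making
# the sampler invariant to affine reparameterizations and easy to parallelize.
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
chain = sampler.get_chain(discard=500, flat=True)
print(chain.mean(axis=0), chain.std(axis=0))   # full posterior, not a point estimate
```
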

26 pages, 2658 KiB  
Article
An Efficient and Accurate Random Forest Node-Splitting Algorithm Based on Dynamic Bayesian Methods
by Jun He, Zhanqi Li and Linzi Yin
Mach. Learn. Knowl. Extr. 2025, 7(3), 70; https://doi.org/10.3390/make7030070 - 21 Jul 2025
Abstract
Random Forests are powerful machine learning models widely applied in classification and regression tasks due to their robust predictive performance. Nevertheless, traditional Random Forests face computational challenges during tree construction, particularly on high-dimensional data or resource-constrained devices. In this paper, a novel node-splitting algorithm, BayesSplit, is proposed to accelerate decision tree construction via a Bayesian impurity-estimation framework. BayesSplit treats impurity reduction as a Bernoulli event with a Beta-conjugate prior for each split point and incorporates two main strategies: Dynamic Posterior Parameter Refinement updates the Beta parameters based on observed impurity reductions in batch iterations, and Posterior-Derived Confidence Bounding establishes statistical confidence intervals that efficiently filter out suboptimal splits. Theoretical analysis demonstrates that BayesSplit converges to optimal splits with high probability, while experimental results show up to a 95% reduction in training time over baselines with generalization performance that matches or exceeds them. Compared to the state-of-the-art MABSplit, BayesSplit achieves similar accuracy on classification tasks and reduces regression training time by 20–70% with lower MSEs. Furthermore, BayesSplit enhances feature importance stability by up to 40%, making it particularly suitable for deployment in computationally constrained environments.
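
A minimal sketch of the bandit-style elimination this describes: each candidate split keeps a Beta posterior over its probability of delivering an impurity reduction in a batch, and splits whose upper posterior bound falls below the best lower bound are dropped. The win rates, batch structure, and 95% bounds below are assumed simplifications of BayesSplit, not its published pseudocode.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
n_splits = 20
win_rate = rng.uniform(0.3, 0.7, n_splits)      # unknown per-split success rates
a = np.ones(n_splits)                           # Beta(1, 1) conjugate priors
b = np.ones(n_splits)
alive = np.ones(n_splits, dtype=bool)

for batch in range(50):
    # One batch of impurity-reduction "wins" per still-alive candidate split.
    wins = rng.random(n_splits) < win_rate
    a[alive] += wins[alive]                     # dynamic posterior refinement
    b[alive] += ~wins[alive]
    lo = beta.ppf(0.025, a, b)                  # posterior-derived confidence bounds
    hi = beta.ppf(0.975, a, b)
    alive &= hi >= lo[alive].max()              # drop splits that cannot catch the leader
    if alive.sum() == 1:
        break
print("surviving splits:", np.flatnonzero(alive),
      "| true best split:", int(win_rate.argmax()))
```
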

13 pages, 272 KiB  
Article
Asymptotic Behavior of the Bayes Estimator of a Regression Curve
by Agustín G. Nogales
Mathematics 2025, 13(14), 2319; https://doi.org/10.3390/math13142319 - 21 Jul 2025
Abstract
In this work, we prove the convergence to 0, in both the L^1 and L^2 senses, of the Bayes estimator of a regression curve (i.e., the conditional expectation of the response variable given the regressor). The strong consistency of the estimator is also derived. The Bayes estimator of a regression curve is the regression curve with respect to the posterior predictive distribution. The result is general enough to cover discrete and continuous cases, parametric or nonparametric, and no specific assumption is made about the prior distribution. Some examples, two of them of a nonparametric nature, are given to illustrate the main result; one of the nonparametric examples exhibits a situation where the estimation of the regression curve has an optimal solution even though the problem of estimating the density is meaningless. An important role in the proof of these results is played by the construction of a probability space that provides an adequate framework for addressing the problem of estimating regression curves from the Bayesian point of view, putting powerful probabilistic tools at our disposal in that endeavor.
(This article belongs to the Section D1: Probability and Statistics)
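
In the abstract's terms, the estimator under study can be written out explicitly; the notation below is ours, and the second equality is the usual heuristic identity for the posterior predictive regression:

```latex
\[
  \hat{m}(x) \;=\; \mathbb{E}\left[\,Y \mid X = x,\; (X_1,Y_1),\dots,(X_n,Y_n)\,\right]
  \;=\; \int \mathbb{E}_{\theta}\left[\,Y \mid X = x\,\right]
        \mathrm{d}\pi\!\left(\theta \mid (X_1,Y_1),\dots,(X_n,Y_n)\right),
\]
```

i.e., the regression curve of the posterior predictive distribution, obtained by averaging the model-conditional regression curves over the posterior.
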
18 pages, 1438 KiB  
Article
Maximum Entropy Estimates of Hubble Constant from Planck Measurements
by David P. Knobles and Mark F. Westling
Entropy 2025, 27(7), 760; https://doi.org/10.3390/e27070760 - 16 Jul 2025
Abstract
A maximum entropy (ME) methodology was used to infer the Hubble constant from the temperature anisotropies in cosmic microwave background (CMB) measurements made by the Planck satellite. A simple cosmological model provided physical insight and afforded robust statistical sampling of a parameter space that included the spectral tilt and amplitude of adiabatic density fluctuations of the early universe and the present-day ratios of dark energy, matter, and baryonic matter density. A statistical temperature was estimated by applying the equipartition theorem, which uniquely specifies a posterior probability distribution. The ME analysis inferred the mean value of the Hubble constant to be about 67 km/s/Mpc with a conservative standard deviation of approximately 4.4 km/s/Mpc. Unlike standard Bayesian analyses that incorporate specific noise models, the ME approach treats the model error generically, thereby producing broader, but less assumption-dependent, uncertainty bounds. The inferred ME value lies within 1σ of both early-universe estimates (Planck, Dark Energy Spectroscopic Instrument (DESI)) and late-universe measurements (e.g., the Chicago-Carnegie Hubble Program (CCHP)) using redshift data collected from the James Webb Space Telescope (JWST). Thus, the ME analysis does not appear to support the existence of the Hubble tension.
(This article belongs to the Special Issue Insight into Entropy)
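
The ME posterior the abstract describes has a Gibbs form: weights proportional to exp(-E/T), with E a data-model misfit and T the statistical temperature fixed by equipartition. A one-parameter grid sketch; the grid, the quadratic stand-in misfit, and T = 1 are illustrative assumptions, not the paper's numbers.

```python
import numpy as np

# Gibbs-form ME posterior over a grid of Hubble-constant values:
# weights ~ exp(-E / T). Grid, misfit, and T are illustrative assumptions.
H0_grid = np.linspace(55.0, 80.0, 501)
E = 0.5 * ((H0_grid - 67.0) / 4.4) ** 2        # stand-in misfit, minimum at 67
T = 1.0                                        # statistical temperature (assumed)

w = np.exp(-(E - E.min()) / T)
w /= w.sum()                                   # normalized posterior on the grid
mean = np.sum(w * H0_grid)
std = np.sqrt(np.sum(w * (H0_grid - mean) ** 2))
print(f"H0 ~ {mean:.1f} +/- {std:.1f} km/s/Mpc")
```
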

31 pages, 8853 KiB  
Article
Atomistic-Based Fatigue Property Normalization Through Maximum A Posteriori Optimization in Additive Manufacturing
by Mustafa Awd, Lobna Saeed and Frank Walther
Materials 2025, 18(14), 3332; https://doi.org/10.3390/ma18143332 - 15 Jul 2025
Abstract
This work presents a multiscale, microstructure-aware framework for predicting fatigue strength distributions in additively manufactured (AM) alloys, specifically laser powder bed fusion (L-PBF) AlSi10Mg and Ti-6Al-4V, by integrating density functional theory (DFT), instrumented indentation, and Bayesian inference. The methodology leverages principles common to all 3D printing (additive manufacturing) processes: layer-wise material deposition, process-induced defect formation (such as porosity and residual stress), and microstructural tailoring through parameter control, which collectively differentiate AM from conventional manufacturing. By linking DFT-derived cohesive energies with indentation-based modulus measurements and a maximum a posteriori (MAP) statistical model, we quantify the effect of AM microstructural heterogeneity on fatigue performance. Quantitative validation demonstrates that the predicted fatigue strength distributions agree with experimental high-cycle and very-high-cycle fatigue (HCF/VHCF) data, with posterior modes and 95% credible intervals of σ̂_f(AlSi10Mg) = 86 (+8/−7) MPa and σ̂_f(Ti-6Al-4V) = 115 (+10/−9) MPa, respectively. The resulting Woehler (S–N) curves and Paris crack-growth parameters envelop more than 92% of the measured coupon data, confirming both accuracy and robustness. Furthermore, global sensitivity analysis reveals that volumetric porosity and residual stress account for over 70% of the fatigue strength variance, highlighting the central role of process–structure relationships unique to AM. The presented framework thus provides a predictive, physically interpretable, and data-efficient pathway for microstructure-informed fatigue design in additively manufactured metals, and is readily extensible to other AM alloys and process variants.
(This article belongs to the Topic Multi-scale Modeling and Optimisation of Materials)
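
Maximum a posteriori (MAP) optimization of the kind named in the title is just minimization of the negative log posterior. A self-contained sketch with made-up coupon data, a lognormal strength model, and an assumed Gaussian prior on the log-scale location (standing in for the paper's DFT/indentation-informed prior):

```python
import numpy as np
from scipy.optimize import minimize

strengths = np.array([82.0, 91.0, 78.0, 88.0, 95.0, 84.0])  # made-up coupons, MPa

def neg_log_post(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)            # optimize log(sigma) so sigma stays positive
    x = np.log(strengths)                # lognormal fatigue-strength model (assumed)
    nll = np.sum(0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma))
    # Gaussian prior on mu, standing in for the DFT/indentation-informed prior.
    nlp = 0.5 * ((mu - np.log(85.0)) / 0.5) ** 2
    return nll + nlp                     # negative log posterior, up to constants

res = minimize(neg_log_post, x0=np.array([np.log(85.0), np.log(0.1)]))
print(f"MAP strength scale ~ {np.exp(res.x[0]):.1f} MPa, "
      f"sigma = {np.exp(res.x[1]):.3f}")
```
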

38 pages, 1738 KiB  
Article
AI-Driven Bayesian Deep Learning for Lung Cancer Prediction: Precision Decision Support in Big Data Health Informatics
by Natalia Amasiadi, Maria Aslani-Gkotzamanidou, Leonidas Theodorakopoulos, Alexandra Theodoropoulou, George A. Krimpas, Christos Merkouris and Aristeidis Karras
BioMedInformatics 2025, 5(3), 39; https://doi.org/10.3390/biomedinformatics5030039 - 9 Jul 2025
Abstract
Lung-cancer incidence is projected to rise by 50% by 2035, underscoring the need for accurate yet accessible risk-stratification tools. We trained a Bayesian neural network on 300 annotated chest-CT scans from the public LIDC–IDRI cohort, integrating clinical metadata. Hamiltonian Monte Carlo sampling (10,000 posterior draws) captured parameter uncertainty; performance was assessed with stratified five-fold cross-validation and on three independent multi-centre cohorts. On the locked internal test set, the model achieved 99.0% accuracy, AUC = 0.990, and macro-F1 = 0.987. External validation across 824 scans yielded a mean AUC of 0.933 and an expected calibration error below 0.034, while eliminating false positives for benign nodules and providing voxel-level uncertainty maps. Uncertainty-aware Bayesian deep learning thus delivers state-of-the-art, well-calibrated lung-cancer risk predictions from a single CT scan, supporting personalised screening intervals and safe deployment in clinical workflows.
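
Hamiltonian Monte Carlo, used here for the posterior draws, is easy to state in full: simulate Hamiltonian dynamics with a leapfrog integrator, then apply a Metropolis correction. A minimal numpy version on a toy two-dimensional Gaussian; a real BNN posterior would swap in the network's log-likelihood gradient.

```python
import numpy as np

def hmc(log_p, grad_log_p, x0, n_draws=2000, eps=0.1, n_leap=20, seed=6):
    # Leapfrog-integrate Hamiltonian dynamics, then Metropolis-correct the
    # discretization error; each trajectory yields one posterior draw.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    draws = []
    for _ in range(n_draws):
        p = rng.standard_normal(x.shape)             # fresh momentum each draw
        xn, pn = x.copy(), p.copy()
        pn += 0.5 * eps * grad_log_p(xn)             # half momentum step
        for _ in range(n_leap):
            xn += eps * pn                           # full position step
            pn += eps * grad_log_p(xn)               # full momentum step
        pn -= 0.5 * eps * grad_log_p(xn)             # trim back to a half step
        dH = (log_p(xn) - 0.5 * pn @ pn) - (log_p(x) - 0.5 * p @ p)
        if np.log(rng.random()) < dH:                # Metropolis correction
            x = xn
        draws.append(x.copy())
    return np.array(draws)

draws = hmc(lambda x: -0.5 * x @ x, lambda x: -x, np.ones(2))
print(draws.mean(axis=0), draws.std(axis=0))         # approximately N(0, I)
```
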

18 pages, 359 KiB  
Article
On the Decision-Theoretic Foundations and the Asymptotic Bayes Risk of the Region of Practical Equivalence for Testing Interval Hypotheses
by Riko Kelter
Stats 2025, 8(3), 56; https://doi.org/10.3390/stats8030056 - 8 Jul 2025
Abstract
Testing interval hypotheses is of great relevance in the biomedical and cognitive sciences, for example, in clinical trials. Frequentist approaches include equivalence tests, which assess whether a treatment effect lies within a predetermined margin of practical equivalence. In the Bayesian paradigm, two popular approaches exist. The first is the region of practical equivalence (ROPE), which has become increasingly popular in the cognitive sciences. The second is the Bayes factor for interval null hypotheses, proposed by Morey et al. One advantage of the ROPE procedure is that, in contrast to the Bayes factor, it is quite robust to the prior specification. However, while the ROPE is conceptually appealing, it lacks the clear decision-theoretic foundation that the Bayes factor enjoys. In this paper, a decision-theoretic justification for the ROPE procedure is derived for the first time: by introducing a specific loss function, it is shown that the Bayes risk of a decision rule based on the highest posterior density (HPD) interval and the ROPE is asymptotically minimized as the sample size increases. This result provides an important decision-theoretic justification for testing interval hypotheses in the Bayesian approach based on the ROPE and HPD, in particular when the sample size is large.
(This article belongs to the Section Bayesian Methods)
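
The decision rule being justified is mechanical once posterior draws are available: compute the HPD interval and compare it with the ROPE. A sketch, with an assumed ROPE of (-0.1, 0.1) on a standardized effect scale:

```python
import numpy as np

def hpd(draws, cred=0.95):
    # Shortest interval containing `cred` of the sorted posterior draws.
    x = np.sort(np.asarray(draws))
    n, m = len(x), int(np.ceil(cred * len(x)))
    i = int(np.argmin(x[m - 1:] - x[:n - m + 1]))
    return x[i], x[i + m - 1]

def rope_decision(draws, rope=(-0.1, 0.1), cred=0.95):
    lo, hi = hpd(draws, cred)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept interval null (HPD inside ROPE)"
    if hi < rope[0] or lo > rope[1]:
        return "reject interval null (HPD outside ROPE)"
    return "undecided (HPD straddles the ROPE boundary)"

# Toy posterior draws for a standardized effect concentrated near zero.
draws = np.random.default_rng(7).normal(0.03, 0.02, 10000)
print(rope_decision(draws))   # HPD ~ [-0.01, 0.07] lies inside (-0.1, 0.1)
```
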

18 pages, 487 KiB  
Article
Variational Bayesian Variable Selection in Logistic Regression Based on Spike-and-Slab Lasso
by Juanjuan Zhang, Weixian Wang, Mingming Yang and Maozai Tian
Mathematics 2025, 13(13), 2205; https://doi.org/10.3390/math13132205 - 6 Jul 2025
Abstract
Logistic regression is often used to solve classification problems. This article combines the advantages of Bayesian methods and the spike-and-slab Lasso to select variables in high-dimensional logistic regression. Because the logistic likelihood admits no conjugate prior, we either introduce a new latent variable or approximate the lower bound of the likelihood. The Laplace distribution in the spike-and-slab Lasso is expressed hierarchically as a normal scale mixture with an exponential mixing distribution, so that all parameters in the model have tractable posterior distributions. Given the high time cost of parameter estimation and variable selection in high-dimensional models, we use a variational Bayesian algorithm for posterior inference on the model parameters. Simulation results show that the proposed adaptive prior performs parameter estimation and variable selection well in high-dimensional logistic regression, and in terms of running time the method is also computationally efficient in many cases.
(This article belongs to the Section D: Statistics and Operational Research)
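
The "approximating the lower bound" route is classically done with the Jaakkola–Jordan quadratic bound on the logistic likelihood, shown below for reference; the paper may use this or the latent-variable augmentation it also mentions, so treat the choice as an assumption.

```latex
\[
  \log \sigma(t) \;\ge\; \log \sigma(\xi) + \frac{t - \xi}{2}
  - \lambda(\xi)\left(t^{2} - \xi^{2}\right),
  \qquad
  \lambda(\xi) = \frac{1}{4\xi}\tanh\!\left(\frac{\xi}{2}\right),
  \quad \xi > 0.
\]
```

Since the bound is quadratic in t = x'β, Gaussian (and hence spike-and-slab) priors on β remain conjugate within the variational updates, which is exactly what restores tractability.
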

30 pages, 3032 KiB  
Article
A Bayesian Additive Regression Trees Framework for Individualized Causal Effect Estimation
by Lulu He, Lixia Cao, Tonghui Wang, Zhenqi Cao and Xin Shi
Mathematics 2025, 13(13), 2195; https://doi.org/10.3390/math13132195 - 4 Jul 2025
Abstract
In causal inference research, accurate estimation of individualized treatment effects (ITEs) is at the core of effective intervention. This paper proposes a dual-structure ITE-estimation model based on Bayesian Additive Regression Trees (BART), which constructs independent BART sub-models for the treatment and control groups, estimates ITEs under the potential outcomes framework, and enhances posterior stability and estimation reliability through Markov Chain Monte Carlo (MCMC) sampling. Based on psychological stress questionnaire data from graduate students, the study first integrates BART with the Shapley value method to identify employment pressure as a key driving factor and reveals substantial heterogeneity in ITEs across subgroups. The study then constructs an ITE model using the dual-structure BART framework (BART-ITE), with employment pressure defined as the treatment variable. Experimental results show that the model performs well in terms of credible-interval width and ranking ability, demonstrating superior heterogeneity detection and individual-level sorting. External validation using both the bootstrap method and matching-based pseudo-ITE estimation confirms the robustness of the proposed model. Compared with mainstream meta-learning methods such as the S-Learner, X-Learner, and Bayesian Causal Forest, the dual-structure BART-ITE model achieves a favorable balance between root mean square error and bias, offering clear advantages in capturing ITE heterogeneity and enhancing estimation reliability and individualized decision-making.
(This article belongs to the Special Issue Bayesian Learning and Its Advanced Applications)
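
Structurally, the dual-model estimator is what the causal-inference literature calls a T-learner: fit independent outcome models on treated and control units and difference their predictions. A sketch on simulated data, with RandomForestRegressor as an off-the-shelf stand-in for the paper's MCMC-fitted BART sub-models:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # stand-in for BART sub-models

rng = np.random.default_rng(8)
n = 2000
X = rng.standard_normal((n, 5))                 # covariates (e.g., stress items)
T = rng.random(n) < 0.5                         # treatment: high employment pressure
tau = 1.0 + 0.5 * X[:, 0]                       # heterogeneous true effect (simulated)
y = X[:, 1] + tau * T + rng.normal(0.0, 0.5, n) # observed outcomes

m1 = RandomForestRegressor(random_state=0).fit(X[T], y[T])    # treated-group model
m0 = RandomForestRegressor(random_state=0).fit(X[~T], y[~T])  # control-group model
ite = m1.predict(X) - m0.predict(X)             # predicted Y(1) - Y(0) per individual

print(f"mean ITE: {ite.mean():.2f} | corr with true effect: "
      f"{np.corrcoef(ite, tau)[0, 1]:.2f}")
```

With posterior draws from actual BART sub-models, the same difference yields a full posterior over each individual's effect, which is where the paper's credible-interval widths come from.
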
