Search Results (167)

Search Parameters:
Keywords = predicted posterior distributions

16 pages, 882 KiB  
Article
MatBYIB: A MATLAB-Based Toolkit for Parameter Estimation of Eccentric Gravitational Waves from EMRIs
by Genliang Li, Shujie Zhao, Huaike Guo, Jingyu Su and Zhenheng Lin
Universe 2025, 11(8), 259; https://doi.org/10.3390/universe11080259 - 6 Aug 2025
Abstract
Accurate parameter estimation is essential for gravitational wave data analysis. In extreme mass-ratio inspiral (EMRI) binary systems, orbital eccentricity is a critical parameter. However, current software for gravitational wave parameter estimation often neglects the direct estimation of orbital eccentricity. To fill this gap, we have developed MatBYIB, a MATLAB-based software package (Version 1.0) for gravitational wave parameter estimation with arbitrary eccentricity. MatBYIB employs the Analytical Kludge waveform as a computationally efficient signal generator and computes parameter uncertainties via the Fisher Information Matrix and Markov Chain Monte Carlo (MCMC) sampling. For Bayesian inference, we implement the Metropolis–Hastings algorithm to derive posterior distributions. To guarantee convergence, the Gelman–Rubin criterion (the potential scale reduction factor R̂) is used to determine sampling adequacy, with MatBYIB dynamically increasing the sample size until R̂ < 1.05 for all parameters. Our results demonstrate strong agreement between predictions based on the Fisher Information Matrix and full MCMC sampling. The program is user-friendly and allows estimation of gravitational wave parameters with arbitrary eccentricity on standard personal computers. Full article
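The Gelman–Rubin check described in the abstract has a compact form; a minimal sketch of the classic R̂ over multiple chains (a generic illustration, not MatBYIB's actual implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of length n.

    chains: array of shape (m, n) holding MCMC samples of one parameter.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
# Four well-mixed chains sampling the same target: R-hat is close to 1
mixed = rng.normal(0.0, 1.0, size=(4, 5000))
print(gelman_rubin(mixed) < 1.05)  # True for converged chains
```

A sampler would keep drawing until this value drops below the chosen threshold (1.05 in the abstract) for every parameter.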

24 pages, 3291 KiB  
Article
Machine Learning Subjective Opinions: An Application in Forensic Chemistry
by Anuradha Akmeemana and Michael E. Sigman
Algorithms 2025, 18(8), 482; https://doi.org/10.3390/a18080482 - 4 Aug 2025
Abstract
Simulated data created in silico using a previously reported method were sampled by bootstrapping to generate data sets for training multiple copies of an ensemble learner (i.e., a machine learning (ML) method). The posterior probabilities of class membership obtained by applying the ensemble of ML models to previously unseen validation data were fitted to a beta distribution. The shape parameters of the fitted distribution were used to calculate the subjective opinion of sample membership in one of two mutually exclusive classes. A subjective opinion consists of belief, disbelief, and uncertainty masses; computing one for each validation sample allows identification of high-uncertainty predictions. The projected probabilities of the validation opinions were used to calculate log-likelihood ratio scores and generate receiver operating characteristic (ROC) curves from which an opinion-supported decision can be made. Three very different ML models, linear discriminant analysis (LDA), random forest (RF), and support vector machine (SVM), were applied to the two-state classification problem in the analysis of forensic fire debris samples. For each ML method, a set of 100 ML models was trained on data sets bootstrapped from 60,000 in silico samples. The impact of training data set size on opinion uncertainty and ROC area under the curve (AUC) was studied. The median uncertainty for the validation data was smallest for LDA and largest for SVM, and it decreased continually as the size of the training data set increased for all ML methods. The AUC for ROC curves based on projected probabilities was largest for the RF model and smallest for the LDA method. The ROC AUC was statistically unchanged for LDA at training data sets exceeding 200 samples; however, the AUC increased with increasing sample size for the RF and SVM methods. The SVM method, the slowest to train, was limited to a maximum of 20,000 training samples.
All three ML methods showed increasing performance when the validation data were limited to higher ignitable liquid contributions. An ensemble of 100 RF models, each trained on 60,000 in silico samples, performed best, with a median uncertainty of 1.39 × 10⁻² and an ROC AUC of 0.849 for all validation samples. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))
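The beta-to-opinion step can be sketched as follows, assuming the standard subjective-logic mapping with prior weight W = 2 (the paper's exact parameterization may differ):

```python
def opinion_from_beta(alpha, beta, base_rate=0.5, W=2.0):
    """Belief, disbelief, and uncertainty masses from Beta(alpha, beta).

    Inverts the standard subjective-logic correspondence
    alpha = r + W * a, beta = s + W * (1 - a),
    where r and s are positive and negative evidence counts.
    """
    r = alpha - W * base_rate           # positive evidence
    s = beta - W * (1.0 - base_rate)    # negative evidence
    total = r + s + W
    belief, disbelief, uncertainty = r / total, s / total, W / total
    return belief, disbelief, uncertainty

# Example: a validation sample whose fitted beta leans toward class 1
b, d, u = opinion_from_beta(alpha=9.0, beta=3.0)
print(round(b, 3), round(d, 3), round(u, 3))  # masses sum to 1
```

A large uncertainty mass u flags exactly the high-uncertainty predictions the abstract describes.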

18 pages, 2724 KiB  
Article
Uncertainty-Aware Earthquake Forecasting Using a Bayesian Neural Network with Elastic Weight Consolidation
by Changchun Liu, Yuting Li, Huijuan Gao, Lin Feng and Xinqian Wu
Buildings 2025, 15(15), 2718; https://doi.org/10.3390/buildings15152718 - 1 Aug 2025
Abstract
Effective earthquake early warning (EEW) is essential for disaster prevention in the built environment, enabling rapid structural response, system shutdown, and occupant evacuation to mitigate damage and casualties. However, most current EEW systems lack rigorous reliability analyses of their predictive outcomes, limiting their effectiveness in real-world scenarios—especially for on-site warnings, where data are limited and time is critical. To address these challenges, we propose a Bayesian neural network (BNN) framework based on Stein variational gradient descent (SVGD). By performing Bayesian inference, we estimate the posterior distribution of the network parameters and can thus attach a reliability analysis to each prediction. In addition, we incorporate a continual learning mechanism based on elastic weight consolidation, allowing the system to adapt quickly without full retraining. Our experiments demonstrate that the SVGD-BNN model significantly outperforms traditional peak displacement (Pd)-based approaches: in a 3 s time window, the Pearson correlation coefficient R increases by 9.2% and the residual standard deviation SD decreases by 24.4% compared to a variational inference (VI)-based BNN. Furthermore, the prediction variance generated by the model effectively reflects the uncertainty of the prediction results. The continual learning strategy reduces training time by 133–194 s, enhancing the system's responsiveness. These features make the proposed framework a promising tool for real-time, reliable, and adaptive EEW—supporting disaster-resilient building design and operation. Full article
(This article belongs to the Section Building Structures)
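SVGD itself is a simple particle update; a toy sketch on a 1-D Gaussian posterior (illustrative only; the paper applies SVGD to the weights of a neural network):

```python
import numpy as np

def svgd_step(particles, grad_logp, h=0.5, step=0.1):
    """One Stein variational gradient descent update with an RBF kernel."""
    x = particles[:, None]
    diff = x - x.T                         # diff[i, j] = x_i - x_j
    K = np.exp(-diff**2 / (2 * h**2))      # RBF kernel matrix
    gradK = diff / h**2 * K                # d/dx_j K(x_j, x_i) entries
    # phi(x_i) = mean_j [ K(x_j, x_i) grad log p(x_j) + d/dx_j K(x_j, x_i) ]
    phi = (K @ grad_logp(particles) + gradK.sum(axis=1)) / len(particles)
    return particles + step * phi

# Toy posterior: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(1)
particles = rng.normal(5.0, 1.0, size=50)  # initialized far from the target
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print(abs(particles.mean()) < 1.0)  # particles settle around the posterior mean 0
```

The kernel term pulls particles toward high-probability regions while the repulsive gradient term keeps them spread out, which is what yields a posterior (and hence a prediction variance) rather than a point estimate.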

13 pages, 272 KiB  
Article
Asymptotic Behavior of the Bayes Estimator of a Regression Curve
by Agustín G. Nogales
Mathematics 2025, 13(14), 2319; https://doi.org/10.3390/math13142319 - 21 Jul 2025
Abstract
In this work, we prove the convergence to 0, in both L¹ and L², of the Bayes estimator of a regression curve (i.e., the conditional expectation of the response variable given the regressor). The strong consistency of the estimator is also derived. The Bayes estimator of a regression curve is the regression curve with respect to the posterior predictive distribution. The result is general enough to cover discrete and continuous cases, parametric or nonparametric, and no specific assumption is made about the prior distribution. Some examples, two of them of a nonparametric nature, are given to illustrate the main result; one of the nonparametric examples exhibits a situation where the estimation of the regression curve has an optimal solution although the problem of estimating the density is meaningless. An important role in the proof of these results is played by the construction of a probability space that provides an adequate framework for addressing the problem of estimating regression curves from the Bayesian point of view, putting powerful probabilistic tools at our disposal in that endeavor. Full article
(This article belongs to the Section D1: Probability and Statistics)
31 pages, 8853 KiB  
Article
Atomistic-Based Fatigue Property Normalization Through Maximum A Posteriori Optimization in Additive Manufacturing
by Mustafa Awd, Lobna Saeed and Frank Walther
Materials 2025, 18(14), 3332; https://doi.org/10.3390/ma18143332 - 15 Jul 2025
Abstract
This work presents a multiscale, microstructure-aware framework for predicting fatigue strength distributions in additively manufactured (AM) alloys—specifically, laser powder bed fusion (L-PBF) AlSi10Mg and Ti-6Al-4V—by integrating density functional theory (DFT), instrumented indentation, and Bayesian inference. The methodology leverages principles common to all 3D printing (additive manufacturing) processes: layer-wise material deposition, process-induced defect formation (such as porosity and residual stress), and microstructural tailoring through parameter control, which collectively differentiate AM from conventional manufacturing. By linking DFT-derived cohesive energies with indentation-based modulus measurements and a maximum a posteriori (MAP) statistical model, we quantify the effect of additively manufactured microstructural heterogeneity on fatigue performance. Quantitative validation demonstrates that the predicted fatigue strength distributions agree with experimental high-cycle and very-high-cycle fatigue (HCF/VHCF) data, with posterior modes and 95% credible intervals of σ̂_f = 86 (−7/+8) MPa for AlSi10Mg and σ̂_f = 115 (−9/+10) MPa for Ti-6Al-4V. The resulting Woehler (S–N) curves and Paris crack-growth parameters envelop more than 92% of the measured coupon data, confirming both accuracy and robustness. Furthermore, global sensitivity analysis reveals that volumetric porosity and residual stress account for over 70% of the fatigue strength variance, highlighting the central role of process–structure relationships unique to AM. The presented framework thus provides a predictive, physically interpretable, and data-efficient pathway for microstructure-informed fatigue design in additively manufactured metals, and is readily extensible to other AM alloys and process variants. Full article
(This article belongs to the Topic Multi-scale Modeling and Optimisation of Materials)
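The MAP step can be illustrated on a toy conjugate problem (hypothetical numbers, not the paper's fatigue model), where a grid search over the negative log posterior agrees with the closed-form normal-normal posterior mean:

```python
import numpy as np

# Hypothetical noisy strength measurements (MPa); not the paper's data
rng = np.random.default_rng(2)
data = rng.normal(86.0, 5.0, size=30)
noise_sd = 5.0
prior_mean, prior_sd = 100.0, 20.0   # assumed normal prior on the strength

def neg_log_posterior(theta):
    log_prior = -0.5 * ((theta - prior_mean) / prior_sd) ** 2
    log_lik = -0.5 * np.sum(((data - theta) / noise_sd) ** 2)
    return -(log_prior + log_lik)

# MAP by dense grid search over a plausible range
grid = np.linspace(50.0, 150.0, 20001)
theta_map = grid[np.argmin([neg_log_posterior(t) for t in grid])]

# Conjugacy check: the normal-normal posterior mean equals the MAP here
w = len(data) / noise_sd**2
closed_form = (w * data.mean() + prior_mean / prior_sd**2) / (w + 1.0 / prior_sd**2)
print(abs(theta_map - closed_form) < 0.01)  # grid search matches closed form
```

For non-conjugate models like the paper's, the same negative log posterior would instead be handed to a numerical optimizer.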

24 pages, 2253 KiB  
Article
Modeling Spatial Data with Heteroscedasticity Using PLVCSAR Model: A Bayesian Quantile Regression Approach
by Rongshang Chen and Zhiyong Chen
Entropy 2025, 27(7), 715; https://doi.org/10.3390/e27070715 - 1 Jul 2025
Abstract
Spatial data not only enable smart cities to visualize, analyze, and interpret location- and space-related information, but also help departments make more informed decisions. We apply Bayesian quantile regression (BQR) to the partially linear varying coefficient spatial autoregressive (PLVCSAR) model for spatial data to improve predictive performance; the model captures both linear and nonlinear effects of the covariates on the response at different quantiles. Approximating the nonparametric functions with free-knot splines, we develop a Bayesian sampling approach implemented via Markov chain Monte Carlo (MCMC) and design an efficient Metropolis–Hastings-within-Gibbs sampling algorithm to explore the joint posterior distributions. Computational efficiency is achieved through a modified reversible-jump MCMC algorithm incorporating adaptive movement steps to accelerate chain convergence. The simulation results demonstrate that our estimator is robust to alternative spatial weight matrices and outperforms both quantile regression (QR) and instrumental variable quantile regression (IVQR) in finite samples at different quantiles. The effectiveness of the proposed model and estimation method is demonstrated using real data on Boston median house prices. Full article
(This article belongs to the Special Issue Bayesian Hierarchical Models with Applications)

22 pages, 327 KiB  
Article
Bayesian Analysis of the Doubly Truncated Zubair-Weibull Distribution: Parameter Estimation, Reliability, Hazard Rate and Prediction
by Zakiah I. Kalantan, Mai A. Hegazy, Abeer A. EL-Helbawy, Hebatalla H. Mohammad, Doaa S. A. Soliman, Gannat R. AL-Dayian and Mervat K. Abd Elaal
Axioms 2025, 14(7), 502; https://doi.org/10.3390/axioms14070502 - 26 Jun 2025
Abstract
This paper discusses Bayesian estimation of the unknown parameters, reliability function, and hazard rate function of the doubly truncated Zubair-Weibull distribution. Informative gamma priors for the parameters are used to obtain the posterior distributions. The Bayes estimators are derived under the squared-error and linear-exponential (LINEX) loss functions, and credible intervals for the parameters, reliability, and hazard rate functions are obtained. Bayesian prediction (point and interval) of a future observation is considered under the two-sample prediction scheme. A simulation study based on a Markov chain Monte Carlo algorithm is performed for different sample sizes to assess the performance of the estimators. Two real datasets are analyzed to show the flexibility and applicability of the distribution. Full article
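Given MCMC draws, the Bayes estimators under the two loss functions reduce to simple sample summaries; a generic sketch (with a mock gamma posterior, not the Zubair-Weibull posterior):

```python
import numpy as np

def bayes_estimates(posterior_draws, a=1.0):
    """Bayes estimators from MCMC draws.

    Squared-error loss -> posterior mean.
    LINEX loss L(d, t) = exp(a(d - t)) - a(d - t) - 1
                       -> -(1/a) * log E[exp(-a * theta)].
    """
    draws = np.asarray(posterior_draws, dtype=float)
    se_est = draws.mean()
    linex_est = -np.log(np.mean(np.exp(-a * draws))) / a
    return se_est, linex_est

rng = np.random.default_rng(3)
draws = rng.gamma(shape=4.0, scale=0.5, size=100_000)  # mock posterior draws
se_est, linex_est = bayes_estimates(draws, a=1.0)
print(se_est > linex_est)  # LINEX with a > 0 penalizes overestimation
```

By Jensen's inequality the LINEX estimate with a > 0 always sits below the posterior mean for a non-degenerate posterior, which is the intended asymmetry of that loss.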
12 pages, 1930 KiB  
Article
Histological Analysis of Intracranial Cerebral Arteries for Elastin Thickness, Wall Thickness, and Vessel Diameters: An Atlas for Computational Modeling and a Proposed Predictive Multivariable Model of Elastin Thickness
by Nishanth Thiyagarajah, Alex Witek, Mark Davison, Robert Butler, Ahmet Erdemir, John Tsiang, Mohammed Shazam Hussain, Richard Prayson, Mark Bain and Nina Z. Moore
J. Clin. Med. 2025, 14(12), 4320; https://doi.org/10.3390/jcm14124320 - 17 Jun 2025
Abstract
Background/Objectives: Fluid dynamic models of the cerebral vasculature are being developed to evaluate intracranial vascular pathology. Fluid–structure interaction modeling provides an opportunity for more accurate simulation of vascular pathology by modeling the vessel wall itself in conjunction with the fluid forces. The accuracy of these models is heavily dependent on the parameters used; among those studied, elastin has been considered a key component in aortic and common carotid artery modeling. We studied elastin thickness to determine whether there is significant variation between cerebral artery territories, which would suggest its importance in cerebral blood vessel biomechanical response, and to provide reference data for modeling intracranial elastin. Elastin thickness was compared to vessel location, wall thickness, diameter, and laterality within human intracranial arteries. Methods: Tissue was taken bilaterally from each intracranial vessel distribution of five human cadaveric heads preserved in formaldehyde and stained with Van Gieson stain for elastin. A total of 160 normal cerebral artery specimens were obtained from 17 different cerebrovascular regions. Two reviewers measured elastin thickness for each sample at five different locations per sample using Aperio ImageScope (Leica Biosystems, Deer Park, IL, USA). Statistical analysis of the samples was performed using mixed-model repeated-measures regression methods. Results: There was a significant difference between anterior circulation (6.01 µm) and posterior circulation (4.4 µm) vessel elastin thickness (p < 0.05). Additionally, two predictive models of elastin thickness were presented, which demonstrated significant predictive value when anterior versus posterior location was combined with vessel diameter and vessel wall thickness.
Conclusions: Elastin thicknesses are significantly different between anterior and posterior circulation vessels, which may explain the differences seen in aneurysm rupture risk for anterior versus posterior circulation aneurysms. Additionally, we propose two potential models for predicting elastin thickness based on vessel location, vessel diameter, and vessel wall thickness, all of which can be obtained using preoperative imaging techniques. These findings suggest that elastin plays an important role in cerebral vascular wall integrity, and these data will enable fluid–structure interaction modeling parameters to be more precise in an effort to provide predictive modeling for cerebrovascular pathology. Full article
(This article belongs to the Special Issue Personalized Diagnosis and Treatment for Intracranial Aneurysm)

10 pages, 962 KiB  
Article
IOL Power Calculation After Laser-Based Refractive Surgery: Measured vs. Predicted Posterior Corneal Astigmatism Using the Barrett True-K Formula
by Giacomo De Rosa, Daniele Criscuolo, Laura Longo, Davide Allegrini and Mario R. Romano
J. Clin. Med. 2025, 14(11), 4010; https://doi.org/10.3390/jcm14114010 - 5 Jun 2025
Abstract
Background/Objectives: This study assessed the reliability of the Barrett True-K formula in patients who had undergone laser-based corneal refractive surgery by comparing outcomes using measured vs. predicted posterior corneal astigmatism (PCA) within the Barrett True-K No History formula. Methods: We selected 49 eyes of 41 patients with a history of uncomplicated laser vision correction (LVC) who underwent cataract surgery between 2020 and 2024. The front K1 and K2, back K1 and K2, anterior chamber depth, lens thickness, horizontal white-to-white distance, and central corneal thickness were measured using the Pentacam. The axial length was measured using the IOL Master 500 or NIDEK AL-Scan. These data were then entered into the freely available online Barrett True-K calculator for post-LVC eyes, and the postoperative results were compared with the predicted IOL target. The refractive prediction error and absolute refractive prediction error were calculated as the difference between the postoperative spherical equivalent and the expected spherical equivalent for both the predicted and measured PCA calculations, and their cumulative distributions were compared. Results: The results suggest improved accuracy with the Barrett True-K formula when incorporating measured PCA values, supporting the use of corneal tomography for optimized refractive outcomes in post-LVC cataract patients. Conclusions: It is advisable to measure the posterior corneal surface using corneal tomography in all patients who have undergone LVC to achieve better refractive outcomes after cataract surgery. Full article
(This article belongs to the Section Ophthalmology)

21 pages, 355 KiB  
Article
Multivariate Bayesian Global–Local Shrinkage Methods for Regularisation in the High-Dimensional Linear Model
by Valentina Mameli, Debora Slanzi, Jim E. Griffin and Philip J. Brown
Mathematics 2025, 13(11), 1812; https://doi.org/10.3390/math13111812 - 29 May 2025
Abstract
This paper considers Bayesian regularisation using global–local shrinkage priors in the multivariate general linear model when there are many more explanatory variables than observations. We adopt prior structures used extensively in univariate problems (conjugate and non-conjugate, with tail behaviour ranging from polynomial to exponential) and consider how the addition of error correlation in the multivariate set-up affects the performance of these priors. Two different datasets (from drug discovery and chemometrics) with many covariates are used for comparison, and these are supplemented by a small simulation study to corroborate the role of error correlation. We find that structural assumptions of the prior distribution on regression coefficients can be more significant than the tail behaviour. In particular, if the structural assumption of conjugacy is used, the performance of the posterior predictive distribution deteriorates relative to non-conjugate choices as the error correlation becomes stronger. Full article
(This article belongs to the Special Issue Multivariate Statistical Analysis and Application)

16 pages, 1719 KiB  
Article
Finite Element Analysis of Ocular Impact Forces and Potential Complications in Pickleball-Related Eye Injuries
by Cezary Rydz, Jose A. Colmenarez, Kourosh Shahraki, Pengfei Dong, Linxia Gu and Donny W. Suh
Bioengineering 2025, 12(6), 570; https://doi.org/10.3390/bioengineering12060570 - 26 May 2025
Abstract
Purpose: Pickleball, the fastest-growing sport in the United States, has seen a rapid increase in participation across all age groups, particularly among older adults. However, the sport introduces specific risks for ocular injuries due to the unique dynamics of gameplay and the physical properties of the pickleball. This study aims to explore the mechanisms of pickleball-related eye injuries, utilizing finite element modeling (FEM) to simulate ocular trauma and better understand injury mechanisms. Methods: A multi-modal approach was employed to investigate pickleball-related ocular injuries. Finite element modeling (FEM) was used to simulate blunt trauma to the eye caused by a pickleball. The FEM incorporated detailed anatomical models of the periorbital structures, cornea, sclera, and vitreous body, using hyperelastic material properties derived from experimental data. The simulations evaluated various impact scenarios, including changes in ball velocity, angle of impact, and material stiffness, to determine the stress distribution, peak strain, and deformation in ocular structures. The FEM outputs were correlated with clinical findings to validate the injury mechanisms. Results: The FE analysis revealed that the rigid, hard-plastic construction of a pickleball results in concentrated stress and strain transfer to ocular structures upon impact. At velocities exceeding 30 mph, simulations showed significant corneal deformation, with peak stresses localized at the limbus and anterior sclera. Moreover, our results show a significant stress applied to lens zonules (as high as 0.35 MPa), leading to potential lens dislocation. Posterior segment deformation was also observed, with high strain levels in the retina and vitreous, consistent with clinical observations of retinal tears and vitreous hemorrhage. 
Validation against reported injuries confirmed the model’s accuracy in predicting both mild injuries (e.g., corneal abrasions) and severe outcomes (e.g., hyphema, globe rupture). Conclusions: Finite element analysis provides critical insights into the biomechanical mechanisms underlying pickleball-related ocular injuries. The findings underscore the need for preventive measures, particularly among older adults, who exhibit age-related vulnerabilities. Education on the importance of wearing protective eyewear and optimizing game rules to minimize high-risk scenarios, such as close-range volleys, is essential. Further refinement of the FEM, including parametric studies and integration of protective eyewear, can guide the development of safety standards and reduce the socio-economic burden of these injuries. Full article
(This article belongs to the Special Issue Biomechanics Studies in Ophthalmology)

11 pages, 685 KiB  
Article
Integrating Radiomics and Lesion Mapping for Cerebellar Mutism Syndrome Prediction
by Xinyi Chai, Wei Yang, Yingjie Cai, Xiaojiao Peng, Xuemeng Qiu, Miao Ling, Ping Yang, Jiashu Chen, Hong Zhang, Wenping Ma, Xin Ni and Ming Ge
Children 2025, 12(6), 667; https://doi.org/10.3390/children12060667 - 23 May 2025
Abstract
Objective: To develop and validate a composite model that combines lesion–symptom mapping (LSM), radiomic information, and clinical factors for predicting cerebellar mutism syndrome (CMS) in pediatric patients with posterior fossa tumors. Methods: A retrospective analysis was conducted on a cohort of 247 pediatric patients (training set, n = 174; validation set, n = 73) diagnosed with posterior fossa tumors who underwent surgery at Beijing Children's Hospital. Presurgical MRIs were used to extract radiomics features and voxel distribution features, and clinical factors were derived from the medical records. Group comparison was used to identify the clinical risk factors for CMS. Combining location weight, radiomic features from the tumor area and the significant intersection area, and clinical variables, hybrid models were developed and validated using multiple machine learning methods. Results: The mean age of the cohort was 4.88 [2.89, 7.78] years, with 143 males and 104 females. Among them, 73 (29.6%) patients developed CMS. Gender, location weight, and five radiomic features (three in the tumor mask area and two in the intersection area) were selected to build the model. The four models (KNN, GBM, RF, and LR) achieved high predictive performance, with AUCs of 0.84, 0.83, 0.81, and 0.87, respectively. Conclusions: CMS can be predicted using MRI features and clinical factors. The combination of radiomics and tumoral location weight could improve the prediction of CMS. Full article
(This article belongs to the Section Pediatric Hematology & Oncology)

31 pages, 4528 KiB  
Article
Probabilistic Prediction Model for Ultimate Conditions Under Compression of FRP-Wrapped Concrete Columns Based on Bayesian Inference
by Feng Cao, Ran Zhu, Jun-Xing Zheng, Hai-Bin Huang and Dong Liang
Buildings 2025, 15(10), 1720; https://doi.org/10.3390/buildings15101720 - 19 May 2025
Abstract
The compressive strength and ultimate strain of FRP-confined concrete cylinders are the key indicators for evaluating their mechanical properties, and accurate prediction of both is essential for the reliability analysis and design of such components. However, existing models of the ultimate condition under compression lack sufficient prediction accuracy, and their results exhibit significant uncertainty. This study proposes a Bayesian model updating method based on Markov chain Monte Carlo (MCMC) sampling to improve the prediction accuracy of the ultimate condition under compression for FRP-confined concrete cylinders and to quantify the uncertainty of the prediction results. First, 1016 sets of experimental data on the ultimate condition under compression of FRP-confined concrete cylinders were collected from previous studies. Subsequently, a probabilistic updating model and evaluation system were established based on the Bayesian parameter estimation principle, MCMC sampling, WAIC, and DIC. Several representative empirical models for predicting the ultimate condition under compression were then selected, and their prediction performance was evaluated using the experimental data. Finally, a Bayesian updating problem was formulated for typical ultimate-condition models, the posterior distributions of the model parameters were obtained via MCMC sampling to select the best model, and the prediction performance of the optimal model was assessed using the experimental data. The results show that, compared with existing empirical models, the Bayesian inference-based probabilistic model provides predictions closer to the experimental values while also reasonably quantifying the uncertainty of the ultimate condition under compression prediction. Full article
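The DIC used for model selection can be computed directly from posterior draws; a generic sketch with a mock one-parameter normal model (not the paper's ultimate-condition models):

```python
import numpy as np

def dic(loglik_fn, draws, data):
    """DIC = deviance at the posterior mean + 2 * p_D, from posterior draws."""
    deviances = np.array([-2.0 * loglik_fn(t, data) for t in draws])
    dev_at_mean = -2.0 * loglik_fn(draws.mean(), data)
    p_d = deviances.mean() - dev_at_mean   # effective number of parameters
    return dev_at_mean + 2.0 * p_d, p_d

# Mock model: normal likelihood with unit noise and a single unknown mean,
# with draws taken from the exact conjugate posterior
rng = np.random.default_rng(4)
data = rng.normal(10.0, 1.0, size=200)
draws = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=5000)

def loglik(theta, y):
    return -0.5 * np.sum((y - theta) ** 2)   # up to an additive constant

dic_value, p_d = dic(loglik, draws, data)
print(round(p_d))  # close to 1: the mock model has one free parameter
```

Candidate empirical models would each get a DIC (or WAIC) from their own posterior draws, and the smallest value picks the model to carry forward.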

13 pages, 2612 KiB  
Article
Application of Bayesian Statistics in Analyzing and Predicting Carburizing-Induced Dimensional Changes in Torsion Bars
by Guojin Sun, Zhenggui Li, Yanxiong Jiao and Qi Wang
Metals 2025, 15(5), 546; https://doi.org/10.3390/met15050546 - 15 May 2025
Viewed by 417
Abstract
This study investigates the application of Bayesian statistical methods to analyze and predict the dimensional changes in torsion bars made from 20CrMnTi alloy steel during carburizing heat treatment. The process parameters, including a treatment temperature of 920 °C followed by oil quenching, were selected to optimize surface hardness while maintaining core toughness. The dimensional changes were measured pre- and post-treatment using precise caliper measurements. Bayesian statistics, particularly conjugate normal distributions, were utilized to model the dimensional variations, providing both posterior and predictive distributions. These models revealed a marked concentration of the posterior distributions, indicating enhanced accuracy in predicting dimensional changes. The findings offer valuable insights for improving the control of carburizing-induced deformations, thereby ensuring the dimensional integrity and performance reliability of torsion bars used in high-stress applications such as pneumatic clutch systems in mining ball mills. This study underscores the potential of Bayesian approaches in advancing precision engineering and contributes to the broader field of statistical modeling in manufacturing processes. Full article
(This article belongs to the Special Issue Numerical and Experimental Advances in Metal Processing)
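The conjugate normal update this abstract relies on has a closed form: with an assumed-known process standard deviation σ and a N(μ0, τ0²) prior on the mean dimensional change, the posterior mean is a precision-weighted average of the prior mean and the sample mean, and the posterior predictive adds the process variance back in. A minimal sketch with made-up measurements (the values, the prior, and σ are illustrative assumptions, not the study's 20CrMnTi data):

```python
import numpy as np

# Hypothetical measured dimensional changes (mm) of torsion bars after
# carburizing; values are illustrative, not from the study.
changes = np.array([0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.14])
sigma = 0.02                 # assumed known process/measurement std dev (mm)

mu0, tau0 = 0.10, 0.05       # prior belief about the mean change: N(mu0, tau0^2)
n, ybar = len(changes), changes.mean()

# Conjugate normal update: posterior precision = prior precision + data precision.
post_prec = 1 / tau0**2 + n / sigma**2
post_mu = (mu0 / tau0**2 + n * ybar / sigma**2) / post_prec
post_sd = post_prec ** -0.5

# Posterior predictive for the next bar: process variance plus posterior variance.
pred_sd = (sigma**2 + post_sd**2) ** 0.5

print(f"posterior:  N({post_mu:.4f}, {post_sd:.4f}^2)")
print(f"predictive: N({post_mu:.4f}, {pred_sd:.4f}^2)")
```

The "marked concentration of the posterior" the abstract reports corresponds to post_sd shrinking well below the prior spread τ0 as measurements accumulate, while the predictive spread stays bounded below by the process scatter σ.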

36 pages, 3107 KiB  
Article
Estimating Calibrated Risks Using Focal Loss and Gradient-Boosted Trees for Clinical Risk Prediction
by Henry Johnston, Nandini Nair and Dongping Du
Electronics 2025, 14(9), 1838; https://doi.org/10.3390/electronics14091838 - 30 Apr 2025
Viewed by 1650
Abstract
Probability calibration and decision threshold selection are fundamental aspects of risk prediction and classification, respectively. In clinical risk prediction, a strictly proper loss function is used to encourage a model to predict calibrated class-posterior probabilities, or risks. Recent studies have shown that training with focal loss can improve the discriminatory power of gradient-boosted decision trees (GBDT) for classification tasks with an imbalanced or skewed class distribution. However, focal loss is not a strictly proper loss function, so the output of a GBDT trained with focal loss is not an accurate estimate of the true class-posterior probability. This study addresses the poor calibration of GBDT trained with focal loss in the context of clinical risk prediction. The method applies a closed-form transformation, which relates the focal loss minimizer to the true class-posterior probability, to the confidence scores of the focal-loss-trained GBDT to estimate calibrated risks. Algorithms based on Bayesian hyperparameter optimization are provided to choose the focal loss parameter that optimizes discriminatory power and calibration, as measured by the Brier score. We assess how the calibration of the confidence scores affects the selection of a decision threshold that optimizes balanced accuracy, defined as the arithmetic mean of sensitivity and specificity. The proposed strategy was evaluated on lung transplant data extracted from the Scientific Registry of Transplant Recipients (SRTR) for predicting post-transplant cancer, and on data from the Behavioral Risk Factor Surveillance System (BRFSS) for predicting diabetes status. Probability calibration plots, calibration slope and intercept, and the Brier score show that the approach improves calibration while maintaining the same discriminatory power according to the area under the receiver operating characteristic curve (AUROC) and the H-measure. The calibrated focal-aware XGBoost achieved an AUROC, Brier score, and calibration slope of 0.700, 0.128, and 0.968, respectively, for predicting 10-year cancer risk; the miscalibrated focal-aware XGBoost achieved an equal AUROC but a worse Brier score and calibration slope (0.140 and 1.579). The proposed method also compared favorably to standard XGBoost trained with cross-entropy loss (AUROC of 0.755 versus 0.736 in predicting 1-year cancer risk). Comparable performance was observed with other risk prediction models in the diabetes prediction task. Full article
(This article belongs to the Special Issue Data-Centric Artificial Intelligence: New Methods for Data Processing)
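The evaluation metrics named in this abstract are straightforward to compute. The sketch below evaluates the Brier score, a logistic-recalibration calibration slope (slope 1.0 means well calibrated), and a balanced-accuracy-maximizing decision threshold on synthetic risks; the data, the power-law distortion standing in for focal-loss miscalibration, and the plain grid scan in place of the article's Bayesian hyperparameter optimization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy risk-prediction setup (illustrative; not SRTR/BRFSS data).
p = rng.beta(2, 5, 5000)          # calibrated risks
y = rng.binomial(1, p)            # outcomes drawn from those risks
p_mis = p ** 1.8                  # distorted scores: too small, over-spread logits

def brier(y, p):
    """Mean squared error between predicted risks and 0/1 outcomes."""
    return np.mean((p - y) ** 2)

def calibration_slope(y, p, eps=1e-6):
    """Slope of a logistic fit of y on logit(p), via Newton iterations."""
    pc = np.clip(p, eps, 1 - eps)
    x = np.log(pc / (1 - pc))
    a, b = 0.0, 1.0                           # intercept, slope
    for _ in range(50):
        q = 1 / (1 + np.exp(-(a + b * x)))
        g = np.array([np.sum(q - y), np.sum((q - y) * x)])     # gradient
        w = q * (1 - q)
        H = np.array([[np.sum(w), np.sum(w * x)],              # Hessian
                      [np.sum(w * x), np.sum(w * x * x)]])
        a, b = np.array([a, b]) - np.linalg.solve(H, g)
    return b

def best_balanced_accuracy(y, p):
    """Grid-scan thresholds for the best mean of sensitivity and specificity."""
    best = 0.0
    for t in np.linspace(0.01, 0.99, 99):
        pred = p >= t
        sens = np.mean(pred[y == 1])
        spec = np.mean(~pred[y == 0])
        best = max(best, (sens + spec) / 2)
    return best

print("Brier (calibrated vs distorted):", brier(y, p), brier(y, p_mis))
print("slope (calibrated vs distorted):", calibration_slope(y, p),
      calibration_slope(y, p_mis))
print("best balanced accuracy:", best_balanced_accuracy(y, p))
```

On this synthetic data the distorted scores keep the same ranking (hence the same AUROC) but show a worse Brier score and a calibration slope away from 1, mirroring the calibrated-versus-miscalibrated comparison reported in the abstract.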