Search Results (450)

Search Parameters:
Keywords = Bayes estimate

24 pages, 1508 KiB  
Article
Genomic Prediction of Adaptation in Common Bean (Phaseolus vulgaris L.) × Tepary Bean (P. acutifolius A. Gray) Hybrids
by Felipe López-Hernández, Diego F. Villanueva-Mejía, Adriana Patricia Tofiño-Rivera and Andrés J. Cortés
Int. J. Mol. Sci. 2025, 26(15), 7370; https://doi.org/10.3390/ijms26157370 - 30 Jul 2025
Viewed by 246
Abstract
Climate change is jeopardizing global food security, with at least 713 million people facing hunger. To face this challenge, legumes such as common beans could offer a nature-based solution, sourcing nutrients and dietary fiber, especially for rural communities in Latin America and Africa. However, since common beans are generally heat- and drought-susceptible, it is imperative to speed up their molecular introgressive adaptive breeding so that they can be cultivated in regions affected by extreme weather. Therefore, this study aimed to couple an advanced panel of common bean (Phaseolus vulgaris L.) × tolerant Tepary bean (P. acutifolius A. Gray) interspecific lines with Bayesian regression algorithms to forecast adaptation to the humid and dry sub-regions of the Caribbean coast of Colombia, where the common bean typically exhibits maladaptation to extreme heat waves. A total of 87 advanced lines with hybrid ancestries were successfully bred, overcoming the interspecific incompatibilities. This hybrid panel was genotyped by sequencing (GBS), leading to the discovery of 15,645 single-nucleotide polymorphism (SNP) markers. Three yield components (yield per plant, and numbers of seeds and pods) and two biomass variables (vegetative and seed biomass) were recorded for each genotype and input into several Bayesian regression models to identify the top genotypes with the best genetic breeding values across three localities on the Colombian coast. We comparatively analyzed several regression approaches, and the model with the best performance for all traits and localities was BayesC. We also compared using all markers against only those determined as associated by a priori genome-wide association study (GWAS) models. Better prediction ability with the complete SNP set was indicative of missing heritability in the GWAS reconstructions. Furthermore, optimal SNP sets per trait and locality were determined as the top 500 most explicative markers according to their β regression effects. These 500-SNP sets overlapped, on average, by 5.24% across localities, which reinforces the locality-dependent nature of polygenic adaptation. Finally, we retrieved the genomic estimated breeding values (GEBVs) and selected the top 10 genotypes for each trait and locality as part of a recommendation scheme targeting narrow adaptation in the Caribbean. After validation in field conditions and screening for stability, candidate genotypes and SNPs may be used in further introgressive breeding cycles for adaptation. Full article
(This article belongs to the Special Issue Plant Breeding and Genetics: New Findings and Perspectives)
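As a hedged sketch of the genomic-prediction step described above: BayesC itself is not available in scikit-learn, so BayesianRidge stands in for the Bayesian whole-genome regression; the panel and marker counts come from the abstract, while the SNP dosages, phenotypes, and hyperparameters are simulated placeholders.

```python
# Hypothetical sketch: Bayesian whole-genome regression for GEBV ranking.
# BayesianRidge stands in for BayesC, which needs a dedicated package (e.g., BGLR in R).
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
n_lines, n_snps = 87, 15645          # panel size and marker count from the abstract
X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)  # 0/1/2 SNP dosages (simulated)
y = X[:, :50] @ rng.normal(0, 0.2, 50) + rng.normal(0, 1, n_lines)  # toy yield phenotype

model = BayesianRidge().fit(X, y)
gebv = model.predict(X)               # genomic estimated breeding values
top10 = np.argsort(gebv)[::-1][:10]   # select the 10 best genotypes for this trait/locality
top500_snps = np.argsort(np.abs(model.coef_))[::-1][:500]  # most explicative markers by |beta|
```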
16 pages, 666 KiB  
Article
Bayesian Analysis of the Maxwell Distribution Under Progressively Type-II Random Censoring
by Rajni Goel, Mahmoud M. Abdelwahab and Mustafa M. Hasaballah
Axioms 2025, 14(8), 573; https://doi.org/10.3390/axioms14080573 - 25 Jul 2025
Viewed by 166
Abstract
Accurate modeling of product lifetimes is vital in reliability analysis and engineering to ensure quality and maintain competitiveness. This paper proposes the progressively randomly censored Maxwell distribution, which incorporates both progressive Type-II and random censoring within the Maxwell distribution framework. The model allows for the planned removal of surviving units at specific stages of an experiment, accounting for both deliberate and random censoring events. It is assumed that survival and censoring times each follow a Maxwell distribution, though with distinct parameters. Both frequentist and Bayesian approaches are employed to estimate the model parameters. In the frequentist approach, maximum likelihood estimators and their corresponding confidence intervals are derived. In the Bayesian approach, Bayes estimators are obtained using an inverse gamma prior and evaluated through a Markov Chain Monte Carlo (MCMC) method under the squared error loss function (SELF). A Monte Carlo simulation study evaluates the performance of the proposed estimators. The practical relevance of the methodology is demonstrated using a real data set. Full article
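A minimal sketch of the Bayesian side, assuming a complete (uncensored) Maxwell sample for brevity - the paper's progressive random censoring would change only the likelihood term: a random-walk Metropolis chain with an inverse gamma prior on the scale, where the posterior mean is the Bayes estimate under the squared error loss function (SELF). All prior and proposal settings are illustrative.

```python
# Assumed setup: Maxwell(scale=a) likelihood, inverse-gamma prior on the scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.maxwell.rvs(scale=2.0, size=50, random_state=rng)  # complete sample for simplicity

def log_post(a):
    if a <= 0:
        return -np.inf
    return stats.invgamma.logpdf(a, 3, scale=4) + stats.maxwell.logpdf(data, scale=a).sum()

chain, a = [], 1.0
for _ in range(5000):                    # random-walk Metropolis
    prop = a + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)

bayes_self = np.mean(chain[1000:])       # posterior mean = SELF Bayes estimate of the scale
```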
26 pages, 2658 KiB  
Article
An Efficient and Accurate Random Forest Node-Splitting Algorithm Based on Dynamic Bayesian Methods
by Jun He, Zhanqi Li and Linzi Yin
Mach. Learn. Knowl. Extr. 2025, 7(3), 70; https://doi.org/10.3390/make7030070 - 21 Jul 2025
Viewed by 253
Abstract
Random Forests are powerful machine learning models widely applied in classification and regression tasks due to their robust predictive performance. Nevertheless, traditional Random Forests face computational challenges during tree construction, particularly in high-dimensional data or on resource-constrained devices. In this paper, a novel node-splitting algorithm, BayesSplit, is proposed to accelerate decision tree construction via a Bayesian-based impurity estimation framework. BayesSplit treats impurity reduction as a Bernoulli event with Beta-conjugate priors for each split point and incorporates two main strategies. First, Dynamic Posterior Parameter Refinement updates the Beta parameters based on observed impurity reductions in batch iterations. Second, Posterior-Derived Confidence Bounding establishes statistical confidence intervals, efficiently filtering out suboptimal splits. Theoretical analysis demonstrates that BayesSplit converges to optimal splits with high probability, while experimental results show up to a 95% reduction in training time compared to baselines, with generalization performance maintained or exceeded. Compared to the state-of-the-art MABSplit, BayesSplit achieves similar accuracy on classification tasks and reduces regression training time by 20–70% with lower MSEs. Furthermore, BayesSplit enhances feature importance stability by up to 40%, making it particularly suitable for deployment in computationally constrained environments. Full article
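The Beta-Bernoulli bookkeeping at the heart of BayesSplit can be illustrated as follows (a conceptual sketch, not the authors' code; batch counts, prior parameters, and confidence levels are invented):

```python
# Each candidate split keeps a Beta(a, b) posterior over its chance of reducing
# impurity on a mini-batch; splits whose upper confidence bound falls below the
# best lower bound are discarded as suboptimal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_splits, n_batches = 20, 30
a_post = np.ones(n_splits)                       # Beta(1, 1) priors
b_post = np.ones(n_splits)
true_p = rng.uniform(0.2, 0.8, n_splits)         # unknown per-split success rates

for _ in range(n_batches):
    wins = rng.uniform(size=n_splits) < true_p   # did the split reduce impurity this batch?
    a_post += wins                               # dynamic posterior parameter refinement
    b_post += ~wins
    lo = stats.beta.ppf(0.05, a_post, b_post)    # posterior-derived confidence bounds
    hi = stats.beta.ppf(0.95, a_post, b_post)
    active = hi >= lo.max()                      # keep only splits still plausibly optimal

best_split = np.argmax(a_post / (a_post + b_post))  # posterior-mean winner
```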
13 pages, 272 KiB  
Article
Asymptotic Behavior of the Bayes Estimator of a Regression Curve
by Agustín G. Nogales
Mathematics 2025, 13(14), 2319; https://doi.org/10.3390/math13142319 - 21 Jul 2025
Viewed by 144
Abstract
In this work, we prove that the estimation error of the Bayes estimator of a regression curve (i.e., the conditional expectation of the response variable given the regressor) converges to 0 in both L1 and L2. The strong consistency of the estimator is also derived. The Bayes estimator of a regression curve is the regression curve with respect to the posterior predictive distribution. The result is general enough to cover discrete and continuous cases, parametric or nonparametric, and no specific supposition is made about the prior distribution. Some examples, two of them of a nonparametric nature, are given to illustrate the main result; one of the nonparametric examples exhibits a situation where the estimation of the regression curve has an optimal solution even though the problem of estimating the density is meaningless. An important role in the demonstration of these results is played by the establishment of a probability space as an adequate framework to address the problem of estimating regression curves from the Bayesian point of view, putting powerful probabilistic tools at our disposal in that endeavor. Full article
(This article belongs to the Section D1: Probability and Statistics)
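Schematically, and under assumed notation, the result concerns the posterior predictive regression curve:

```latex
% Schematic statement (notation assumed): the Bayes estimator of the regression
% curve m(x) = E[Y | X = x] is the regression curve of the posterior predictive
% distribution, and the theorem gives convergence in mean in L^1 and L^2.
\hat{m}_n(x) = \mathbb{E}^{\pi_n^*}\!\left[ Y \mid X = x \right],
\qquad
\mathbb{E}\left\| \hat{m}_n - m \right\|_{L^i} \xrightarrow[n \to \infty]{} 0,
\quad i = 1, 2,
```

where $\pi_n^*$ denotes the posterior predictive distribution given the sample $(X_1, Y_1), \dots, (X_n, Y_n)$.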
59 pages, 11250 KiB  
Article
Automated Analysis of Vertebral Body Surface Roughness for Adult Age Estimation: Ellipse Fitting and Machine-Learning Approach
by Erhan Kartal and Yasin Etli
Diagnostics 2025, 15(14), 1794; https://doi.org/10.3390/diagnostics15141794 - 16 Jul 2025
Viewed by 287
Abstract
Background/Objectives: Vertebral degenerative features are promising but often subjectively scored indicators for adult age estimation. We evaluated an objective surface roughness metric, the “average distance to the fitted ellipse” score (DS), calculated automatically for every vertebra from C7 to S1 on routine CT images. Methods: CT scans of 176 adults (94 males, 82 females; 21–94 years) were retrospectively analyzed. For each vertebra, the mean orthogonal deviation of the anterior superior endplate from an ideal ellipse was extracted. Sex-specific multiple linear regression served as a baseline; support vector regression (SVR), random forest (RF), k-nearest neighbors (k-NN), and a Gaussian naïve-Bayes pseudo-regressor (GNB-R) were tuned with 10-fold cross-validation and evaluated on a 20% hold-out set. Performance was quantified with the standard error of the estimate (SEE). Results: DS values correlated moderately to strongly with age (peak r = 0.60 at L3–L5). Linear regression explained 40% (males) and 47% (females) of age variance (SEE ≈ 11–12 years). Non-parametric learners improved precision: RF achieved an SEE of 8.49 years in males (R2 = 0.47), whereas k-NN attained an SEE of 10.8 years (R2 = 0.45) in females. Conclusions: Automated analysis of vertebral cortical roughness provides a transparent, observer-independent means of estimating adult age with accuracy approaching that of more complex deep learning pipelines. Streamlining image preparation and validating the approach across diverse populations are the next steps toward forensic adoption. Full article
(This article belongs to the Special Issue New Advances in Forensic Radiology and Imaging)
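A hedged sketch of the modeling stage: the ellipse-fitting feature extraction is assumed to have already produced the 19 per-vertebra deviation scores (C7 through S1), the SEE is computed here as hold-out RMSE, and the data below are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
ds = rng.normal(size=(176, 19))                   # 19 deviation scores per subject (simulated)
age = 50 + 10 * ds[:, 0] + rng.normal(0, 8, 176)  # toy age signal

X_tr, X_te, y_tr, y_te = train_test_split(ds, age, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
see = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))  # compare with SEE ~ 8.5 y reported
```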
20 pages, 351 KiB  
Article
Multi-Level Depression Severity Detection with Deep Transformers and Enhanced Machine Learning Techniques
by Nisar Hussain, Amna Qasim, Gull Mehak, Muhammad Zain, Grigori Sidorov, Alexander Gelbukh and Olga Kolesnikova
AI 2025, 6(7), 157; https://doi.org/10.3390/ai6070157 - 15 Jul 2025
Viewed by 680
Abstract
Depression is now one of the most common mental health concerns in the digital era, calling for powerful computational tools for its detection and for estimating its severity. This study proposes a multi-level depression severity detection framework for the Reddit social media network, classifying posts into four levels: minimum, mild, moderate, and severe. We take a dual approach using classical machine learning (ML) algorithms and recent Transformer-based architectures. For the ML track, we build ten classifiers, including Logistic Regression, SVM, Naive Bayes, Random Forest, XGBoost, Gradient Boosting, K-NN, Decision Tree, AdaBoost, and Extra Trees, paired with two well-established embedding methods, Word2Vec and GloVe, and tune them for mental health text classification. Of these, XGBoost yields the highest F1-score of 94.01 using GloVe embeddings. For the deep learning track, we fine-tune ten Transformer models, covering BERT, RoBERTa, XLM-RoBERTa, MentalBERT, BioBERT, RoBERTa-large, DistilBERT, DeBERTa, Longformer, and ALBERT. The highest performance was achieved by the MentalBERT model, with an F1-score of 97.31, followed by RoBERTa (96.27) and RoBERTa-large (96.14). Our results demonstrate that domain-adapted Transformers outperform non-Transformer-based ML methods in capturing subtle linguistic cues indicative of different levels of depression, highlighting their potential for fine-grained mental health monitoring in online settings. Full article
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
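The winning ML-track combination (GloVe embeddings feeding XGBoost) might look like the following sketch; the posts, labels, and preprocessing are placeholders, and the pretrained-vector name is one public GloVe release, not necessarily the one used in the paper.

```python
import numpy as np
import gensim.downloader as api
from xgboost import XGBClassifier

glove = api.load("glove-wiki-gigaword-100")   # pretrained 100-d GloVe vectors

def embed(post: str) -> np.ndarray:
    """Average the GloVe vectors of the in-vocabulary tokens of a post."""
    vecs = [glove[w] for w in post.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

posts = ["i feel hopeless and tired", "had a great day"]   # placeholder Reddit posts
labels = [3, 0]                                            # 0=minimum ... 3=severe
X = np.vstack([embed(p) for p in posts])
clf = XGBClassifier(objective="multi:softprob", num_class=4)
# clf.fit(X, labels)  # with a real labeled corpus; two posts are only for shape-checking
```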
15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Viewed by 215
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas through Bayes’ theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. This approach significantly improves the prediction accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data are incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices. Full article
(This article belongs to the Section Ocean Engineering)
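A conceptual sketch of the Bayesian updating step, with all numbers illustrative: a Monte Carlo prior over peak resistance is reweighted by successive monitoring readings via Bayes' theorem, assuming a Gaussian model/measurement error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Monte Carlo prior over peak penetration resistance (kPa), e.g. from soil-parameter priors
prior_samples = rng.lognormal(mean=np.log(300), sigma=0.25, size=20000)
weights = np.ones_like(prior_samples)

for reading in [280.0, 295.0]:            # resistance readings logged during penetration
    # Bayes' theorem as importance reweighting with a Gaussian error model
    weights *= stats.norm.pdf(reading, loc=prior_samples, scale=30.0)

weights /= weights.sum()
posterior_mean = np.sum(weights * prior_samples)
cred_90 = np.quantile(rng.choice(prior_samples, 5000, p=weights), [0.05, 0.95])
```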
18 pages, 2591 KiB  
Article
The Impact of Compound Drought and Heatwave Events on the Gross Primary Productivity of Rubber Plantations
by Qinggele Bao, Ziqin Wang and Zhongyi Sun
Forests 2025, 16(7), 1146; https://doi.org/10.3390/f16071146 - 11 Jul 2025
Viewed by 315
Abstract
Global climate change has increased the frequency of compound drought–heatwave events (CDHEs), seriously threatening tropical forest ecosystems. However, due to the complex structure of natural tropical forests, related research remains limited. To address this, we focused on rubber plantations on Hainan Island, which have simpler structures, to explore the impacts of CDHEs on their primary productivity. We used Pearson and Spearman correlation analyses to select the optimal combination of drought and heatwave indices. Then, we constructed a Compound Drought–Heatwave Index (CDHI) using Copula functions to describe the temporal patterns of CDHEs. Finally, we applied a Bayes–Copula conditional probability model to estimate the probability of GPP loss under CDHE conditions. The main findings are as follows: (1) The Standardized Precipitation Evapotranspiration Index (SPEI-3) and Standardized Temperature Index (STI-1) formed the best index combination. (2) The CDHI successfully identified typical CDHEs in 2001, 2003–2005, 2010, 2015–2016, and 2020. (3) Temporally, CDHEs significantly increased the probability of GPP loss in April and May (0.58 and 0.64, respectively), while the rainy season showed a reverse trend due to water buffering (lowest in October, at 0.19). (4) Spatially, the northwest region showed higher GPP loss probabilities, likely due to topographic uplift. This study reveals how tropical plantations respond to compound climate extremes and provides theoretical support for the monitoring and management of tropical ecosystems. Full article
(This article belongs to the Section Forest Meteorology and Climate Change)
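For intuition, the conditional-probability step can be mimicked with a plain empirical Bayes'-theorem calculation on binary indicator series; the paper's Bayes-Copula model is more sophisticated, and all series below are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
cdhe = rng.uniform(size=240) < 0.15                       # compound drought-heatwave months
loss = rng.uniform(size=240) < np.where(cdhe, 0.6, 0.2)   # GPP-loss months, likelier under CDHE

p_loss = loss.mean()
p_cdhe = cdhe.mean()
p_cdhe_given_loss = cdhe[loss].mean()
p_loss_given_cdhe = p_cdhe_given_loss * p_loss / p_cdhe   # Bayes' theorem
```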
26 pages, 4907 KiB  
Article
A Novel Approach Utilizing Bagging, Histogram Gradient Boosting, and Advanced Feature Selection for Predicting the Onset of Cardiovascular Diseases
by Norma Latif Fitriyani, Muhammad Syafrudin, Nur Chamidah, Marisa Rifada, Hendri Susilo, Dursun Aydin, Syifa Latif Qolbiyani and Seung Won Lee
Mathematics 2025, 13(13), 2194; https://doi.org/10.3390/math13132194 - 4 Jul 2025
Viewed by 314
Abstract
Cardiovascular diseases (CVDs) rank among the leading global causes of mortality, underscoring the necessity for early detection and effective management. This research presents a novel prediction model for CVDs utilizing a bagging algorithm that incorporates histogram gradient boosting as the estimator. This study leverages three preprocessed cardiovascular datasets, employing the Local Outlier Factor technique for outlier removal and the information gain method for feature selection. Through rigorous experimentation, the proposed model demonstrates superior performance compared to conventional machine learning approaches, such as Logistic Regression, Support Vector Classification, Gaussian Naïve Bayes, Multi-Layer Perceptron, k-nearest neighbors, Random Forest, AdaBoost, gradient boosting, and histogram gradient boosting. Evaluation metrics, including precision, recall, F1 score, accuracy, and AUC, yielded impressive results: 93.90%, 98.83%, 96.30%, 96.25%, and 0.9916 for dataset I; 94.17%, 99.05%, 96.54%, 96.48%, and 0.9931 for dataset II; and 89.81%, 82.40%, 85.91%, 86.66%, and 0.9274 for dataset III. The findings indicate that the proposed prediction model has the potential to facilitate early CVD detection, thereby enhancing preventive strategies and improving patient outcomes. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
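The core pipeline as described maps directly onto scikit-learn components; the sketch below uses synthetic stand-in data and assumed hyperparameters, with mutual information serving as the information-gain criterion.

```python
from sklearn.ensemble import BaggingClassifier, HistGradientBoostingClassifier
from sklearn.neighbors import LocalOutlierFactor
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=13, random_state=0)  # stand-in CVD data

keep = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1   # drop LOF-flagged outliers
X, y = X[keep], y[keep]
X = SelectKBest(mutual_info_classif, k=8).fit_transform(X, y)   # information-gain selection

# Bagging with histogram gradient boosting as the base estimator
model = BaggingClassifier(estimator=HistGradientBoostingClassifier(), n_estimators=10)
model.fit(X, y)
```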
17 pages, 572 KiB  
Article
Statistical Analysis Under a Random Censoring Scheme with Applications
by Mustafa M. Hasaballah and Mahmoud M. Abdelwahab
Symmetry 2025, 17(7), 1048; https://doi.org/10.3390/sym17071048 - 3 Jul 2025
Cited by 1 | Viewed by 257
Abstract
The Gumbel Type-II distribution is a widely recognized and frequently utilized lifetime distribution, playing a crucial role in reliability engineering. This paper focuses on the statistical inference of the Gumbel Type-II distribution under a random censoring scheme. From a frequentist perspective, point estimates for the unknown parameters are derived using the maximum likelihood estimation method, and confidence intervals are constructed based on the Fisher information matrix. From a Bayesian perspective, Bayes estimates of the parameters are obtained using the Markov Chain Monte Carlo method, and the average lengths of credible intervals are calculated. The Bayesian inference is performed under both the squared error loss function and the general entropy loss function. Additionally, a numerical simulation is conducted to evaluate the performance of the proposed methods. To demonstrate their practical applicability, a real-world example is provided, illustrating the application and development of these inference techniques. In conclusion, the Bayesian method appears to outperform other approaches, although each method offers unique advantages. Full article
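For reference, the two Bayes estimators used in such studies have closed forms given the posterior (one standard convention; notation assumed):

```latex
% Under the squared error loss function, the Bayes estimator is the posterior mean:
\hat{\theta}_{\mathrm{SE}} = \mathbb{E}(\theta \mid \text{data}).
% Under the general entropy loss
% L(\hat{\theta}, \theta) \propto (\hat{\theta}/\theta)^{q} - q \ln(\hat{\theta}/\theta) - 1,
% it is
\hat{\theta}_{\mathrm{GE}} = \left[ \mathbb{E}\!\left( \theta^{-q} \mid \text{data} \right) \right]^{-1/q},
% provided the posterior expectation exists.
```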
30 pages, 16041 KiB  
Article
Estimation of Inverted Weibull Competing Risks Model Using Improved Adaptive Progressive Type-II Censoring Plan with Application to Radiobiology Data
by Refah Alotaibi, Mazen Nassar and Ahmed Elshahhat
Symmetry 2025, 17(7), 1044; https://doi.org/10.3390/sym17071044 - 2 Jul 2025
Viewed by 333
Abstract
This study focuses on estimating the unknown parameters and the reliability function of the inverted Weibull distribution, using an improved adaptive progressive Type-II censoring scheme under a competing risks model. Both classical and Bayesian estimation approaches are explored to offer a thorough analysis. Under the classical approach, maximum likelihood estimators are obtained for the unknown parameters and the reliability function. Approximate confidence intervals are also constructed to assess the uncertainty in the estimates. From a Bayesian standpoint, symmetric Bayes estimates and highest posterior density credible intervals are computed using Markov Chain Monte Carlo sampling, assuming a symmetric squared error loss function. An extensive simulation study is carried out to assess how well the proposed methods perform under different experimental conditions, showing promising accuracy. To demonstrate the practical use of these methods, a real dataset is analyzed, consisting of the survival times of male mice aged 35 to 42 days after being exposed to 300 roentgens of X-ray radiation. The analysis demonstrated that the inverted Weibull distribution is well-suited for modeling the given dataset. Furthermore, the Bayesian estimation method, considering both point estimates and interval estimates, was found to be more effective than the classical approach in estimating the model parameters as well as the reliability function. Full article
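A small classical-side companion sketch, using complete data rather than the paper's improved adaptive progressive Type-II scheme: maximum likelihood fitting of an inverted (Fréchet-type) Weibull with SciPy, with the reliability function read off the survival function. The sample below is simulated, not the radiobiology data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
times = stats.invweibull.rvs(c=2.5, scale=40, size=60, random_state=rng)  # toy survival times

c, loc, scale = stats.invweibull.fit(times, floc=0)            # MLE with location fixed at 0
reliability_at_50 = stats.invweibull.sf(50, c, loc=loc, scale=scale)  # R(50) = P(T > 50)
```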
22 pages, 327 KiB  
Article
Bayesian Analysis of the Doubly Truncated Zubair-Weibull Distribution: Parameter Estimation, Reliability, Hazard Rate and Prediction
by Zakiah I. Kalantan, Mai A. Hegazy, Abeer A. EL-Helbawy, Hebatalla H. Mohammad, Doaa S. A. Soliman, Gannat R. AL-Dayian and Mervat K. Abd Elaal
Axioms 2025, 14(7), 502; https://doi.org/10.3390/axioms14070502 - 26 Jun 2025
Viewed by 241
Abstract
This paper discusses Bayesian estimation of the unknown parameters and of the reliability and hazard rate functions of the doubly truncated Zubair-Weibull distribution. Informative gamma priors for the parameters are used to obtain the posterior distributions. The Bayes estimators are derived under the squared-error and linear-exponential (LINEX) loss functions. Credible intervals for the parameters and for the reliability and hazard rate functions are obtained. Bayesian prediction (point and interval) for a future observation is considered under the two-sample prediction scheme. A simulation study using a Markov Chain Monte Carlo algorithm is performed for different sample sizes to assess the performance of the estimators. Two real datasets are applied to show the flexibility and applicability of the distribution. Full article
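Given posterior draws from the MCMC step, the squared-error and LINEX Bayes estimators reduce to simple summaries; the draws below are a stand-in posterior sample, and `a` is the LINEX asymmetry parameter.

```python
import numpy as np

rng = np.random.default_rng(7)
draws = rng.gamma(shape=5.0, scale=0.4, size=10000)  # stand-in posterior sample of a parameter

theta_self = draws.mean()                            # squared-error loss -> posterior mean
a = 1.5                                              # LINEX asymmetry parameter
theta_linex = -np.log(np.mean(np.exp(-a * draws))) / a  # LINEX Bayes estimator
```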
19 pages, 299 KiB  
Article
A Bayesian Approach to Step-Stress Partially Accelerated Life Testing for a Novel Lifetime Distribution
by Mervat K. Abd Elaal, Hebatalla H. Mohammad, Zakiah I. Kalantan, Abeer A. EL-Helbawy, Gannat R. AL-Dayian, Sara M. Behairy and Reda M. Refaey
Axioms 2025, 14(6), 476; https://doi.org/10.3390/axioms14060476 - 19 Jun 2025
Viewed by 248
Abstract
In lifetime testing, the failure times of highly reliable products under normal usage conditions are often prohibitively long, making direct reliability assessment impractical. To overcome this, step-stress partially accelerated life testing is employed to reduce testing time while preserving data quality. This paper develops a Bayesian model based on Type II censored data, assuming that item lifetimes follow the Topp–Leone inverted Kumaraswamy distribution, a flexible alternative to classical lifetime models owing to its ability to capture the various hazard rate shapes observed in real-world reliability data and to model bounded and skewed lifetime data more effectively than traditional models. Bayes estimators of the model parameters and the acceleration factor are derived under both symmetric (balanced squared error) and asymmetric (balanced linear exponential) loss functions using informative priors. The novelty of this work lies in the integration of the Topp–Leone inverted Kumaraswamy distribution within the Bayesian step-stress partially accelerated life testing framework, which has not been explored previously, offering improved modeling capability for complex lifetime data. The proposed method is validated through comprehensive simulation studies under various censoring schemes, demonstrating robustness and superior estimation performance compared to traditional models. A real-data application involving COVID-19 mortality data further illustrates the practical relevance and improved fit of the model. Overall, the results highlight the flexibility, efficiency, and applicability of the proposed Bayesian approach in reliability analysis. Full article
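For reference, one standard form of the balanced loss functions mentioned above and their Bayes estimators (notation assumed; delta_0 is a target estimator such as the MLE and omega in [0, 1] is the balance weight):

```latex
% Balanced squared error loss
% L(\hat{\theta}, \theta) = \omega (\hat{\theta} - \delta_0)^2
%                         + (1 - \omega)(\hat{\theta} - \theta)^2
% yields
\hat{\theta}_{\mathrm{BS}} = \omega \, \delta_0 + (1 - \omega) \, \mathbb{E}(\theta \mid \text{data}),
% and the balanced linear exponential analogue yields
\hat{\theta}_{\mathrm{BL}} = -\frac{1}{a} \ln\!\left[ \omega \, e^{-a \delta_0}
      + (1 - \omega) \, \mathbb{E}\!\left( e^{-a \theta} \mid \text{data} \right) \right].
```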
28 pages, 13036 KiB  
Article
Statistical Analysis of a Generalized Variant of the Weibull Model Under Unified Hybrid Censoring with Applications to Cancer Data
by Mazen Nassar, Refah Alotaibi and Ahmed Elshahhat
Axioms 2025, 14(6), 442; https://doi.org/10.3390/axioms14060442 - 5 Jun 2025
Viewed by 429
Abstract
This paper investigates an understudied generalization of the classical exponential, Rayleigh, and Weibull distributions, known as the power generalized Weibull distribution, particularly in the context of censored data. Characterized by one scale parameter and two shape parameters, the proposed model offers enhanced flexibility for modeling diverse lifetime data patterns and hazard rate behaviors. Notably, its hazard rate function can exhibit five distinct shapes, including upside-down bathtub and bathtub shapes. The study focuses on classical and Bayesian estimation frameworks for the model parameters and associated reliability metrics under a unified hybrid censoring scheme. Methodologies include both point estimation (maximum likelihood and posterior mean estimators) and interval estimation (approximate confidence intervals and Bayesian credible intervals). To evaluate the performance of these estimators, a comprehensive simulation study is conducted under varied experimental conditions. Furthermore, two empirical applications on real-world cancer datasets underscore the efficacy of the proposed estimation methods and the practical viability and flexibility of the explored model compared to eleven other existing lifespan models. Full article
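The five hazard shapes can be explored numerically under one common parameterization of the power generalized Weibull, S(t) = exp{1 - (1 + (t/sigma)^nu)^gamma}; conventions vary across papers, and the parameter choices below are illustrative only.

```python
import numpy as np

def pgw_hazard(t, sigma, nu, gamma):
    """Hazard h(t) = -d/dt log S(t) for S(t) = exp{1 - (1 + (t/sigma)^nu)^gamma}."""
    u = (t / sigma) ** nu
    return gamma * (nu / sigma) * (t / sigma) ** (nu - 1) * (1 + u) ** (gamma - 1)

t = np.linspace(0.05, 5, 200)
shape_a = pgw_hazard(t, sigma=1.0, nu=0.8, gamma=3.0)  # one candidate (bathtub-like) shape
shape_b = pgw_hazard(t, sigma=1.0, nu=2.0, gamma=0.3)  # another candidate (unimodal) shape
```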
23 pages, 1370 KiB  
Article
Machine Learning-Based Identification of Phonological Biomarkers for Speech Sound Disorders in Saudi Arabic-Speaking Children
by Deema F. Turki and Ahmad F. Turki
Diagnostics 2025, 15(11), 1401; https://doi.org/10.3390/diagnostics15111401 - 31 May 2025
Viewed by 638
Abstract
Background/Objectives: This study investigates the application of machine learning (ML) techniques in diagnosing speech sound disorders (SSDs) in Saudi Arabic-speaking children, with a specific focus on phonological biomarkers, particularly Infrequent Variance (InfrVar), to improve diagnostic accuracy. SSDs are a significant concern in pediatric speech pathology, affecting an estimated 10–15% of preschool-aged children worldwide. However, accurate diagnosis remains challenging, especially in linguistically diverse populations. Traditional diagnostic tools, such as the Percentage of Consonants Correct (PCC), often fail to capture subtle phonological variations. This study explores the potential of machine learning models to enhance diagnostic accuracy by incorporating culturally relevant phonological biomarkers like InfrVar, aiming to develop a more effective diagnostic approach for SSDs in Saudi Arabic-speaking children. Methods: Data from 235 Saudi Arabic-speaking children aged 2;6 to 5;11 years were analyzed using several machine learning models: Random Forest, Support Vector Machine (SVM), XGBoost, Logistic Regression, K-Nearest Neighbors, and Naïve Bayes. The dataset was used to classify speech patterns into four categories: Atypical, Typical Development (TD), Articulation, and Delay. Phonological features such as Phonological Variance (PhonVar), InfrVar, and Percentage of Consonants Correct (PCC) were used as key variables. SHapley Additive exPlanations (SHAP) analysis was employed to interpret the contributions of individual features to model predictions. Results: The XGBoost and Random Forest models demonstrated the highest performance, with an accuracy of 91.49% and an AUC of 99.14%. SHAP analysis revealed that articulation patterns and phonological patterns were the most influential features for distinguishing between Atypical and TD categories. The K-Means clustering approach identified four distinct subgroups based on speech development patterns: TD (46.61%), Articulation (25.42%), Atypical (18.64%), and Delay (9.32%). Conclusions: Machine learning models, particularly XGBoost and Random Forest, effectively classified speech development categories in Saudi Arabic-speaking children. This study highlights the importance of incorporating culturally specific phonological biomarkers like InfrVar and PhonVar to improve diagnostic precision for SSDs. These findings lay the groundwork for the development of AI-assisted diagnostic tools tailored to diverse linguistic contexts, enhancing early intervention strategies in pediatric speech pathology. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
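A compact sketch of the classification-plus-interpretation pipeline: a tree-ensemble classifier over phonological measures explained with SHAP's TreeExplainer. The feature columns and labels below are simulated placeholders, not the study's dataset.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(235, 3))        # columns standing in for PCC, InfrVar, PhonVar
y = rng.integers(0, 4, size=235)     # 0=Atypical, 1=TD, 2=Articulation, 3=Delay

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)   # per-class feature attributions
```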