Search Results (2,327)

Search Parameters:
Keywords = bayes

59 pages, 11250 KiB  
Article
Automated Analysis of Vertebral Body Surface Roughness for Adult Age Estimation: Ellipse Fitting and Machine-Learning Approach
by Erhan Kartal and Yasin Etli
Diagnostics 2025, 15(14), 1794; https://doi.org/10.3390/diagnostics15141794 - 16 Jul 2025
Abstract
Background/Objectives: Vertebral degenerative features are promising but often subjectively scored indicators for adult age estimation. We evaluated an objective surface roughness metric, the “average distance to the fitted ellipse” score (DS), calculated automatically for every vertebra from C7 to S1 on routine CT images. Methods: CT scans of 176 adults (94 males, 82 females; 21–94 years) were retrospectively analyzed. For each vertebra, the mean orthogonal deviation of the anterior superior endplate from an ideal ellipse was extracted. Sex-specific multiple linear regression served as a baseline; support vector regression (SVR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve-Bayes pseudo-regressor (GNB-R) were tuned with 10-fold cross-validation and evaluated on a 20% hold-out set. Performance was quantified with the standard error of the estimate (SEE). Results: DS values correlated moderately to strongly with age (peak r = 0.60 at L3–L5). Linear regression explained 40% (males) and 47% (females) of age variance (SEE ≈ 11–12 years). Non-parametric learners improved precision: RF achieved an SEE of 8.49 years in males (R2 = 0.47), whereas k-NN attained 10.8 years (R2 = 0.45) in females. Conclusions: Automated analysis of vertebral cortical roughness provides a transparent, observer-independent means of estimating adult age with accuracy approaching that of more complex deep learning pipelines. Streamlining image preparation and validating the approach across diverse populations are the next steps toward forensic adoption.
(This article belongs to the Special Issue New Advances in Forensic Radiology and Imaging)
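The DS metric above (the mean orthogonal deviation of an endplate contour from a fitted ellipse) can be sketched on synthetic 2D contours. This is only a minimal illustration: it uses an algebraic conic fit and the first-order (Sampson) point-to-conic distance as a stand-in for the authors' exact procedure, which is not specified in the abstract.

```python
import numpy as np

def fit_conic(points):
    # Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0:
    # the smallest right singular vector minimizes ||D @ theta|| with ||theta|| = 1
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

def distance_score(points):
    # Mean first-order (Sampson) distance |Q(p)| / ||grad Q(p)|| from each point
    # to the fitted conic, used here as a proxy for the paper's DS
    a, b, c, d, e, f = fit_conic(points)
    x, y = points[:, 0], points[:, 1]
    Q = a * x**2 + b * x * y + c * y**2 + d * x + e * y + f
    grad = np.hypot(2 * a * x + b * y + d, b * x + 2 * c * y + e)
    return float(np.mean(np.abs(Q) / grad))

# Synthetic endplate contours: an ideal ellipse and a "rougher" noisy copy
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
smooth = np.column_stack([30 * np.cos(t), 18 * np.sin(t)])
rough = smooth + rng.normal(0, 0.8, smooth.shape)
print(f"DS smooth={distance_score(smooth):.4f}  rough={distance_score(rough):.4f}")
```

A rougher surface yields a larger DS, which is the property the age models exploit.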

20 pages, 351 KiB  
Article
Multi-Level Depression Severity Detection with Deep Transformers and Enhanced Machine Learning Techniques
by Nisar Hussain, Amna Qasim, Gull Mehak, Muhammad Zain, Grigori Sidorov, Alexander Gelbukh and Olga Kolesnikova
AI 2025, 6(7), 157; https://doi.org/10.3390/ai6070157 - 15 Jul 2025
Abstract
Depression is now one of the most common mental health concerns in the digital era, calling for powerful computational tools for its detection and its level of severity estimation. A multi-level depression severity detection framework in the Reddit social media network is proposed in this study, and posts are classified into four levels: minimum, mild, moderate, and severe. We take a dual approach using classical machine learning (ML) algorithms and recent Transformer-based architectures. For the ML track, we build ten classifiers, including Logistic Regression, SVM, Naive Bayes, Random Forest, XGBoost, Gradient Boosting, K-NN, Decision Tree, AdaBoost, and Extra Trees, with two widely used embedding methods, Word2Vec and GloVe, and we fine-tune them for mental health text classification. Of these, XGBoost yields the highest F1-score of 94.01 using GloVe embeddings. For the deep learning track, we fine-tune ten Transformer models, covering BERT, RoBERTa, XLM-RoBERTa, MentalBERT, BioBERT, RoBERTa-large, DistilBERT, DeBERTa, Longformer, and ALBERT. The highest performance was achieved by the MentalBERT model, with an F1-score of 97.31, followed by RoBERTa (96.27) and RoBERTa-large (96.14). Our results demonstrate that, to the best of the authors’ knowledge, domain-transferred Transformers outperform non-Transformer-based ML methods in capturing subtle linguistic cues indicative of different levels of depression, thereby highlighting their potential for fine-grained mental health monitoring in online settings.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
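Several abstracts in this listing pair a classical Naive Bayes text classifier with word features; a from-scratch multinomial naive Bayes with Laplace smoothing shows the core mechanism. The four-document toy corpus below is invented and merely stands in for the labeled Reddit posts; none of the paper's data or tuning is reproduced.

```python
from collections import Counter, defaultdict
import math

class MultinomialNB:
    """Bag-of-words multinomial naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d.split()}
        self.prior = Counter(labels)                    # class frequencies
        self.counts = defaultdict(Counter)              # per-class word counts
        for d, y in zip(docs, labels):
            self.counts[y].update(d.split())
        self.total = {y: sum(c.values()) for y, c in self.counts.items()}
        return self

    def predict(self, doc):
        n, V = sum(self.prior.values()), len(self.vocab)

        def logp(y):
            lp = math.log(self.prior[y] / n)
            for w in doc.split():
                lp += math.log((self.counts[y][w] + 1) / (self.total[y] + V))
            return lp

        return max(self.prior, key=logp)

# Invented toy corpus standing in for labeled posts, one per severity level
docs = ["feeling fine today", "a bit down lately", "cannot focus so sad",
        "hopeless and exhausted every day"]
labels = ["minimum", "mild", "moderate", "severe"]
clf = MultinomialNB().fit(docs, labels)
print(clf.predict("sad and cannot focus"))
```

Each class score is the log prior plus smoothed log likelihoods of the document's words; the highest-scoring class wins.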

15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas based on Bayes’ theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. The proposed approach significantly improves the prediction accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data are incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices.
(This article belongs to the Section Ocean Engineering)
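The Bayesian updating step described above (prior distributions built by Monte Carlo simulation, then refined with penetration monitoring data) can be sketched with importance weighting. The resistance formula, noise levels, and the proportionality assumption linking the monitoring datum to the peak are all invented placeholders, not the paper's centrifuge-calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior for peak penetration resistance via Monte Carlo over an uncertain
# stiff-layer strength and a multiplicative model-error factor
# (all numbers are illustrative, not the paper's centrifuge database)
su = rng.normal(40.0, 8.0, 50_000)            # undrained shear strength, kPa
model_err = rng.lognormal(0.0, 0.15, 50_000)  # model uncertainty factor
q_peak = 6.0 * su * model_err                 # toy bearing-capacity formula, kPa

# Monitoring datum: early-penetration resistance assumed proportional to the
# eventual peak (factor `ratio`) plus Gaussian measurement noise
obs, sigma_obs, ratio = 150.0, 10.0, 0.6
weights = np.exp(-0.5 * ((ratio * q_peak - obs) / sigma_obs) ** 2)
weights /= weights.sum()

prior_mean = q_peak.mean()
post_mean = np.sum(weights * q_peak)          # Bayes update via importance weights
post_std = np.sqrt(np.sum(weights * (q_peak - post_mean) ** 2))
print(f"prior {prior_mean:.1f}  posterior {post_mean:.1f} +/- {post_std:.1f}")
```

The posterior concentrates around the value implied by the observation, with a spread narrower than the prior, which is the sense in which monitoring data "refine" the prediction.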

29 pages, 1234 KiB  
Article
Automatic Detection of the CaRS Framework in Scholarly Writing Using Natural Language Processing
by Olajide Omotola, Nonso Nnamoko, Charles Lam, Ioannis Korkontzelos, Callum Altham and Joseph Barrowclough
Electronics 2025, 14(14), 2799; https://doi.org/10.3390/electronics14142799 - 11 Jul 2025
Abstract
Many academic introductions suffer from inconsistencies and a lack of comprehensive structure, often failing to effectively outline the core elements of the research. This not only impacts the clarity and readability of the article but also hinders the communication of its significance and objectives to the intended audience. This study aims to automate the CaRS (Creating a Research Space) model using machine learning and natural language processing techniques. We conducted a series of experiments using a custom-developed corpus of 50 biology research article introductions, annotated with rhetorical moves and steps. The dataset was used to evaluate the performance of four classification algorithms: Prototypical Network (PN), Support Vector Machines (SVM), Naïve Bayes (NB), and Random Forest (RF), in combination with six embedding models: Word2Vec, GloVe, BERT, GPT-2, Llama-3.2-3B, and TEv3-small. Multiple experiments were carried out to assess performance at both the move and step levels using 5-fold cross-validation. Evaluation metrics included accuracy and weighted F1-score, with comprehensive results provided. Results show that the SVM classifier, when paired with Llama-3.2-3B embeddings, consistently achieved the highest performance across multiple tasks when trained on the preprocessed dataset, with 79% accuracy and weighted F1-score on rhetorical moves and strong results on M2 steps (75% accuracy and weighted F1-score). While other combinations showed promise, particularly NB and RF with newer embeddings, none matched the consistency of the SVM–Llama pairing. Compared to existing benchmarks, our model achieves similar or better performance; however, direct comparison is limited due to differences in datasets and experimental setups. Despite the unavailability of the benchmark dataset, our findings indicate that SVM is an effective choice for rhetorical classification, even in few-shot learning scenarios.

18 pages, 2591 KiB  
Article
The Impact of Compound Drought and Heatwave Events on the Gross Primary Productivity of Rubber Plantations
by Qinggele Bao, Ziqin Wang and Zhongyi Sun
Forests 2025, 16(7), 1146; https://doi.org/10.3390/f16071146 - 11 Jul 2025
Abstract
Global climate change has increased the frequency of compound drought–heatwave events (CDHEs), seriously threatening tropical forest ecosystems. However, due to the complex structure of natural tropical forests, related research remains limited. To address this, we focused on rubber plantations on Hainan Island, which have simpler structures, to explore the impacts of CDHEs on their primary productivity. We used Pearson and Spearman correlation analyses to select the optimal combination of drought and heatwave indices. Then, we constructed a Compound Drought–Heatwave Index (CDHI) using Copula functions to describe the temporal patterns of CDHEs. Finally, we applied a Bayes–Copula conditional probability model to estimate the probability of gross primary productivity (GPP) loss under CDHE conditions. The main findings are as follows: (1) The Standardized Precipitation Evapotranspiration Index (SPEI-3) and Standardized Temperature Index (STI-1) formed the best index combination. (2) The CDHI successfully identified typical CDHEs in 2001, 2003–2005, 2010, 2015–2016, and 2020. (3) Temporally, CDHEs significantly increased the probability of GPP loss in April and May (0.58 and 0.64, respectively), while the rainy season showed a reverse trend due to water buffering (lowest in October, at 0.19). (4) Spatially, the northwest region showed higher GPP loss probabilities, likely due to topographic uplift. This study reveals how tropical plantations respond to compound climate extremes and provides theoretical support for the monitoring and management of tropical ecosystems.
(This article belongs to the Section Forest Meteorology and Climate Change)
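The month-wise conditional probabilities above rest on Bayes' rule; the snippet below illustrates the discrete identity P(loss | CDHE) = P(CDHE | loss) P(loss) / P(CDHE) on synthetic indicator series. This is only the elementary Bayes step, not the paper's Bayes–Copula conditional probability model, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly indicator series (invented, not the Hainan data):
# compound drought-heatwave occurrence and GPP loss that co-occur often
n = 600
cdhe = rng.random(n) < 0.15
gpp_loss = np.where(cdhe, rng.random(n) < 0.6, rng.random(n) < 0.2)

# Bayes' rule: P(loss | CDHE) = P(CDHE | loss) * P(loss) / P(CDHE)
p_loss = gpp_loss.mean()
p_cdhe = cdhe.mean()
p_cdhe_given_loss = cdhe[gpp_loss].mean()
p_loss_given_cdhe = p_cdhe_given_loss * p_loss / p_cdhe

# The identity reproduces the direct empirical conditional estimate
print(p_loss_given_cdhe, gpp_loss[cdhe].mean())
```

On real data the copula model replaces these empirical counts with a fitted joint distribution, but the conditional-probability logic is the same.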

38 pages, 2956 KiB  
Review
The Use of Selected Machine Learning Methods in Dairy Cattle Farming: A Review
by Wilhelm Grzesiak, Daniel Zaborski, Marcin Pluciński, Magdalena Jędrzejczak-Silicka, Renata Pilarczyk and Piotr Sablik
Animals 2025, 15(14), 2033; https://doi.org/10.3390/ani15142033 - 10 Jul 2025
Abstract
The aim of this review was to present selected machine learning (ML) algorithms used in dairy cattle farming in recent years (2020–2024). A description of ML methods (linear and logistic regression, classification and regression trees, chi-squared automatic interaction detection, random forest, AdaBoost, support vector machines, k-nearest neighbors, naive Bayes classifier, multivariate adaptive regression splines, artificial neural networks, including deep neural networks and convolutional neural networks, as well as Gaussian mixture models and cluster analysis), with some examples of their application in various aspects of dairy cattle breeding and husbandry, is provided. In addition, the stages of model construction and implementation, as well as the performance indicators for regression and classification models, are described. Finally, time trends in the popularity of ML methods in dairy cattle farming are briefly discussed.
(This article belongs to the Special Issue Machine Learning Methods and Statistics in Ruminant Farming)

25 pages, 2297 KiB  
Article
Detecting Fake News in Urdu Language Using Machine Learning, Deep Learning, and Large Language Model-Based Approaches
by Muhammad Shoaib Farooq, Syed Muhammad Asadullah Gilani, Muhammad Faraz Manzoor and Momina Shaheen
Information 2025, 16(7), 595; https://doi.org/10.3390/info16070595 - 10 Jul 2025
Abstract
Fake news is false or misleading information that looks like real news and spreads through traditional and social media. It has a big impact on our social lives, especially in politics. In Pakistan, where Urdu is the main language, finding fake news in Urdu is difficult because few effective systems exist for the language. This study aims to solve this problem by creating a detailed process and training models using machine learning, deep learning, and large language models (LLMs). The research uses methods that look at the features of documents and classes to detect fake news in Urdu. Different models were tested, including machine learning models like Naïve Bayes and Support Vector Machine (SVM), as well as deep learning models like Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), which used embedding techniques. The study also used advanced models like BERT and GPT to improve the detection process. These models were first evaluated on the Bend-the-Truth dataset, where CNN achieved an F1 score of 72%, Naïve Bayes scored 78%, and the BERT Transformer achieved the highest F1 score of 79%. To further validate the approach, the models were tested on a more diverse dataset, Ax-to-Grind, where both SVM and LSTM achieved an F1 score of 89%, while BERT outperformed them with an F1 score of 93%.

19 pages, 1039 KiB  
Article
Prediction of Parkinson Disease Using Long-Term, Short-Term Acoustic Features Based on Machine Learning
by Mehdi Rashidi, Serena Arima, Andrea Claudio Stetco, Chiara Coppola, Debora Musarò, Marco Greco, Marina Damato, Filomena My, Angela Lupo, Marta Lorenzo, Antonio Danieli, Giuseppe Maruccio, Alberto Argentiero, Andrea Buccoliero, Marcello Dorian Donzella and Michele Maffia
Brain Sci. 2025, 15(7), 739; https://doi.org/10.3390/brainsci15070739 - 10 Jul 2025
Abstract
Background: Parkinson’s disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease, affecting millions of individuals worldwide. PD is characterized by the onset of marked motor symptomatology in association with several non-motor manifestations. The clinical phase of the disease is usually preceded by a long prodromal phase, devoid of overt motor symptomatology but often showing conditions such as sleep disturbance, constipation, anosmia, and phonatory changes. To date, speech analysis appears to be a promising digital biomarker that may anticipate clinical PD by as much as 10 years before onset, as well as serving as a useful prognostic tool for patient follow-up. Voice analysis is therefore a candidate non-invasive method for distinguishing PD patients from healthy subjects (HS). Methods: We conducted a cross-sectional study of voice impairment. A dataset comprising 81 voice samples (41 from healthy individuals and 40 from PD patients) was utilized to train and evaluate common machine learning (ML) models using various types of features, including long-term features (jitter, shimmer, and cepstral peak prominence (CPP)), short-term features (Mel-frequency cepstral coefficients (MFCCs)), and non-standard measurements (pitch period entropy (PPE) and recurrence period density entropy (RPDE)). The study adopted multiple ML algorithms, including random forest (RF), k-nearest neighbors (KNN), decision tree (DT), naïve Bayes (NB), support vector machines (SVM), and logistic regression (LR). Cross-validation was applied to ensure the reliability of performance metrics on train and test subsets. These metrics (accuracy, recall, and precision) help determine the most effective models for distinguishing PD from healthy subjects. Result: Among all the algorithms used in this research, random forest (RF) was the best-performing model, achieving an accuracy of 82.72% with a ROC-AUC score of 89.65%. Although other models, such as support vector machine (SVM), performed respectably with an accuracy of 75.29% and a ROC-AUC score of 82.63%, RF was by far the best when evaluated across all metrics. The k-nearest neighbors (KNN) and decision tree (DT) models performed the worst. Notably, by combining a comprehensive set of long-term, short-term, and non-standard acoustic features, unlike previous studies that typically focused on only a subset, our study achieved higher predictive performance, offering a more robust model for early PD detection. Conclusions: This study highlights the potential of combining advanced acoustic analysis with ML algorithms to develop non-invasive and reliable tools for early PD detection, offering substantial benefits for the healthcare sector.
(This article belongs to the Section Neurodegenerative Diseases)
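Two of the long-term features named above, jitter and shimmer, have simple cycle-to-cycle definitions. A minimal sketch of the common "local" variants, computed on synthetic period and amplitude tracks (illustrative values, not the study's dataset):

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive pitch
    periods, divided by the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()

def local_shimmer(amplitudes):
    """Local shimmer (%): the same ratio applied to cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / a.mean()

# Synthetic glottal-cycle tracks around 125 Hz (0.008 s periods); PD-like
# voices tend to show larger cycle-to-cycle perturbation
rng = np.random.default_rng(3)
steady = 0.008 + rng.normal(0, 1e-5, 100)
unsteady = 0.008 + rng.normal(0, 1e-4, 100)
print(local_jitter(steady), local_jitter(unsteady), local_shimmer([1.0, 1.1, 0.9]))
```

Higher perturbation in the period track produces a proportionally higher jitter value, which is what makes these features discriminative.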

18 pages, 1760 KiB  
Article
Integrating 68Ga-PSMA-11 PET/CT with Clinical Risk Factors for Enhanced Prostate Cancer Progression Prediction
by Joanna M. Wybranska, Lorenz Pieper, Christian Wybranski, Philipp Genseke, Jan Wuestemann, Julian Varghese, Michael C. Kreissl and Jakub Mitura
Cancers 2025, 17(14), 2285; https://doi.org/10.3390/cancers17142285 - 9 Jul 2025
Abstract
Background/Objectives: This study evaluates whether combining 68Ga-PSMA-11 PET/CT-derived imaging biomarkers with clinical risk factors improves the prediction of early biochemical recurrence (eBCR) or clinical progression in patients with high-risk prostate cancer (PCa) after primary treatment, using machine learning (ML) models. Methods: We analyzed data from 93 high-risk PCa patients who underwent 68Ga-PSMA-11 PET/CT and received primary treatment at a single center. Two predictive models were developed: a logistic regression (LR) model and an ML derived probabilistic graphical model (PGM) based on a naïve Bayes framework. Both models were compared against each other and against the CAPRA risk score. The models’ input variables were selected based on statistical analysis and domain expertise, including a literature review and expert input. A decision tree was derived from the PGM to translate its probabilistic reasoning into a transparent classifier. Results: The five key input variables were as follows: binarized CAPRA score, maximal intraprostatic PSMA uptake intensity (SUVmax), presence of bone metastases, nodal involvement at common iliac bifurcation, and seminal vesicle infiltration. The PGM achieved superior predictive performance with a balanced accuracy of 0.73, sensitivity of 0.60, and specificity of 0.86, substantially outperforming both the LR (balanced accuracy: 0.50, sensitivity: 0.00, specificity: 1.00) and CAPRA (balanced accuracy: 0.59, sensitivity: 0.20, specificity: 0.99). The decision tree provided an explainable classifier with CAPRA as a primary branch node, followed by SUVmax and specific PET-detected tumor sites. Conclusions: Integrating 68Ga-PSMA-11 imaging biomarkers with clinical parameters, such as CAPRA, significantly improves models to predict progression in patients with high-risk PCa undergoing primary treatment. The PGM offers superior balanced accuracy and enables risk stratification that may guide personalized treatment decisions.
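The PGM above is built on a naive Bayes framework over five largely binary inputs. A from-scratch Bernoulli naive Bayes over invented binary risk-factor flags shows the mechanics; the data, smoothing choice, and feature coding are illustrative assumptions, not the authors' model.

```python
import numpy as np

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Naive Bayes over binary features with Laplace smoothing: returns the
    class labels, class priors, and per-class feature probabilities."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    theta = np.array([(X[y == c].sum(0) + alpha) / ((y == c).sum() + 2 * alpha)
                      for c in classes])
    return classes, priors, theta

def predict_proba(x, priors, theta):
    # Posterior over classes for one binary feature vector x
    logp = np.log(priors) + (x * np.log(theta) + (1 - x) * np.log(1 - theta)).sum(1)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Invented mini-cohort with binary flags in the spirit of the paper's five
# inputs (high CAPRA, high SUVmax, bone metastases, nodal involvement at the
# common iliac bifurcation, seminal vesicle infiltration)
X = np.array([[1, 1, 1, 0, 1], [1, 1, 0, 1, 0], [1, 0, 1, 0, 0], [0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0], [0, 0, 0, 1, 0], [1, 1, 1, 1, 1], [0, 0, 1, 0, 0]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])        # 1 = progression
classes, priors, theta = fit_bernoulli_nb(X, y)
p = predict_proba(np.array([1, 1, 1, 0, 0]), priors, theta)
print(dict(zip(classes.tolist(), p.round(3))))
```

A patient carrying several high-risk flags receives a correspondingly high posterior probability of progression; the paper's decision tree is a readable approximation of exactly this kind of posterior computation.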

14 pages, 1182 KiB  
Article
Endocranial Morphology in Metopism
by Silviya Nikolova, Diana Toneva and Gennady Agre
Biology 2025, 14(7), 835; https://doi.org/10.3390/biology14070835 - 9 Jul 2025
Abstract
Comparative investigations on homogenous cranial series have demonstrated that metopism is linked to a specific configuration of the cranial vault; however, there are no comparative data concerning the endocranial morphology in this condition. This study aimed to compare the endocranial space in metopic and control crania using morphometric analysis and machine learning algorithms. For this purpose, a series of 230 (184 control and 46 metopic) dry crania of contemporary adult Bulgarian males were scanned using an industrial µCT system. The 3D coordinates of 47 landmarks were collected on the endocranial surface. All possible measurements between the landmarks were calculated as Euclidean distances. The resultant 1081 measurements represented the initial dataset, which was reduced to smaller datasets by applying different criteria. The derived datasets were used for learning a set of classification models by machine learning algorithms. The morphometric analysis showed that in the metopic crania some segments of the anterior and middle cranial fossae were significantly longer, and the landmark endobregma was significantly closer to the anterior and middle sections of the cranial base. The most accurate model, with a classification accuracy of 85%, was a Naive Bayes model learned on a dataset of 69 attributes assembled after an attribute selection procedure.
(This article belongs to the Section Medical Biology)
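The 1081 measurements follow directly from taking all unique pairs of 47 landmarks, since C(47, 2) = 47 · 46 / 2 = 1081. A quick check with random stand-in coordinates:

```python
import numpy as np
from itertools import combinations

# Stand-in coordinates for 47 endocranial landmarks (random, for illustration)
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(47, 3))

# All unique inter-landmark Euclidean distances
dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                  for i, j in combinations(range(47), 2)])
print(len(dists))  # C(47, 2) = 47 * 46 / 2 = 1081
```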

13 pages, 1292 KiB  
Article
Impact of Sex on Rehospitalization Rates and Mortality of Patients with Heart Failure with Preserved Ejection Fraction: Differences Between an Analysis Stratified by Sex and a Global Analysis
by Victoria Cendrós, Mar Domingo, Elena Navas, Miguel Ángel Muñoz, Antoni Bayés-Genís and José María Verdú-Rotellar
J. Pers. Med. 2025, 15(7), 297; https://doi.org/10.3390/jpm15070297 - 8 Jul 2025
Abstract
Background: Differences in the prognosis and associated factors in patients with heart failure with preserved ejection fraction (HFpEF) according to sex remain uncertain. Objective: The objective was to determine the relevance of sex-stratified predictive models in determining prognosis in HFpEF patients. Methods: This was a retrospective, multicenter study of patients previously hospitalized with ejection fraction ≥ 50% (HFpEF) using data from the SIDIAP database. The endpoints were mortality and rehospitalization. Predictive models were developed. Results: We identified 2895 patients with HFpEF (57% female), with a mean age of 77 (standard deviation [SD] 9.7) years and a median follow-up of 2.0 (IQR 1.0–9.0) years. In the overall analysis, male sex was associated with a higher risk of mortality (HR 1.26, 95% CI 1.06–1.49, p = 0.008) and rehospitalization (HR 1.14, 95% CI 1.03–1.33, p = 0.04). After sex stratification, the mortality rates per 1000 patient years were 10.40 (95% CI 9.34–11.46) in men and 10.21 (95% CI 9.30–11.11) in women (p = 0.7), and the rehospitalization rates were 17.11 (95% CI 16.63–18.58) in men and 17.29 (95% CI 16.01–18.57) in women (p = 0.23). In men, the factors related to mortality were age (hazard ratio [HR] 3.14, 95% confidence interval [CI] 2.43–4.06) and hemoglobin (0.84, 0.79–0.89), while in women, they were age (HR 2.92, 95% CI 2.17–3.92), BMI < 30 kg/m2 (1.7, 1.37–2.11), diuretics (1.46, 1.11–1.94), and a Charlson index > 2 (1.86, 1.02–3.38). Rehospitalization in men was associated with age (HR 1.58, 95% CI 1.23–2.02), BMI < 30 kg/m2 (0.75, 0.58–0.95), atrial fibrillation (1.36, 1.07–1.73), hemoglobin (0.91, 0.87–0.95), and coronary disease (1.35, 1.01–1.81). In women, the factors were age (HR 1.33, 95% CI 1.0–1.64), atrial fibrillation (1.57, 1.30–1.91), hemoglobin (0.86, 0.80–0.92), and diuretics (1.37, 1.08–1.73). Conclusions: Non-stratified analyses underestimate the poor prognosis in women with HFpEF. Future studies should include analyses stratified by sex.
(This article belongs to the Section Sex, Gender and Hormone Based Medicine)

18 pages, 359 KiB  
Article
On the Decision-Theoretic Foundations and the Asymptotic Bayes Risk of the Region of Practical Equivalence for Testing Interval Hypotheses
by Riko Kelter
Stats 2025, 8(3), 56; https://doi.org/10.3390/stats8030056 - 8 Jul 2025
Abstract
Testing interval hypotheses is of huge relevance in the biomedical and cognitive sciences; for example, in clinical trials. Frequentist approaches include the proposal of equivalence tests, which have been used to study if there is a predetermined meaningful treatment effect. In the Bayesian paradigm, two popular approaches exist: The first is the region of practical equivalence (ROPE), which has become increasingly popular in the cognitive sciences. The second is the Bayes factor for interval null hypotheses, which was proposed by Morey et al. One advantage of the ROPE procedure is that, in contrast to the Bayes factor, it is quite robust to the prior specification. However, while the ROPE is conceptually appealing, it lacks a clear decision-theoretic foundation like the Bayes factor. In this paper, a decision-theoretic justification for the ROPE procedure is derived for the first time, which shows that the Bayes risk of a decision rule based on the highest-posterior density interval (HPD) and the ROPE is asymptotically minimized for increasing sample size. To show this, a specific loss function is introduced. This result provides an important decision-theoretic justification for testing the interval hypothesis in the Bayesian approach based on the ROPE and HPD, in particular, when sample size is large.
(This article belongs to the Section Bayesian Methods)
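The HPD-plus-ROPE decision rule discussed above can be sketched from posterior samples: accept the interval null when the HPD lies inside the ROPE, reject when the two are disjoint, and withhold judgment otherwise. The ROPE bounds and the sample posteriors below are illustrative choices, not values from the paper.

```python
import numpy as np

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the posterior samples."""
    s = np.sort(samples)
    k = int(np.ceil(mass * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

def rope_decision(samples, rope=(-0.1, 0.1), mass=0.95):
    """Accept the interval null if the HPD lies inside the ROPE, reject if
    they are disjoint, and withhold a decision otherwise."""
    lo, hi = hpd_interval(samples, mass)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept"
    if hi < rope[0] or lo > rope[1]:
        return "reject"
    return "undecided"

rng = np.random.default_rng(42)
print(rope_decision(rng.normal(0.0, 0.02, 10_000)),   # posterior mass inside ROPE
      rope_decision(rng.normal(0.5, 0.05, 10_000)))   # mass far outside ROPE
```

As the posterior concentrates with growing sample size, the HPD shrinks and the "undecided" region vanishes, which is the regime in which the paper's asymptotic Bayes-risk result applies.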

16 pages, 1037 KiB  
Article
Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes
by Xinhai Li, Chenxu Meng, Heng Zhou, Yi Guo, Bowen Xue, Tianzuo Yu and Yunan Lu
Electronics 2025, 14(13), 2736; https://doi.org/10.3390/electronics14132736 - 7 Jul 2025
Abstract
Label Distribution Learning (LDL) has emerged as a powerful paradigm for addressing label ambiguity, offering a more nuanced quantification of the instance–label relationship compared to traditional single-label and multi-label learning approaches. This paper focuses on the challenge of noisy label distributions, which are ubiquitous in real-world applications due to annotator subjectivity, algorithmic biases, and experimental errors. Existing LDL algorithms often assume a linear combination of true and random label distributions when modeling the noisy label distributions, an oversimplification that fails to capture the practical generation processes of noisy label distributions. Therefore, this paper introduces an assumption that the noise in label distributions primarily arises from the semantic confusion between labels and proposes a novel generative label distribution learning algorithm to model the confusion-based generation process of both the feature data and the noisy label distribution data. The proposed model is inferred using variational methods and its effectiveness is demonstrated through extensive experiments across various real-world datasets, showcasing its superiority in handling noisy label distributions.
(This article belongs to the Special Issue Neural Networks: From Software to Hardware)
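The abstract contrasts two noise assumptions: a linear mixture of the true and a random label distribution versus noise generated by semantic confusion between labels. A minimal NumPy sketch of the two assumptions (the distributions and the confusion matrix are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# True label distribution for one instance over 4 labels.
d_true = np.array([0.6, 0.25, 0.1, 0.05])

# Linear-mixing assumption criticized in the paper:
# noisy = (1 - eps) * true + eps * random.
eps = 0.3
d_rand = rng.dirichlet(np.ones(4))
d_linear = (1 - eps) * d_true + eps * d_rand

# Confusion-based assumption: description degrees leak to
# semantically similar labels via a row-stochastic confusion
# matrix C, where C[i, j] = P(label j observed | label i true).
C = np.array([
    [0.8, 0.15, 0.05, 0.0],
    [0.1, 0.80, 0.10, 0.0],
    [0.0, 0.10, 0.80, 0.1],
    [0.0, 0.00, 0.20, 0.8],
])
d_confused = d_true @ C  # noise concentrates on similar labels

assert np.isclose(d_confused.sum(), 1.0)  # still a valid distribution
```

Under the confusion model, the noisy mass stays near semantically related labels instead of being spread uniformly, which is the structure the paper's generative model is designed to exploit.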
7 pages, 259 KiB  
Perspective
Internal Quality Control in Medical Laboratories: Westgard and the Others
by Marco Pradella
Laboratories 2025, 2(3), 15; https://doi.org/10.3390/laboratories2030015 - 5 Jul 2025
Abstract
This review recalls some ISO 15189:2022 requirements for the management of examination results and emerging alternatives to internal quality control (IQC), in relation to Italian Society of Clinical Pathology and Laboratory Medicine (SIPMeL) Recommendation Q19. We observed contrasting “metrological”, or rather “tracealogic”, and “statistical” approaches. Based on ISO 15189, SIPMeL Recommendation Q19 enhances IQC with the moving average of patient sample results (MA). In the veterinary field, the procedure of QC with repeat testing on patient samples (RPT-QC) has met with some success. The “Bayesian approach” to IQC distinguishes between a priori probability, evidential probability (the data), and a posteriori probability (the IQC rules). SIPMeL Recommendation Q19 strictly adheres to ISO 15189:2022. SIPMeL Q19 calls for abandoning the 1-2s rule, using appropriate computer tools rather than control charts alone, and reducing false positives to very low frequencies. Alternatives to IQC that use patient results and the Bayesian approach are compatible with ISO 15189 and SIPMeL Q19. In contrast, the alternative using material designed for traceability with assigned values is not compatible with the ISO standard. Full article
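Two of the QC procedures named above can be sketched in a few lines of standard-library Python: the Westgard 1-2s warning rule that Q19 recommends abandoning as a rejection rule, and the moving average of patient results (MA). This is a didactic sketch, not an implementation of the SIPMeL Q19 procedures:

```python
from statistics import mean

def rule_1_2s(value, target, sd):
    """Westgard 1-2s rule: flag a control result that deviates
    from the target by more than 2 standard deviations."""
    return abs(value - target) > 2 * sd

def moving_average(results, window):
    """Moving average of patient sample results (MA), the
    patient-based QC alternative enabled by ISO 15189."""
    return [mean(results[i - window:i])
            for i in range(window, len(results) + 1)]

# With a 2 SD limit, roughly 4.6% of in-control results are
# flagged, which is why 1-2s produces frequent false positives.
print(rule_1_2s(105.0, 100.0, 2.0))                 # → True (2.5 SD high)
print(moving_average([98, 102, 100, 104, 96], 3))   # → [100, 102, 100]
```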
26 pages, 4907 KiB  
Article
A Novel Approach Utilizing Bagging, Histogram Gradient Boosting, and Advanced Feature Selection for Predicting the Onset of Cardiovascular Diseases
by Norma Latif Fitriyani, Muhammad Syafrudin, Nur Chamidah, Marisa Rifada, Hendri Susilo, Dursun Aydin, Syifa Latif Qolbiyani and Seung Won Lee
Mathematics 2025, 13(13), 2194; https://doi.org/10.3390/math13132194 - 4 Jul 2025
Abstract
Cardiovascular diseases (CVDs) rank among the leading global causes of mortality, underscoring the necessity for early detection and effective management. This research presents a novel prediction model for CVDs utilizing a bagging algorithm that incorporates histogram gradient boosting as the estimator. This study leverages three preprocessed cardiovascular datasets, employing the Local Outlier Factor technique for outlier removal and the information gain method for feature selection. Through rigorous experimentation, the proposed model demonstrates superior performance compared to conventional machine learning approaches, such as Logistic Regression, Support Vector Classification, Gaussian Naïve Bayes, Multi-Layer Perceptron, k-Nearest Neighbors, Random Forest, AdaBoost, Gradient Boosting, and Histogram Gradient Boosting. Evaluation metrics, including precision, recall, F1 score, accuracy, and AUC, yielded impressive results: 93.90%, 98.83%, 96.30%, 96.25%, and 0.9916 for dataset I; 94.17%, 99.05%, 96.54%, 96.48%, and 0.9931 for dataset II; and 89.81%, 82.40%, 85.91%, 86.66%, and 0.9274 for dataset III. The findings indicate that the proposed prediction model has the potential to facilitate early CVD detection, thereby enhancing preventive strategies and improving patient outcomes. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)