Search Results (2,148)

Search Parameters:
Keywords = imbalance learning

23 pages, 3226 KB  
Article
A Detection and Recognition Method for Interference Signals Based on Radio Frequency Fingerprint Characteristics
by Yang Guo and Yuan Gao
Electronics 2026, 15(7), 1393; https://doi.org/10.3390/electronics15071393 - 27 Mar 2026
Abstract
With the advancement of 5G and the Internet of Things (IoT), traditional upper-layer authentication mechanisms are vulnerable to attacks, while quantum computing threatens cryptographic security. Radio frequency fingerprint identification (RFFI) offers a physical-layer solution by exploiting inherent hardware imperfections. However, in complex electromagnetic environments, narrowband and especially agile interference (characterized by low power and narrow bandwidth) can severely distort fingerprint features, rendering conventional detection algorithms ineffective. To address this challenge, this paper proposes a novel interference detection framework tailored for Orthogonal Frequency Division Multiplexing (OFDM) systems. First, a signal transmission model incorporating non-ideal hardware characteristics (e.g., DC offset, I/Q imbalance) is established. Based on this model, we design an agile interference detection algorithm comprising two key components: (1) a time-series anomaly detection method that fuses multi-domain expert features (fractal, complexity, and high-order statistics) with machine learning, demonstrating superior performance over the traditional CME algorithm under narrowband interference, and (2) a progressive search segmental detection algorithm that, combined with reconstruction error features extracted by an autoencoder, effectively identifies low-power agile interference by appropriately trading-off computation time for detection sensitivity. Finally, an OFDM simulation platform is developed to validate the proposed methods. The results show that the segmental detection algorithm achieves reliable detection at a jammer-to-signal ratio (JSR) as low as −10 dB, significantly outperforming existing approaches and enhancing the robustness of RFFI in challenging interference environments. Full article
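The two quantities this abstract hinges on, the jammer-to-signal ratio in dB and thresholding of autoencoder reconstruction errors, can be sketched in a few lines. This is an illustrative reduction, not the authors' implementation: the JSR formula is standard, while the segment errors and threshold below are invented values.

```python
import math

def jsr_db(jammer_power, signal_power):
    """Jammer-to-signal ratio in decibels."""
    return 10.0 * math.log10(jammer_power / signal_power)

def flag_segments(recon_errors, threshold):
    """Indices of signal segments whose autoencoder reconstruction error
    exceeds a threshold; the segmentation and threshold are illustrative."""
    return [i for i, e in enumerate(recon_errors) if e > threshold]

# A jammer at one-tenth of the signal power sits at -10 dB JSR,
# the detection limit reported for the segmental algorithm.
print(jsr_db(0.1, 1.0))                               # ≈ -10.0
print(flag_segments([0.02, 0.03, 0.41, 0.02], 0.1))   # [2]
```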

17 pages, 3154 KB  
Article
Unveiling Key Biomarkers of Cardiovascular Risk in Psoriasis Through Explainable Artificial Intelligence
by Hasan Ucuzal and Mehmet Kıvrak
Biology 2026, 15(7), 532; https://doi.org/10.3390/biology15070532 - 26 Mar 2026
Abstract
Psoriasis patients face a significantly elevated risk of cardiovascular diseases (CVD), necessitating early and accurate risk prediction tools. This study developed and validated a machine learning model to predict CVD risk in psoriasis patients using clinical and biochemical data from 2685 individuals. After preprocessing and addressing class imbalance with SMOTE-NC, six machine learning models (Logistic Regression as baseline, XGBoost, LightGBM, CatBoost, GradientBoosting, AdaBoost) were evaluated using a completely leak-free nested cross-validation framework (outer k = 10, inner k = 3) with randomized hyperparameter search (n_iter = 50). Feature selection via the Boruta algorithm was performed separately within each training fold to prevent data leakage. The Boruta algorithm identified 21 key predictors, including age, systolic blood pressure (SBP), apolipoprotein B (apoB), fasting blood glucose (FBG), and complement C1q. CatBoost emerged as the top-performing model (OOF ROC-AUC = 0.908, 95% CI [0.892–0.924]; PR-AUC = 0.509, 95% CI [0.448–0.578]; F1 = 0.540; MCC = 0.498; Brier = 0.078), while the Logistic Regression baseline achieved ROC-AUC = 0.909 but was eliminated due to poor calibration (Brier = 0.114 > 0.10). All metrics were evaluated with 95% bootstrap confidence intervals (n = 1000 iterations). Explainable AI techniques (SHAP, LIME, Anchors) revealed that older age, elevated SBP, and metabolic dysregulation (e.g., high apoB, FBG) were the strongest CVD predictors. Local explanations were provided for five representative patients (high-risk, low-risk, and randomly selected), rather than a single instance, to better characterize model stability. Limitations include the single-center, retrospective design and lack of external validation. Future work should incorporate multi-ethnic cohorts and advanced biomarkers (e.g., genetic, imaging data) to enhance generalizability. 
This study demonstrates the potential of explainable AI to improve CVD risk stratification in psoriasis patients, offering a scalable tool for preventive cardiology. Full article
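The calibration criterion used above (a model rejected because its Brier score exceeds 0.10) is easy to state concretely. A minimal sketch with hypothetical toy labels and probabilities, not study data:

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and binary
    outcome; lower is better, and well-calibrated models score low."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

y = [0, 0, 1, 1]
# A confident, correct model scores far below the 0.10 screening cutoff;
# an uninformative 50/50 model scores 0.25.
print(brier_score(y, [0.1, 0.2, 0.8, 0.9]))   # ≈ 0.025
print(brier_score(y, [0.5, 0.5, 0.5, 0.5]))   # 0.25
```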

22 pages, 2649 KB  
Article
A Bayesian-Optimized XGBoost Approach for Money Laundering Risk Prediction in Financial Transactions
by Zihao Zuo, Yang Jiang, Rui Liang, Jiabin Xu, Hong Jiang, Shizhuo Zhang, Yunkai Chen and Yanhong Peng
Information 2026, 17(4), 324; https://doi.org/10.3390/info17040324 - 26 Mar 2026
Abstract
The rapid expansion of global commerce has escalated the complexity of money laundering schemes, making the detection of illicit transfers an urgent but highly challenging research problem. In operational anti-money laundering (AML) systems, the extreme rarity of illicit transactions often overwhelms compliance teams with false positives, leading to severe “alert fatigue.” To address this critical bottleneck, this paper introduces an enhanced, probability-driven risk-prioritization framework utilizing an XGBoost classifier integrated with Bayesian Optimization (BO-XGBoost). By optimizing directly for the Area Under the Precision–Recall Curve (PR-AUC), the model is specifically tailored to rank high-risk anomalies under severe class imbalance. We validate the proposed approach on a rigorously resampled transaction dataset simulating a realistic 5% laundering rate. The BO-XGBoost model demonstrates exceptional prioritization capability, achieving an ROC-AUC of 0.9686 and a PR-AUC of 0.7253. Most notably, it attains a near-perfect Precision@1%, meaning the top 1% of flagged transactions are 100% true illicit activities, entirely eliminating false positives at the highest priority tier. Comparative and SHAP-based interpretability analyses confirm that BO-XGBoost easily outperforms sequence-heavy deep learning baselines. Crucially, it matches computationally expensive stacking ensembles in peak predictive precision while significantly surpassing them in operational efficiency, indicating its immense promise for resource-optimized, real-world compliance screening. Full article
(This article belongs to the Special Issue Information Management and Decision-Making)
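The Precision@1% metric highlighted in this abstract is simply the precision among the top-ranked slice of alerts. A minimal sketch with hypothetical scores (200 toy transactions in which the model ranks the two truly illicit cases highest, so Precision@1% is 1.0, mirroring the reported result):

```python
def precision_at_frac(scores, labels, frac=0.01):
    """Precision among the top `frac` of cases ranked by predicted risk,
    i.e. the share of the highest-priority alerts that are truly illicit."""
    k = max(1, int(len(scores) * frac))
    ranked = sorted(zip(scores, labels), reverse=True)
    return sum(label for _, label in ranked[:k]) / k

scores = [i / 200 for i in range(200)]
labels = [0] * 198 + [1, 1]   # the two highest-scored cases are illicit
print(precision_at_frac(scores, labels))   # 1.0
```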

33 pages, 15024 KB  
Article
HFA-Net: Explainable Multi-Scale Deep Learning Framework for Illumination-Invariant Plant Disease Diagnosis in Precision Agriculture
by Muhammad Hassaan Ashraf, Farhana Jabeen, Muhammad Waqar and Ajung Kim
Sensors 2026, 26(7), 2067; https://doi.org/10.3390/s26072067 - 26 Mar 2026
Abstract
Robust plant disease detection in real-world agricultural environments remains challenging due to dynamic environmental conditions. Accurate and reliable disease identification is essential for precision agriculture and effective crop management. Although computer vision and Artificial Intelligence (AI) have shown promising results in controlled settings, their performance often drops under lesion scale variability, inter- and intra-class similarity among diseases, class imbalance, and illumination fluctuations. To overcome these challenges, we propose a Heterogeneous Feature Aggregation Network (HFA-Net) that brings together architectural improvements, illumination-aware preprocessing, and training-level enhancements into a single cohesive framework. To extract richer and more discriminative features from the early layers of the network, HFA-Net introduces a multi-scale, multi-level feature aggregation stem. The Reduction-Expansion (RE) mechanism helps preserve important lesion details while adapting to variations in scale. Considering real agricultural environments, an Illumination-Adaptive Contrast Enhancement (IACE) preprocessing pipeline is designed to address illumination variability in real agricultural environments. Experimental results show that HFA-Net achieves 96.03% accuracy under normal conditions and maintains strong performance under challenging lighting scenarios, achieving 92.95% and 93.07% accuracy in extremely dark and bright environments, respectively. Furthermore, quantitative explainability analysis using perturbation-based metrics demonstrates that the model’s predictions are not only accurate but also faithful to disease-relevant regions. Finally, Grad-CAM-based visual explanations confirm that the model’s predictions are driven by disease-specific regions, enhancing interpretability and practical reliability. Full article
(This article belongs to the Section Smart Agriculture)

33 pages, 783 KB  
Systematic Review
A Systematic Review of Deep Learning Approaches for Hepatopancreatic Tumor Segmentation
by Razeen Hussain, Muhammad Mohsin, Dadan Khan and Mohammad Zohaib
J. Imaging 2026, 12(4), 147; https://doi.org/10.3390/jimaging12040147 - 26 Mar 2026
Abstract
Deep learning has advanced rapidly in medical image segmentation, yet hepatopancreatic tumor delineation remains challenging due to low contrast, small lesion size, organ variability, and limited high-quality annotations. Existing reviews are outdated or overly broad, leaving recent architectural developments, training strategies, and dataset limitations insufficiently synthesized. To address this gap, we conducted a PRISMA 2020 systematic literature review of studies published between 2021 and 2026 on deep learning-based liver and pancreatic tumor segmentation. From 2307 records, 84 studies met inclusion criteria. U-Net variants continue to dominate, achieving strong liver segmentation but inconsistent tumor accuracy, while transformer-based and hybrid models improve global context modeling at higher computational cost. Attention mechanisms, boundary-refinement modules, and semi-supervised learning offer incremental gains, yet pancreatic tumor segmentation remains notably difficult. Persistent issues, including domain shift, class imbalance, and limited generalization across datasets, underscore the need for more robust architectures, standardized benchmarks, and clinically oriented evaluation. This review consolidates recent progress and highlights key challenges that must be addressed to advance reliable hepatopancreatic tumor segmentation. Full article
(This article belongs to the Section Medical Imaging)

25 pages, 3151 KB  
Article
FCR-TransUNet: A Novel Approach to Crop Classification in Remote Sensing Images Employing Attention and Feature Enhancement Techniques
by Yongqi Han, Xingtong Liu, Yun Zhang, Hongfu Ai, Chuan Qin and Xinle Zhang
Agriculture 2026, 16(7), 727; https://doi.org/10.3390/agriculture16070727 - 25 Mar 2026
Abstract
Accurate crop classification is critical for optimizing agricultural resource use and informing production decisions. Deep learning, with its robust feature extraction ability, has become a prevalent technique for remote sensing-based crop classification. However, agricultural landscape complexity poses three key challenges: background noise interference, class confusion from inter-crop spectral similarity, and blurred small-area crop boundaries due to class imbalance. This paper proposes FCR-TransUNet, a TransUNet-based enhanced model integrating three modules: a Feature Enhancement Module (FEM) for noise filtering, a Class-Attention (CA) module, […]. Experimental results on the Youyi Farm and barley datasets validate the superiority of the proposed model. On the Youyi Farm dataset, FCR-TransUNet achieves an MIoU of 92.2%, representing an improvement of 1.8% over SAM2-UNet and 2.9% over the baseline TransUNet. On the barley dataset, it yields an MIoU of 89.9%. Ablation studies further verify the effectiveness of each designed module. To comprehensively evaluate the classification performance of FCR-TransUNet across the full crop growth cycle, experiments were conducted using remote sensing images from May, July, and August, respectively. The results demonstrate that FCR-TransUNet exhibits strong stability and adaptability at different crop growth stages, providing a reliable solution for precision agriculture and intelligent agricultural production. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
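The MIoU figures quoted above are computed per class from a confusion matrix and then averaged. A minimal sketch with an illustrative two-class pixel-count matrix (not data from the paper):

```python
def mean_iou(conf):
    """Mean intersection-over-union from a confusion matrix
    (rows = ground-truth class, columns = predicted class)."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                       # missed pixels of class c
        fp = sum(conf[r][c] for r in range(n)) - tp  # pixels wrongly labelled c
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

# Toy matrix: class 0 IoU = 80/110, class 1 IoU = 90/120
print(mean_iou([[80, 20], [10, 90]]))
```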

40 pages, 9093 KB  
Article
Simulation-Guided Interpretable Fault Diagnosis of Hydraulic Directional Control Valves Under Limited Fault Data Conditions
by Yuxuan Xia, Aiping Xiao, Huafei Xiao, Xiangyi Zhao and Huijun Liu
Sensors 2026, 26(7), 2052; https://doi.org/10.3390/s26072052 - 25 Mar 2026
Abstract
Delayed switching faults in hydraulic directional control valves can significantly degrade system performance and reliability, yet their diagnosis remains challenging due to complex fault mechanisms, coupled sensor responses, and limited fault samples in industrial applications. While data-driven approaches, including deep learning-based methods, have shown promising performance in fault diagnosis, their practical deployment in industrial quality inspection and condition monitoring is often constrained by limited fault data availability and insufficient physical interpretability of the diagnostic results. In this study, an interpretable fault diagnosis framework for delayed switching faults in hydraulic directional control valves is proposed based on a simulation-guided feature construction method and multi-pressure signal analysis. Instead of using simulation to generate synthetic training data, a physical simulation model is employed to analyze fault mechanisms and to guide the design of valve-level diagnostic features derived from inter-sensor pressure differences. These features are further evaluated using several classical machine learning classifiers, including RF, SVM, KNN, and LR, under conditions of limited fault samples. Experimental results demonstrate that the proposed method effectively captures the structural imbalance caused by internal valve faults and achieves high diagnostic accuracy and robustness compared with conventional single-sensor approaches and purely data-driven black-box models. The proposed framework provides a practical and physically interpretable solution for hydraulic valve fault diagnosis under small-sample conditions and offers potential value for industrial quality inspection and maintenance applications. Full article
(This article belongs to the Section Physical Sensors)

20 pages, 1442 KB  
Article
FedTheftDetect: Optimizing Anomaly Detection in Smart Grid Metering Systems Using Federated Learning
by Samar M. Nour, Ahmed Rady, Mohammed S. Hussien, Sameh A. Salem and Samar A. Said
Computers 2026, 15(4), 202; https://doi.org/10.3390/computers15040202 - 25 Mar 2026
Abstract
The detection of anomalous energy consumption patterns in smart grid metering systems remains a critical issue, owing to data imbalance, privacy constraints, and the dynamic nature of consumption patterns. To address these concerns, we present a privacy-preserving and scalable anomaly detection framework named FedTheftDetect. The proposed framework integrates deep learning algorithms into a federated learning (FL) architecture, incorporating advanced ensemble classifiers to detect behavioral anomalies in daily consumption patterns. A real-world smart meter dataset with significant class imbalance is used to assess the proposed framework. The dataset underwent extensive preprocessing to identify consumption-related behavioral anomalies. Experimental results demonstrate that the proposed framework outperforms competitive centralized and distributed models, achieving significant improvements in Accuracy, Precision, Recall, and F1-score, all close to 0.95, indicating strong predictive capability and reliability. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
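The federated learning backbone this abstract describes typically aggregates client models without sharing raw meter data. A minimal FedAvg-style sketch under stated assumptions: parameters are flat float vectors and aggregation is sample-size-weighted averaging, a standard FL baseline rather than FedTheftDetect's exact scheme.

```python
def fed_avg(client_params, client_sizes):
    """Sample-size-weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    return [sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
            for i in range(len(client_params[0]))]

# Two hypothetical clients: the client holding 3x more meters pulls the
# global model toward its local parameters.
print(fed_avg([[1.0, 0.0], [3.0, 2.0]], [100, 300]))   # [2.5, 1.5]
```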

14 pages, 1297 KB  
Article
Deep Learning-Based Classification of Zirconia and Metal-Supported Porcelain Fixed Restorations on Panoramic Radiographs
by Zeynep Başağaoğlu Demirekin, Turgay Aydoğan and Yunus Cetin
Diagnostics 2026, 16(7), 972; https://doi.org/10.3390/diagnostics16070972 - 25 Mar 2026
Abstract
Background/Objectives: This study aimed to automatically classify Zirconia-based fixed restorations and porcelain-fused-to-metal (PFM) restorations on panoramic radiographs using an artificial intelligence-based model. Unlike previous studies that mainly focused on classifying types of restorations (e.g., crowns, fillings, implants), this research concentrated on material-based differentiation, aiming to provide a more specific contribution to clinical decision support systems. Method: Panoramic radiographs obtained from the archive of Süleyman Demirel University Faculty of Dentistry were included in this study. Radiographs with poor image quality or insufficient visibility of the restoration area were excluded. A total of 593 cropped region-of-interest (ROI) images, labeled by expert prosthodontists using ImageJ software (version 1.54r; National Institutes of Health, Bethesda, MD, USA), were included in the analysis. In order to reduce class imbalance, data augmentation was applied only for images in the Zirconia-based fixed restorations class. By using various image processing techniques such as rotation, reflection and brightness change, the number of samples in the zirconia-based restorations class was increased and thus a balanced dataset was obtained with a close number of samples for both classes. For model training, the pre-trained VGG16 architecture was used with a transfer learning method, and the final layers were retrained and fine-tuned. The model was configured specifically for binary classification. The entire dataset was randomly split into 70% training, 20% validation, and 10% testing. Model performance was evaluated using accuracy, F1-score, sensitivity, and specificity. Results: The model correctly classified 90 out of 94 images in the test dataset, achieving an overall accuracy rate of 96%. For both classes, the precision, recall, and F1-score values were measured in the range of 95% to 96%. 
Additionally, the Area Under the Curve (AUC) of the ROC curve was calculated as 0.994, and the Average Precision (AP) score was determined to be 0.995. According to the confusion matrix results, only 4 images were misclassified, consisting of 2 false positives and 2 false negatives. Conclusions: The deep learning model demonstrated high accuracy in differentiating zirconia and metal-supported porcelain restorations on panoramic radiographs, suggesting that material-based AI classification may support clinical decision-making in restorative dentistry. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
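The headline numbers above follow directly from the reported confusion counts (94 test images, 2 false positives, 2 false negatives). A minimal sketch; the even 45/45 split of the 90 correct images between true positives and true negatives is an assumption for illustration, not stated in the abstract.

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and precision from confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    return acc, sens, spec, prec

# Assumed split of the reported counts: 45 TP, 45 TN, 2 FP, 2 FN
acc, sens, spec, prec = binary_metrics(tp=45, tn=45, fp=2, fn=2)
print(round(acc, 2))   # 90/94 ≈ 0.96, the reported accuracy
```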

28 pages, 1320 KB  
Article
WCGAN-GA-RF: Healthcare Fraud Detection via Generative Adversarial Networks and Evolutionary Feature Selection
by Junze Cai, Shuhui Wu, Yawen Zhang, Jiale Shao and Yuanhong Tao
Information 2026, 17(4), 315; https://doi.org/10.3390/info17040315 - 24 Mar 2026
Abstract
Healthcare fraud poses significant risks to insurance systems, undermining both financial sustainability and equitable access to care. Accurate detection of fraudulent claims is therefore critical to ensuring the integrity of healthcare insurance operations. However, the increasing sophistication of fraud techniques and limited data availability have undermined the performance of traditional detection approaches. To address these challenges, this paper proposes WCGAN-GA-RF, an integrated fraud detection framework that synergistically combines Wasserstein Conditional Generative Adversarial Network with gradient penalty (WCGAN-GP) for synthetic data generation, genetic algorithm-based feature selection (GA-RF) for dimensionality reduction, and Random Forest (RF) for classification. The proposed framework was empirically validated on a real-world dataset of 16,000 healthcare insurance claims from a Chinese healthcare technology firm, characterized by a 16:1 class imbalance ratio (5.9% fraudulent samples) and 118 original features. Using a stratified 80/20 train–test split with results averaged over five independent runs, the WCGAN-GA-RF framework achieved a precision of 96.47±0.5%, a recall of 97.05±0.4%, and an F1-score of 96.26±0.4%. Notably, the GA-RF component achieved a 65% feature reduction (from 80 to 28 features) while maintaining competitive detection accuracy. Comparative experiments demonstrate that the proposed approach outperforms conventional oversampling methods, including Random Oversampling (ROS), Synthetic Minority Oversampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN), particularly in handling high-dimensional, severely imbalanced healthcare fraud data. Full article
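The GA-based feature selection step can be sketched generically: evolve binary feature masks, score each mask, and keep the fittest. This is a toy illustration of the idea only; the paper's GA-RF scores masks with cross-validated Random Forest accuracy, which is replaced here by an invented fitness function, and all operators and parameters below are assumptions.

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, gens=30, p_mut=0.05, seed=0):
    """Minimal genetic algorithm over binary feature masks: elitism,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]                          # crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: reward selecting the first 3 of 10 features, penalise mask size,
# standing in for classifier accuracy minus a dimensionality cost.
toy_fitness = lambda m: sum(m[:3]) - 0.1 * sum(m)
best = ga_feature_select(10, toy_fitness)
print(best)
```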

20 pages, 2636 KB  
Article
Inferring Wildfire Ignition Causes in Spain Using Machine Learning and Explainable AI
by Clara Ochoa, Magí Franquesa, Marcos Rodrigues and Emilio Chuvieco
Fire 2026, 9(4), 138; https://doi.org/10.3390/fire9040138 - 24 Mar 2026
Abstract
A substantial proportion of wildfires in Mediterranean regions continue to be recorded without information about the cause or source of ignition, limiting our ability to understand ignition drivers and design effective prevention strategies. In this study, we develop a spatially harmonised wildfire database for mainland Spain by integrating ignition records from the Spanish General Fire Statistics (EGIF) with fire perimeters generated from satellite images. We then apply a Random Forest classifier to infer ignition causes for events lacking cause attribution. To interpret model behaviour, we use Shapley Additive Explanation (SHAP) values at both global and local scales. Results indicate that human-caused ignitions are dominant, with intentional and negligence-related fires accounting for 52.13% of all known events, although they are associated with contrasting climatic and land-use settings. Negligence-related fires tend to occur under hot, dry and windy conditions, often in agricultural interfaces, whereas intentional fires are more frequent under cooler and wetter conditions and in areas with higher population density and land-use change. Lightning-caused fires represent a small fraction of total ignitions (3%) but exhibit a distinct climatic signature, occurring primarily in sparsely populated areas, under intermediate moisture conditions, and often leading to larger burned areas. Despite strong overall model performance (F1-score = 0.82), minority classes (e.g., lightning and fire rekindling, 0.17%) remain challenging to classify, reflecting both data imbalance and uncertainty in causal attribution. Overall, the combined use of machine learning and explainable AI provides a coherent spatial characterisation of wildfire ignition drivers across mainland Spain, highlights systematic differences among ignition causes, and identifies key limitations in existing fire cause records. 
This framework represents a practical step towards improving fire cause information by integrating remote sensing products with field-based fire reports, thereby supporting more targeted and evidence-based fire risk management. Full article

18 pages, 2903 KB  
Article
Infrasound Signal Classification Fusion Model Based on Double-Branch and Multi-Scale CNN and LSTM
by Hao Yin, Yu Lu, Yunhui Wu, Wei Cheng, Xinliang Pang and Peng Li
Acoustics 2026, 8(2), 21; https://doi.org/10.3390/acoustics8020021 - 24 Mar 2026
Abstract
The accurate classification of infrasound events is significant in natural disaster warning, verification of nuclear test bans and geophysical research. Current deep learning-based classification methods mostly focus on denoised and filtered signals. To simplify the process, avoid information loss, and address the issues of incomplete feature extraction by single-scale convolution kernels and the potential loss of physical information by single models, this paper directly utilizes raw infrasound signals and proposes two fusion classification models based on multi-scale Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Experiments were conducted on a typical infrasound signal dataset (comprising four signal types: mountain-associated waves, auroral infrasound waves, volcanic eruptions, and microbaroms). The performances of the two models were compared in terms of accuracy, convergence speed, and stability. The results indicate that both models achieve classification accuracies exceeding 99% with optimal parameter combinations. The dual-branch multi-scale CNN-LSTM model generally outperforms the multi-scale CNN-LSTM model in classification accuracy, while also demonstrating faster convergence speed and better stability. Addressing the class imbalance in the dataset, evaluations using precision, recall, and F1-score further validated the effectiveness of the proposed models. This study demonstrates that the proposed methods can effectively achieve end-to-end classification of raw infrasound signals and are competitive with existing techniques. Full article

25 pages, 2397 KB  
Review
Review on Exploring Machine Learning Classifiers in the Diagnosis of Chronic Kidney Disease
by Sonam Bhandurge, Kuldeep Sambrekar, Rashmi Laxmikant Malghan and Karthik M C Rao
Sci 2026, 8(4), 68; https://doi.org/10.3390/sci8040068 - 24 Mar 2026
Abstract
Chronic kidney disease (CKD) is a global healthcare issue that highlights the need for early identification to improve patients' quality of life. This study evaluates various machine learning (ML) classifiers on datasets from UCI and self-collected sources in search of the best methods for CKD classification. This review examines commonly used ML models such as support vector machine, K-nearest neighbor, naïve Bayes, decision trees, random forest, logistic regression, and boosting-based ensemble methods. The results demonstrated that ensemble methods achieved the highest performance. Despite these promising results, challenges related to model integration and interpretability remain. Transparent models that are reliable and efficient are best suited for clinical application. By overcoming these challenges, this work highlights the importance of ML for CKD detection and treatment, paving the way for artificial intelligence (AI)-driven healthcare solutions that are both effective and trustworthy. Full article

32 pages, 3144 KB  
Article
First-Trimester Gestational Diabetes Mellitus Risk Prediction with Machine Learning Techniques: Results from the BORN2020 Cohort Study
by Nikolaos Pazaras, Antonios Siargkas, Antigoni Tranidou, Aikaterini Apostolopoulou, Ioannis Tsakiridis, Panagiotis D. Bamidis, Sofoklis Stavros, Anastasios Potiris, Michail Chourdakis and Themistoklis Dagklis
J. Clin. Med. 2026, 15(6), 2461; https://doi.org/10.3390/jcm15062461 - 23 Mar 2026
Abstract
Background: Gestational diabetes mellitus (GDM) affects many pregnancies worldwide and is associated with adverse maternal and fetal outcomes. Current screening at 24–28 weeks limits opportunities for early intervention. We evaluated whether machine learning (ML) models using first-trimester clinical and dietary data can predict GDM risk before the standard oral glucose tolerance test. Methods: We analyzed data from 797 pregnant women enrolled in the BORN2020 prospective cohort study (Thessaloniki, Greece). Ten ML algorithms were evaluated across five class-imbalance handling strategies using stratified 5-fold cross-validation, with final evaluation on an independent 20% held-out test set. Features included maternal demographics, obstetric history, lifestyle factors, and 22 dietary micronutrient intakes from the pre-pregnancy period assessed by Food Frequency Questionnaire. Results: The best-performing model (Logistic Regression without resampling) achieved an AUC-ROC of 0.664 (95% CI: 0.542–0.777), with sensitivity of 0.783 and NPV of 0.932 at the pre-specified threshold. The high NPV should be interpreted in the context of the low GDM prevalence (14.7%), as NPV is mathematically dependent on disease prevalence. A reduced nine-feature model using only routine clinical and demographic variables achieved a numerically higher AUC of 0.712 (95% CI: 0.589–0.825), with overlapping confidence intervals, indicating that detailed FFQ-derived micronutrient data did not improve prediction. Maternal age and pre-pregnancy BMI were the strongest individual predictors by SHAP analysis. No model reached the AUC >0.80 threshold for good discrimination. Substantial miscalibration was observed (slope: 0.56; intercept: −1.83), limiting use for absolute risk estimation. 
Conclusions: This exploratory study demonstrates that first-trimester ML models achieve modest discriminative ability for early GDM prediction, with routine clinical variables performing comparably to models incorporating detailed dietary assessment. These findings should be interpreted with caution, as no external validation cohort was available and the low events-per-variable ratio (~3.8) constrains the reliability of individual model estimates. Substantial miscalibration further limits use for absolute risk estimation. Accordingly, these models should be regarded as exploratory risk-ranking tools only and require external validation and recalibration before any clinical implementation. Full article
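The evaluation protocol described — stratified 5-fold cross-validation across class-imbalance handling strategies, with final assessment on a stratified 20% hold-out — can be sketched as follows. The synthetic data, the two strategies shown, and the logistic-regression configurations are illustrative assumptions, not the BORN2020 pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split

# Synthetic stand-in: 800 samples, 9 features, ~15% positives (GDM-like prevalence).
X, y = make_classification(n_samples=800, n_features=9, weights=[0.85],
                           random_state=42)

# Hold out 20% for final evaluation, stratified to preserve prevalence.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

# Two imbalance-handling strategies compared by stratified 5-fold CV AUC.
strategies = {
    "no adjustment": LogisticRegression(max_iter=1000),
    "class weighting": LogisticRegression(max_iter=1000, class_weight="balanced"),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in strategies.items():
    aucs = []
    for tr_idx, va_idx in cv.split(X_tr, y_tr):
        model.fit(X_tr[tr_idx], y_tr[tr_idx])
        p = model.predict_proba(X_tr[va_idx])[:, 1]
        aucs.append(roc_auc_score(y_tr[va_idx], p))
    print(f"{name}: CV AUC = {np.mean(aucs):.3f}")

# Final check on the untouched held-out test set.
best = strategies["class weighting"].fit(X_tr, y_tr)
test_auc = roc_auc_score(y_te, best.predict_proba(X_te)[:, 1])
print(f"held-out AUC = {test_auc:.3f}")
```

Stratifying both the hold-out split and every CV fold keeps the minority-class fraction stable across partitions, which is what makes fold-to-fold AUC estimates comparable at low prevalence.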

23 pages, 1109 KB  
Review
Strategies for Class-Imbalanced Learning in Multi-Sensor Medical Imaging
by Da Zhou, Song Gao and Xinrui Huang
Sensors 2026, 26(6), 1998; https://doi.org/10.3390/s26061998 - 23 Mar 2026
Abstract
This narrative critical review addresses class imbalance in medical imaging, which, particularly in multi-sensor and multi-modal environments, poses a critical challenge to developing reliable AI diagnostic systems. The integration of heterogeneous data from sources like CT, MRI, and PET presents a unique opportunity to address data scarcity for rare conditions through fusion techniques. This review provides a structured analysis of strategies to tackle class imbalance, categorizing them into data-centric (e.g., advanced resampling like SMOTE-ENC for mixed data types, GAN-based synthesis) and model-centric (e.g., loss function engineering, transfer learning, and ensemble methods) approaches. Crucially, we highlight how multi-sensor feature fusion and decision-level fusion paradigms can inherently enrich representations for minority classes, offering a powerful frontier beyond single-modality learning. We evaluate each method’s merits, clinical viability, and compliance considerations (e.g., FDA). Finally, we identify emerging trends where imbalance-aware learning synergizes with multi-sensor fusion frameworks, federated learning, and explainable AI, charting a roadmap toward robust, equitable, and clinically deployable diagnostic tools. Our quantitative synthesis shows that data-centric strategies can improve minority class recall by 12–35% in datasets with imbalance ratios (majority:minority) ≥10:1, while model-centric strategies achieve an average AUC improvement of 0.08–0.21 in multi-sensor medical imaging tasks with sample sizes ranging from 50 to 50,000. Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
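As an example of the data-centric resampling family such reviews cover, a SMOTE-style oversampler can be sketched from scratch in NumPy: new minority samples are synthesized by interpolating each drawn sample toward one of its k nearest minority-class neighbors. This is a minimal illustration of the idea; real pipelines would typically use a library implementation (e.g., from imbalanced-learn).

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: synthesize n_new minority samples by interpolating
    each drawn sample with one of its k nearest minority neighbors."""
    rng = np.random.default_rng(rng)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbor
    nn_idx = np.argsort(d, axis=1)[:, :k]       # k nearest neighbors per sample
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))            # random minority sample
        j = rng.choice(nn_idx[i])               # one of its k neighbors
        lam = rng.random()                      # interpolation factor in [0, 1)
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

# Imbalanced toy data: 100 majority vs 10 minority samples in 2-D.
rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, size=(100, 2))
X_min = rng.normal(3.0, 1.0, size=(10, 2))

X_syn = smote_sample(X_min, n_new=90, k=5, rng=1)
X_bal = np.vstack([X_maj, X_min, X_syn])
y_bal = np.array([0] * 100 + [1] * (10 + 90))
print(X_bal.shape, np.bincount(y_bal))          # (200, 2) [100 100]
```

Because synthetic points lie on segments between real minority samples, they stay inside the minority class's convex hull — the usual caveat being that this can also amplify label noise near class boundaries.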
