Search Results (2,023)

Search Parameters:
Keywords = class imbalance

29 pages, 870 KB  
Article
Electricity Theft Detection from Electricity and Gas Measurements Using Machine Learning
by Fayiz Alfaverh, Hock Gan, Volodymyr Miroshnyk, Zaid Bin Saeed, Ihor Blinov, Pavlo Shymaniuk, Pouya Tarassodi and Iosif Mporas
Energies 2026, 19(9), 2045; https://doi.org/10.3390/en19092045 - 23 Apr 2026
Abstract
Electricity theft is a critical source of non-technical losses in modern power systems, causing substantial financial and operational challenges for utilities. Traditional detection methods, such as manual inspections, are inadequate to detect advanced theft techniques, including meter tampering and cyberattacks on smart grids. This study introduces a machine learning-based framework for electricity theft detection using the TDD2022 dataset (derived from OEDI) and evaluates multiple algorithms—Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, Extra Trees, and Logistic Regression. To address class imbalance, SMOTE is applied, while feature selection leverages LASSO and ReliefF. Experiments compare electricity-only data with multi-utility inputs (electricity and gas) under balanced and imbalanced conditions. Results show that tree-based ensembles, particularly Extra Trees combined with SMOTE and ReliefF, achieve superior performance (accuracy >95%, AUC 0.99). Consumer-specific models outperform global models, with commercial classes yielding near-perfect detection, while residential profiles remain challenging. The findings highlight the importance of tailored modeling and feature selection for scalable, accurate theft detection in smart grid environments.
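The abstract above rebalances theft vs. normal classes with SMOTE before training. As a rough, illustrative sketch only (not the authors' implementation; a real pipeline would normally call `imblearn.over_sampling.SMOTE`), the core interpolation step of SMOTE can be written in a few lines of NumPy:

```python
import numpy as np

def smote(X_minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create synthetic minority samples by
    interpolating between a minority point and one of its k nearest
    minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.asarray(X_minority, dtype=float)
    # pairwise distances within the minority class only
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neigh = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours
    base = rng.integers(0, len(X), n_new)          # anchor sample indices
    nb = neigh[base, rng.integers(0, k, n_new)]    # one neighbour per anchor
    gap = rng.random((n_new, 1))                   # interpolation factor in [0, 1)
    return X[base] + gap * (X[nb] - X[base])
```

Each synthetic point is a convex combination of two real minority samples, so it stays inside the minority class's feature range.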
26 pages, 10442 KB  
Article
Resource-Adaptive Semantic Transmission and Client Scheduling for OFDM-Based V2X Communications
by Jiahao Liu, Yuanle Chen, Wei Wu and Feng Tian
Sensors 2026, 26(9), 2615; https://doi.org/10.3390/s26092615 - 23 Apr 2026
Abstract
Proportional-fair scheduling in the OFDM-based vehicle-to-everything (V2X) uplink causes the resource-block allocation of each vehicle to vary from slot to slot, yet conventional semantic encoders produce a fixed number of output tokens regardless of the instantaneous channel capacity. When the encoder output exceeds the slot budget, transmitted features are truncated and the resulting federated learning gradient is corrupted—a problem that affected 23% of training rounds for non-line-of-sight vehicles in our experiments. The difficulty is worsened by a spatial pattern common in urban deployments: vehicles at congested intersections suffer the poorest propagation conditions while carrying the training data most relevant to safety, and throughput-driven client selection excludes them in favor of vehicles with strong channels but uninformative scenes. We address both issues within a single framework for OFDM-based V2X federated learning. On the transmission side, a Sensing-Guided Adaptive Modulation (SGAM) module derives a per-slot token budget from the current resource-block allocation and selects tokens through differentiable Gumbel-TopK pruning with a hard capacity clip, so the transmitted token count stays within the slot budget. On the scheduling side, a Channel-Decoupled Federated Learning (CDFL) module partitions clients independently by channel quality and data complexity, selects diverse representatives per partition via facility location optimization, and corrects for partition-size imbalance through inverse propensity weighting during model aggregation. Experiments on NuScenes with 20 non-IID vehicular clients under realistic OFDM channel simulation demonstrate a Macro-F1 of 0.710 (+8.7 points over the Oort-adapted baseline), zero budget violations throughout training, and a 75% reduction in training variance; the worst-class F1 more than doubles relative to FedAvg.
(This article belongs to the Special Issue Challenges and Future Trends of UAV Communications)
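The Gumbel-TopK pruning with a hard capacity clip described in the abstract above can be illustrated in its hard (non-differentiable) forward form; the scoring function, the soft relaxation, and the exact per-slot budget computation in the paper are not reproduced here, so this is only a sketch of the selection mechanism:

```python
import numpy as np

def gumbel_topk_select(scores, budget, tau=1.0, rng=None):
    """Stochastic token selection under a per-slot budget: perturb
    importance scores with Gumbel(0, 1) noise, then keep exactly the
    top-`budget` tokens (the hard capacity clip)."""
    if rng is None:
        rng = np.random.default_rng(0)
    scores = np.asarray(scores, dtype=float)
    g = -np.log(-np.log(rng.random(len(scores))))    # Gumbel(0, 1) noise
    keep = np.argsort(-(scores / tau + g))[:budget]  # hard top-k clip
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

Because the clip is applied after noising, the number of transmitted tokens can never exceed the slot budget, which is the zero-budget-violation property the abstract reports.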
15 pages, 6831 KB  
Article
Multi-Class Arrhythmia Detection from PPG Signals Based on VGG-BiLSTM Hybrid Deep Learning Model
by Shiyong Li, Jiaying Mo, Jiating Pan, Zhengguang Zheng, Qunfeng Tang and Zhencheng Chen
Biosensors 2026, 16(5), 235; https://doi.org/10.3390/bios16050235 - 23 Apr 2026
Abstract
Arrhythmia is a common and potentially life-threatening cardiovascular condition. Photoplethysmography (PPG) has emerged as a noninvasive alternative to electrocardiography for cardiac rhythm monitoring, yet most PPG-based methods remain limited to binary classification. In this study, a new deep learning approach is suggested for categorizing six arrhythmia types from PPG data: sinus rhythm (SR), premature ventricular contraction (PVC), premature atrial contraction (PAC), ventricular tachycardia (VT), supraventricular tachycardia (SVT), and atrial fibrillation (AF). The raw PPG signal is enhanced by extracting its first and second derivatives to capture morphological features not readily apparent in the original signal. A hybrid architecture, VGG-BiLSTM, is utilized, merging VGG convolutional layers for spatial feature extraction with bidirectional long short-term memory layers for modeling temporal dependencies. A stratified data splitting strategy is further adopted to address class imbalance across arrhythmia types. A publicly available dataset containing 46,827 PPG segments from 91 individuals was employed to assess the effectiveness of the suggested technique. The method yielded an overall accuracy, sensitivity, specificity and F1 score of 88.7%, 78.5%, 97.6% and 80.5%, respectively.
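The stratified splitting strategy mentioned above keeps each arrhythmia class at (roughly) the same proportion in the train and test sets. A minimal stdlib sketch follows; in practice one would usually rely on `sklearn.model_selection.train_test_split(..., stratify=labels)`, and the function name here is illustrative only:

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=0):
    """Per-class index split: shuffle each class's indices separately and
    send the same fraction of every class to the test set, so rare
    arrhythmia types are represented on both sides."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train, test = [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        n_test = max(1, round(len(idx) * test_frac))
        test += idx[:n_test]
        train += idx[n_test:]
    return sorted(train), sorted(test)
```

With a plain random split, a class making up only a few percent of segments could land almost entirely in one partition; stratification removes that failure mode.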
25 pages, 3884 KB  
Article
Deep-Learning-Based 3D Dose Distribution Prediction for VMAT Lung Cancer Treatment Using an Enhanced UNet3D Architecture with Composite Loss Functions
by Philip Chung Yin Mak, Luoyi Kong and Lawrence Wing Chi Chan
Bioengineering 2026, 13(5), 490; https://doi.org/10.3390/bioengineering13050490 (registering DOI) - 23 Apr 2026
Abstract
The high complexity of radiation therapy for lung cancer necessitates effective planning of advanced treatments such as Volumetric Modulated Arc Therapy (VMAT) by radiation oncologists. The current VMAT treatment planning process typically involves extensive manual interaction and a time-consuming, trial-and-error, iterative approach that requires planners’ experience. This can lead to varying levels of plan quality. To improve the quality of radiotherapy treatment plans quickly and accurately, this research presents a new architecture, Enhanced UNet3D, to generate three-dimensional (3-D) dose distributions for lung cancer patients. Enhanced UNet3D utilises a symmetric encoder–decoder architecture with residual connections and a target region-attention module to achieve high accuracy in dose shaping within the PTV. A new composite objective function, Enhanced Combined Loss (ECLoss), that includes both SharpLoss, a structure-aware DVH-guided loss, and 3D gradient regularisation, has been developed to address voxel-level class imbalance and achieve realistic spatial dose falloff. This research utilised a retrospective dataset of 170 VMAT plans to train and validate the proposed model. On the test set (n = 14), the model demonstrated exceptional overall accuracy, with a Mean Absolute Error (MAE) of 0.238 ± 0.075 Gy and a structural similarity index measure (SSIM) of 0.970 ± 0.005. Moreover, the model can perform near-real-time inference at approximately 0.5 s per patient, representing a significant reduction in computational resources compared to other architectures. Therefore, these results demonstrate that the Enhanced UNet3D model with ECLoss is a clinically feasible tool for the rapid evaluation and quality assurance of radiotherapy treatment plans and may reduce the need for manual trial-and-error in VMAT workflows. Full article
18 pages, 1019 KB  
Article
Pose-Driven Cow Behavior Recognition in Complex Barn Environments: A Method Combining Knowledge Distillation and Deployment Optimization
by Jie Hu, Xuan Li, Ruyue Ren, Shujie Wang, Mingkai Yang, Jianing Zhao, Juan Liu and Fuzhong Li
Animals 2026, 16(9), 1301; https://doi.org/10.3390/ani16091301 - 23 Apr 2026
Abstract
Cattle behavior constitutes important phenotypic information reflecting animals’ health status, activity level, and welfare condition, and is therefore of considerable significance for automated monitoring and precision management in smart livestock farming. However, under complex barn conditions, cattle behavior recognition is easily affected by factors such as illumination variation, partial occlusion, background interference, and individual differences, thereby reducing recognition stability and generalization capability. To address these challenges, this study proposes a pose-driven method for cattle behavior recognition in complex barn environments. First, a 16-keypoint annotation scheme suitable for describing bovine posture, termed cow16, was constructed. Based on this scheme, OpenPose was employed to extract heatmaps (HMs) and part affinity fields (PAFs), which were then used to build an intermediate HM/PAF posture representation. Subsequently, this representation was taken as the input to a lightweight convolutional neural network for classifying three behavioral categories: stand, walk, and lying. On this basis, class-imbalance correction during training and a multi-random-seed logits ensemble strategy during inference were further introduced. In addition, knowledge distillation was adopted to transfer knowledge from a high-performance teacher model to a lightweight student model. Experimental results demonstrate that training-stage class-imbalance correction and inference-stage multi-random-seed logits ensembling exhibit strong complementarity; when combined, the AB configuration improves the test-set Macro-F1 by 3.83 percentage points. Moreover, the distilled student model still achieves competitive recognition performance while maintaining 1× inference cost, indicating a favorable trade-off between accuracy and efficiency. This study provides a useful reference for deployment-oriented cattle behavior recognition in smart farming scenarios and offers a lightweight technical basis for subsequent practical applications.
(This article belongs to the Section Cattle)
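The inference-stage multi-random-seed logits ensemble described in the abstract above reduces to a simple operation: run the same architecture trained under several seeds, average the raw logits, and take the argmax. The sketch below assumes the per-seed logits have already been computed; how the paper weights or calibrates seeds is not reproduced:

```python
import numpy as np

def ensemble_predict(logits_per_seed):
    """Average raw logits across models trained with different random
    seeds, then pick the argmax class per sample.

    logits_per_seed: array of shape (n_seeds, n_samples, n_classes)
    returns: predicted class index per sample, shape (n_samples,)
    """
    mean_logits = np.mean(logits_per_seed, axis=0)
    return mean_logits.argmax(axis=1)
```

Averaging in logit space (before softmax) lets confident seeds outvote uncertain ones, which is one common motivation for ensembling over seeds rather than majority-voting hard labels.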
41 pages, 2440 KB  
Article
Dismantling Binary Opposition in Fraud Detection: A Fuzzy Deep Learning Framework for Imbalanced Transaction Data
by Reham M. Essa, Yasser El-Kassrawy, Amer Alaya and Nevien El-Kassrawy
Risks 2026, 14(5), 98; https://doi.org/10.3390/risks14050098 - 23 Apr 2026
Abstract
In the context of behavioral finance, detecting credit card fraud remains a critical challenge, particularly when dealing with highly imbalanced datasets and ambiguous transaction patterns. This complexity highlights the limitations of traditional fraud detection models, which rely on a rigid binary distinction between “fraudulent” and “legitimate” transactions. Such a perspective restricts analysts’ ability to capture the nuanced and uncertain nature of fraudulent behavior, underscoring the need for a more flexible and practical approach. Accordingly, this study draws on Derrida’s deconstructive philosophy of binary oppositions to challenge the dominant dichotomy underlying conventional detection systems. This perspective provides a theoretical foundation for rethinking fraud detection by operationalizing deconstructive principles through the integration of fuzzy rules and machine learning architectures. The proposed approach is designed to address uncertainty, class imbalance, and semantic instability in financial transaction data. By combining fuzzy logic with deep learning, the framework deconstructs the rigid binary classification of transactions, enabling interpretation along a spectrum of legitimacy rather than as mutually exclusive categories. Deep learning techniques identify complex, nonlinear patterns that reveal overlaps between fraudulent and legitimate behaviors, while fuzzy membership functions model uncertainty and capture borderline cases that cannot be effectively handled by binary classification.
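The "spectrum of legitimacy" idea in the abstract above can be made concrete with a fuzzy membership function that maps a model's risk score to a degree of fraudulence rather than a hard label. The piecewise-linear shape and the thresholds below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def fraud_membership(score, low=0.2, high=0.8):
    """Illustrative fuzzy membership: degree 0 below `low`, degree 1
    above `high`, linear in between. Transactions in the middle band are
    'borderline' rather than forced into fraud/legitimate."""
    s = np.asarray(score, dtype=float)
    return np.clip((s - low) / (high - low), 0.0, 1.0)
```

A downstream rule system can then act on the membership degree (e.g., auto-approve near 0, flag for review in the middle band, block near 1), which is exactly the kind of graded decision a binary classifier cannot express.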
20 pages, 3665 KB  
Article
SDS-Former: A Transformer-Based Method for Semantic Segmentation of Arid Land Remote Sensing Imagery
by Yujie Du, Junfu Fan, Kuan Li and Yongrui Li
Algorithms 2026, 19(5), 325; https://doi.org/10.3390/a19050325 - 22 Apr 2026
Abstract
Semantic segmentation of land use and land cover (LULC) in arid regions remains challenging due to severe class imbalance, fragmented spatial distributions, and high spectral similarity among different land cover types. These characteristics often lead to an information bottleneck in deep segmentation networks and hinder the extraction of discriminative semantic representations. To address these issues, we propose SDS-Former, a lightweight semantic segmentation network specifically designed for remote sensing imagery in arid environments. SDS-Former incorporates an SSM-inspired Lightweight Semantic Enhancement (LSE) module to strengthen contextual modeling and alleviate the loss of discriminative information in deep features. To tackle scale variations, a Dynamic Selective Feature Fusion (DSFF) module is employed in the decoder to adaptively weight and fuse high-level semantics with low-level spatial details. Furthermore, a Feature Refinement Head (FRH) is introduced to enhance boundary localization and improve the recognition of small-scale and sparsely distributed land cover objects. Extensive ablation and comparative experiments demonstrate that SDS-Former consistently outperforms representative semantic segmentation methods across multiple evaluation metrics. On the Tarim Basin dataset, the proposed network achieves a mean Intersection over Union (mIoU) of 82.51% and an F1 score of 86.47%, indicating its superior effectiveness and robustness. Qualitative results further verify that SDS-Former exhibits clear advantages in distinguishing spectrally similar land cover types and preserving the spatial continuity of ground objects in complex arid-region scenes. Full article
16 pages, 2289 KB  
Proceeding Paper
An Efficient Hybrid Framework for Weld Defect Detection Using GAN, CNN and XGBoost
by Kalyanaraman Pattabiraman, Ashish Patil, Yash Gulavani, Ritik Malik and Atharva Gai
Eng. Proc. 2026, 130(1), 9; https://doi.org/10.3390/engproc2026130009 - 22 Apr 2026
Abstract
Automated detection of weld defects is essential for assuring structural integrity, but it faces serious challenges due to the microscopic characteristics of the discontinuities, low visual contrast, and the infrequent occurrence of defect samples. Conventional deep learning methods, while accurate, often lack interpretability and exhibit low recall for rare defects. This paper proposes a novel hybrid system combining a Generative Adversarial Network (GAN), a Convolutional Neural Network (CNN), and Extreme Gradient Boosting (XGBoost 2.0.0) to enhance weld defect classification performance and transparency. First, a Deep Convolutional GAN (DCGAN) generates synthetic images of the minority classes, mitigating the class imbalance problem. Then, a pretrained ResNet50V2 CNN extracts deep-layer features from both the original and the generated images. These features are fed into an XGBoost classifier, which uses tree-based learning to optimize classification results and make the process more understandable to the user. Interpretation is further facilitated by Grad-CAM rendering of the CNN regions of interest and SHAP analysis measuring each feature's contribution in XGBoost. Experiments on the available LoHi-WELD datasets show that overall accuracy is significantly improved, per-class recall on rare defects is enhanced, and robustness increases. The proposed hybrid method not only achieves better results but also generates visual, explainable output, which is valuable when the system is deployed in industrial welding inspection. This paper thus bridges the latest AI technology and the practical interpretability requirements of the mechanical and welding engineering fields.
(This article belongs to the Proceedings of The 19th Global Congress on Manufacturing and Management (GCMM 2025))
16 pages, 1285 KB  
Article
A SMOTE–ViT Framework for Advanced Soil Classification on a Self-Generated Geotechnical Image Database
by Atousa Zohouri Rad, Ahmet Topal, Burcu Tunga and Müge Balkaya
Appl. Sci. 2026, 16(9), 4063; https://doi.org/10.3390/app16094063 - 22 Apr 2026
Abstract
Accurate soil type classification is fundamental to geotechnical engineering, yet traditional laboratory methods are often time consuming and labor intensive. This study investigates the potential of a Transformer-based deep learning framework for the automated classification of complex soil compositions. An image database for geotechnical analysis is constructed using six distinct geotechnical samples comprising gravel, sand, silt, and clay systematically blended into 80 ternary mixtures. To address the inherent class imbalances in the multi-component dataset, the Synthetic Minority Oversampling Technique (SMOTE) is employed, ensuring robust representation across all categories. The proposed framework utilizes a Vision Transformer (ViT) architecture, leveraging its self-attention mechanism to capture both intricate textural patterns and long-range structural dependencies within the soil matrices. Experimental results demonstrate that the SMOTE–ViT pipeline achieved an overall accuracy of 95.83%, with high precision and recall across diverse ternary compositions. This interdisciplinary approach provides a scalable and high-precision alternative for soil characterization, offering significant potential for real-time decision-making in geotechnical investigation workflows. Full article
18 pages, 2863 KB  
Article
AI-Driven Durian Leaf Disease Classification Using Benchmark CNN Architectures for Precision Agriculture
by Rapeepat Klangbunrueang, Wirapong Chansanam, Natthakan Iam-On and Tossapon Boongoen
Appl. Sci. 2026, 16(9), 4062; https://doi.org/10.3390/app16094062 - 22 Apr 2026
Abstract
Durian (Durio zibethinus Murray) is Thailand’s most economically significant fruit export, yet foliar diseases pose a major threat to productivity and crop quality. Early-stage symptoms of several durian leaf diseases are visually similar, making reliable diagnosis difficult for farmers and even trained agronomists. This study aims to develop and evaluate an automated deep learning-based system for durian leaf disease classification under realistic field conditions. A dataset of 6119 leaf images representing six classes—Leaf_Healthy, Leaf_Colletotrichum, Leaf_Algal, Leaf_Phomopsis, Leaf_Blight, and Leaf_Rhizoctonia—was compiled from public datasets and field-collected samples. Six convolutional neural network (CNN) architectures—ConvNeXt, ResNet, DenseNet201, InceptionV3, EfficientNet-B3, and MobileNetV3—were benchmarked using a unified transfer-learning training protocol. Class imbalance was addressed using weighted cross-entropy loss, and performance was evaluated on a stratified held-out test set using accuracy, precision, recall, and F1-score metrics. The results show that ConvNeXt achieved the highest performance with 98.00% accuracy and a weighted F1-score of 0.98, followed by ResNet (96.82%) and DenseNet201 (96.09%), while efficiency-oriented models plateaued near 91%. Confusion matrix analysis revealed consistent misclassification among visually similar disease categories—Leaf_Algal, Leaf_Blight, and Leaf_Phomopsis—indicating biological similarity in lesion appearance rather than model limitations. The best-performing model was deployed as a publicly accessible web application using Gradio, enabling real-time disease diagnosis with an average inference time of approximately 0.54 s per image. Unlike prior studies, this work combines large-scale architecture benchmarking, class imbalance mitigation, and real-world deployment within a single unified framework. These findings demonstrate that modern CNN architectures can provide highly accurate and scalable disease detection tools, supporting precision agriculture by enabling early diagnosis, reducing inappropriate pesticide use, and improving decision-making for durian farmers.
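The weighted cross-entropy used above scales each sample's log-loss by a per-class weight, typically the inverse of the class frequency, so rare disease classes contribute more to the gradient. A NumPy sketch follows (in a PyTorch pipeline this is just `nn.CrossEntropyLoss(weight=...)`; the normalization-to-mean-1 choice below is one common convention, not necessarily the paper's):

```python
import numpy as np

def class_weights(counts):
    """Inverse-frequency class weights, normalised so their
    frequency-weighted mean is 1."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_ce(probs, targets, weights):
    """Weighted cross-entropy over predicted class probabilities:
    each sample's -log p(true class) is scaled by its class weight."""
    p = np.clip(probs[np.arange(len(targets)), targets], 1e-12, 1.0)
    return float(np.mean(weights[targets] * -np.log(p)))
```

With six classes where one disease is ten times rarer than the healthy class, its weight is roughly ten times larger, offsetting its scarcity in each batch.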
30 pages, 2584 KB  
Article
A Context-Adaptive Gated Embedding Framework for Advanced Clinical Decision-Making
by Donghyeon Kim, Daeho Kim and Okran Jeong
Mathematics 2026, 14(8), 1397; https://doi.org/10.3390/math14081397 - 21 Apr 2026
Abstract
In intensive care units, large-scale clinical time-series data are continuously accumulated through electronic medical records and bedside monitoring systems. However, direct utilization of such data for clinical decision-making remains challenging due to irregular sampling, pervasive missingness, unstructured diagnostic information, and incomplete ICD labeling. Automated ICD coding constitutes an extreme multi-class classification problem with thousands of long-tailed categories, while intervention prediction tasks, such as mechanical ventilation management, involve rare transition events and severe class imbalance. To address these challenges, we propose CAGE, a hierarchical Clinical Decision Support System framework that integrates diagnosis, time-series signals, and intervention prediction. The framework first infers admission-level diagnostic context using a partial-label Automated ICD Coding module that combines DCNv2 with an Adaptive CLPL loss, producing probability-weighted diagnostic embeddings. These embeddings are subsequently fused with ICU time-series tensors and processed by a multi-branch Temporal Convolutional Network equipped with an ICD-conditioned gating mechanism to predict future ventilation state transitions. The experimental results demonstrate that DCNv2 achieves consistent superiority across all hit@k and probability concentration metrics for ICD coding. For intervention prediction, the proposed method substantially outperforms existing baselines, achieving a Macro-AUC of 98.2, Macro-AUPRC of 77.4, and F1-score of 79.4. These findings indicate that reinjecting diagnostic context as a conditioning variable, together with imbalance-aware loss design, effectively enhances rare-event detection and improves the practical applicability of clinical decision support systems. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
47 pages, 7226 KB  
Article
Temporal and Behaviour-Aware Multimodal Modelling for Hour-Ahead Hypoglycaemia Prediction During Ramadan Fasting in Type 1 Diabetes
by Mais Alkhateeb, Rawan AlSaad, Samir Brahim Belhaouari, Sarah Aziz, Arfan Ahmed, Hamda Ali, Dabia Al-Mohanadi, Kawsar Mohamud, Najla Al-Naimi, Arwa Alsaud, Hamad Al-Sharshani, Javaid I. Sheikh, Khaled Baagar and Alaa Abd-Alrazaq
Sensors 2026, 26(8), 2552; https://doi.org/10.3390/s26082552 - 21 Apr 2026
Abstract
Ramadan fasting substantially alters meal timing, sleep patterns, and daily activity, thereby increasing the risk of hypoglycaemia in adults with type 1 diabetes (T1D). Although continuous glucose monitoring (CGM) systems provide real-time alerts, these are largely reactive or limited to short prediction horizons, offering insufficient warning under fasting-related behavioural and circadian disruption. This study aims to evaluate whether behaviour-aware, temporally enriched recurrent deep learning models, leveraging multimodal CGM and wearable-derived signals, can forecast hypoglycaemia one hour ahead during Ramadan and the post-fasting period. In an observational, free-living cohort study conducted in Qatar, 33 adults with T1D were monitored using CGM and a wrist-worn wearable during Ramadan 2023 and the subsequent month. Multimodal data were aggregated into hourly features and organised into rolling 36 h sequences. In addition to physiological signals, explicit temporal and circadian proxy features were engineered, including cyclic time encodings, day–night indicators, and Ramadan-specific behavioural windows (e.g., pre-iftar, iftar, post-iftar, and fasting phases). Recurrent models, including LSTM and BiLSTM architectures, were trained using patient-wise, leak-free splits, with focal loss applied to address class imbalance. Model performance was evaluated on a held-out, naturally imbalanced test set using ROC AUC, precision–recall AUC, recall, and probability calibration, alongside cross-phase evaluation between Ramadan and post-fasting periods. Following quality control, 1164 participant-days were retained, with hypoglycaemia accounting for approximately 4% of hourly observations. Temporal feature enrichment and the use of a 36 h lookback window improved both discrimination and calibration, with performance stabilizing beyond this horizon. On the imbalanced test set, the best-performing multimodal model achieved an ROC AUC of 0.867 and a precision–recall AUC of 0.341, identifying 77% of next-hour hypoglycaemic events at a sensitivity-focused operating point (precision = 0.14). The selected BiLSTM model demonstrated good probability calibration (Brier score ≈ 0.03). Models trained using wearable-derived inputs alone achieved comparable discrimination and, in some configurations, higher precision–recall AUC than CGM-only baselines. Notably, models trained on the original imbalanced data outperformed resampled variants, suggesting that temporal and behavioural features provided sufficient discriminatory signal without requiring aggressive class balancing. Cross-phase evaluation indicated robust generalisation, particularly for the BiLSTM model. Overall, behaviour-aware, temporally enriched multimodal models can provide calibrated, hour-ahead hypoglycaemia risk estimates during Ramadan fasting in adults with T1D, enabling proactive intervention beyond reactive CGM alerts. Explicit modelling of circadian and behavioural dynamics enhances predictive performance under real-world class imbalance. Furthermore, integrating wearable-derived behavioural and physiological signals adds predictive value beyond CGM alone, supporting robustness across varying levels of contextual data availability. External validation and prospective clinical evaluation are required prior to deployment.
(This article belongs to the Special Issue AI and Big Data Analytics for Medical E-Diagnosis)
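The focal loss applied above to the roughly 4%-positive hypoglycaemia labels down-weights easy, well-classified majority samples via a (1 - p_t)^gamma factor. A binary NumPy sketch is shown below; the gamma and alpha defaults follow the common convention from the original focal loss formulation and are not necessarily the values this paper used:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss sketch.

    p: predicted probability of the positive (hypoglycaemia) class
    y: binary labels (1 = event, 0 = no event)
    The (1 - p_t)**gamma factor shrinks the loss of confident, correct
    predictions, so rare hard events dominate the gradient.
    """
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    y = np.asarray(y)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balancing factor
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))
```

A missed event (p = 0.1, y = 1) is penalised far more heavily than an easy correct one (p = 0.9, y = 1), which is the behaviour that makes focal loss attractive at this level of imbalance.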
24 pages, 34048 KB  
Article
Unsupervised Hyperspectral Unmixing Based on Multi-Faceted Graph Representation and Curriculum Learning
by Ran Liu, Junfeng Pu, Yanru Chen, Yanling Miao, Dawei Liu and Qi Wang
Remote Sens. 2026, 18(8), 1250; https://doi.org/10.3390/rs18081250 - 21 Apr 2026
Abstract
Hyperspectral unmixing aims to estimate endmember spectra and their corresponding abundance fractions at the subpixel scale, which is a critical preprocessing step for quantitative analysis of hyperspectral remote sensing imagery. While deep learning-based methods have achieved remarkable progress, three fundamental challenges remain: (i) reliance on a single shared spatial prior that cannot decouple the heterogeneous spatial patterns of different land covers; (ii) the lack of synergy in jointly optimizing endmember extraction and abundance estimation; (iii) the poor robustness of unsupervised training to complex mixtures, noise, and class imbalance. To address these issues, we propose a novel unsupervised unmixing framework that integrates adaptive orthogonal multi-faceted graph representation with curriculum learning. Specifically, we design an Adaptive Orthogonal Multi-Faceted Graph Generator (AOMFG) to learn a set of independent orthogonal graph structures, achieving spatially informed decoupling of land cover patterns. Then, a dual-branch collaborative optimization network is constructed: a Graph Convolutional Network (GCN) branch that incorporates the learned spatial topological priors for abundance estimation, and a 1D Convolutional Neural Network (1DCNN) branch that employs a query-attention mechanism to adaptively aggregate pure spectral features for endmember extraction. Finally, we introduce a three-stage curriculum learning strategy that progressively fine-tunes the model, which significantly enhances its performance. Extensive experiments on three widely used real-world benchmark datasets demonstrate that our proposed framework consistently outperforms state-of-the-art methods in both endmember extraction and abundance estimation accuracy. Comprehensive ablation studies, parameter sensitivity analysis, and noise robustness tests further validate the effectiveness of each core component. Full article
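The proposed architecture is not reproduced here, but the constraints any unmixer must satisfy can be shown compactly: under the linear mixing model each pixel is a convex combination of endmember spectra, with abundances that are nonnegative and sum to one per pixel (the ANC and ASC constraints). A NumPy sketch under assumed, illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: B spectral bands, P endmembers, N pixels (all illustrative).
B, P, N = 50, 3, 100
endmembers = rng.random((B, P))  # one spectral signature per column

# Abundances must be nonnegative and sum to one per pixel (ANC + ASC);
# a softmax over unconstrained logits enforces both at once, as many
# autoencoder-style unmixing networks do.
logits = rng.standard_normal((P, N))
abundances = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Linear mixing model: every pixel is a convex combination of the endmembers.
pixels = endmembers @ abundances

assert (abundances >= 0).all() and np.allclose(abundances.sum(axis=0), 1.0)
```

Parameterizing abundances through a softmax is one common way to bake the constraints into the network output rather than enforcing them with penalty terms.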
(This article belongs to the Section Remote Sensing Image Processing)

20 pages, 977 KB  
Article
An Enhanced Multi-Task Deep Learning Framework for Joint Prediction of Customer Churn and Downsell
by Qiang Zhang, Lihong Zhang and Yanfeng Chai
Appl. Sci. 2026, 16(8), 4014; https://doi.org/10.3390/app16084014 - 21 Apr 2026
Abstract
Customer churn refers to the termination of a customer’s business relationship with a bank, representing a direct loss of future revenue. Product downsell manifests as a reduction in the number of financial products held or a downgrade in service tier, often signaling early customer disengagement. Accurately identifying customers at risk of these two behaviors has become a cornerstone of profitable growth in the competitive retail banking industry, as downsell frequently serves as a precursor to total churn. However, the existing research typically treats these highly correlated behaviors as independent prediction tasks, overlooking their intrinsic link and failing to address the critical challenges of class imbalance and regulatory demands for model interpretability. To tackle these problems, we propose an enhanced multi-task learning network (EMTL-Net), a deep learning framework specifically designed to capture the nuanced interplay between churn and downsell behaviors. EMTL-Net introduces an explicit feature interaction module to enhance the modeling of high-order feature relationships and utilizes a shared representation layer to extract universal customer risk patterns, enabling the joint prediction of churn and downsell. Furthermore, we employ Focal Loss as the training objective to dynamically adjust sample weights, effectively mitigating the class imbalance problem. Critically, to meet financial compliance requirements, we implement a SHAP-based interpretation mechanism that is compatible with multi-task outputs, providing preliminary insights into feature importance. Formal validation of interpretability claims remains an important direction for future research. The experimental results on a publicly available pedagogical bank customer benchmark dataset demonstrate that EMTL-Net achieves excellent performance on both tasks. For churn prediction, the model achieves an AUC of 0.8259, an accuracy of 0.8361, and an F1-score of 0.6235, significantly outperforming the existing baseline models. For downsell prediction (noting that the downsell label is rule-derived from the number of products held), the model achieves an AUC of 0.8932, an accuracy of 0.8571, and an F1-score of 0.7504. Ablation studies confirm the critical contributions of the explicit feature interaction module, Focal Loss, and the residual structure to model performance. Crucially, the interpretability analysis corroborates business intuition by identifying customer age, account balance, and product holdings as dominant churn drivers—a consistency that reinforces the model’s credibility and practical utility in high-stakes financial environments. Full article
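Focal Loss, the training objective named in the abstract, down-weights well-classified examples so the gradient concentrates on hard, typically minority-class samples. A minimal NumPy sketch of the binary form; gamma = 2 and alpha = 0.25 are common defaults, not necessarily the settings used for EMTL-Net:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: gamma down-weights easy examples, alpha re-weights
    the positive class. These defaults are common choices, not necessarily
    the EMTL-Net settings."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)
    return float((-w * (1.0 - pt) ** gamma * np.log(pt)).mean())

y = np.array([1, 0, 0, 0])  # imbalanced mini-batch: one churner, three retained
easy = focal_loss(np.array([0.9, 0.1, 0.1, 0.1]), y)  # confident, correct batch
hard = focal_loss(np.array([0.6, 0.4, 0.4, 0.4]), y)  # uncertain batch
```

The modulating factor (1 − pt)^gamma shrinks the confident batch's loss far more than the uncertain one's, which is how the objective dynamically re-weights samples during training.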

27 pages, 2500 KB  
Article
Injury Severity Prediction for Older Driver Accidents via Denoised Cascade Framework and Probability Calibration
by Yiyong Pan, Xilai Jia, Jieru Huang, Gen Li and Pengyu Xu
World Electr. Veh. J. 2026, 17(4), 219; https://doi.org/10.3390/wevj17040219 - 20 Apr 2026
Abstract
Accurately estimating the severity of crash injuries among older drivers is paramount for enhancing traffic safety, a task challenged by class imbalance and label noise. Traditional predictive paradigms often struggle to identify rare severe cases, as they tend to prioritize global accuracy, thereby compromising sensitivity to high-risk outcomes. To overcome these limitations, this study develops a Log-Loss Cleaned and Probability-Calibrated Cascade (L-CSC) framework by strategically integrating existing advanced algorithmic components for robust and reliable severity prediction. Initially, a Log-Loss-based noise filtering mechanism is implemented to purge outliers and ambiguous samples from the training data, thereby enabling higher-quality representation learning. Subsequently, a two-stage cascade architecture is designed to decouple the classification task. Stage I employs a Preliminary Screening Model, optimized via Bayesian optimization for F2-score, to specifically maximize the recall for severe and fatal cases. In Stage II, a Stacking ensemble classifier is deployed to achieve a fine-grained classification of injury levels among the cases identified in the initial screening. Finally, Isotonic Regression is employed to calibrate the output probabilities from both stages, ensuring that the resulting risk estimations are statistically sound and reliable. Empirical evaluations demonstrate that the L-CSC framework effectively balances overall performance with critical risk detection, achieving a robust Macro-F1 of 0.7296. Specifically, compared to the best-performing baseline, the recall and F1-score for the critical severe and fatal category showed relative improvements of over 82% and 62%, respectively. Ablation analyses further substantiate the vital contributions of both the data cleaning and calibration modules. This research demonstrates that the cascaded framework effectively mitigates the biases inherent in imbalanced datasets, providing a robust algorithmic foundation to potentially support future traffic safety interventions. Full article
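Two ingredients of the L-CSC pipeline are standard enough to sketch: isotonic regression, which fits a monotone map from raw scores to empirical event frequencies, and the F2-score, which weights recall four times as heavily as precision. A scikit-learn sketch on synthetic data; the labels, scores, and 0.3 threshold are illustrative assumptions, not the study's:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss, fbeta_score

rng = np.random.default_rng(2)

# Synthetic, mildly miscalibrated scores for an imbalanced problem (illustrative).
y = (rng.random(2_000) < 0.20).astype(int)
raw = np.clip(0.25 + 0.50 * y + 0.20 * rng.standard_normal(2_000), 0.0, 1.0)

# Isotonic regression learns a monotone map from raw scores to empirical
# event frequencies, improving calibration while preserving the ranking.
iso = IsotonicRegression(out_of_bounds="clip")
calibrated = iso.fit_transform(raw, y)
improved = brier_score_loss(y, calibrated) <= brier_score_loss(y, raw)

# F2 weights recall four times as heavily as precision (beta**2 = 4),
# matching a screening stage that must not miss severe or fatal cases.
f2 = fbeta_score(y, (calibrated >= 0.3).astype(int), beta=2)
```

Because the identity map is itself monotone, the isotonic fit can never have worse squared error (Brier score) than the raw scores on the data it was fitted to, which is what makes it a safe post-hoc calibration step.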
(This article belongs to the Section Marketing, Promotion and Socio Economics)
