
Search Results (4,338)

Search Parameters:
Keywords = balanced dataset

21 pages, 651 KB  
Article
Enhancement Without Contrast: Stability-Aware Multicenter Machine Learning for Glioma MRI Imaging
by Sajad Amiri, Shahram Taeb, Sara Gharibi, Setareh Dehghanfard, Somayeh Sadat Mehrnia, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim and Mohammad R. Salmanpour
Inventions 2026, 11(1), 11; https://doi.org/10.3390/inventions11010011 - 26 Jan 2026
Abstract
Gadolinium-based contrast agents (GBCAs) are vital for glioma imaging yet pose safety, cost, and accessibility issues; predicting contrast enhancement from non-contrast MRI via machine learning (ML) provides a safer, economical alternative, as enhancement indicates tumor aggressiveness and informs treatment planning. However, scanner and population variability hinder robust model selection. To overcome this, a stability-aware framework was developed to identify reproducible ML pipelines for predicting glioma contrast enhancement across multicenter cohorts. A total of 1367 glioma cases from four TCIA datasets (UCSF-PDGM, UPENN-GB, BRATS-Africa, BRATS-TCGA-LGG) were analyzed, using non-contrast T1-weighted images as input and deriving enhancement status from paired post-contrast T1-weighted images; 108 IBSI-standardized radiomics features were extracted via PyRadiomics 3.1, then systematically combined with 48 dimensionality reduction algorithms and 25 classifiers into 1200 pipelines, evaluated through rotational validation (training on three datasets, external testing on the fourth, repeated across rotations) incorporating five-fold cross-validation and a composite score penalizing instability via standard deviation. Cross-validation accuracies spanned 0.91–0.96, with external testing yielding 0.87 (UCSF-PDGM), 0.98 (UPENN-GB), and 0.95 (BRATS-Africa), averaging ~0.93; F1, precision, and recall remained stable (0.87–0.96), while ROC-AUC varied (0.50–0.82) due to cohort heterogeneity, with the MI + ETr pipeline ranking highest for balanced accuracy and stability. This framework enables reliable, generalizable prediction of contrast enhancement from non-contrast glioma MRI, minimizing GBCA dependence and offering a scalable template for reproducible ML in neuro-oncology. Full article
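The composite criterion described in the abstract, mean performance penalized by its standard deviation across dataset rotations, can be illustrated with a minimal sketch. The `penalty` weight and the toy accuracy values are assumptions for illustration, not taken from the paper:

```python
def stability_score(rotation_accuracies, penalty=1.0):
    """Composite score: mean external-test accuracy penalized by its
    standard deviation across dataset rotations (hypothetical weighting)."""
    n = len(rotation_accuracies)
    mean = sum(rotation_accuracies) / n
    var = sum((a - mean) ** 2 for a in rotation_accuracies) / n
    return mean - penalty * var ** 0.5

# A pipeline that is slightly less accurate on average but far more
# stable across rotations can outrank a volatile one.
stable = stability_score([0.92, 0.93, 0.92])
volatile = stability_score([0.98, 0.85, 0.96])
```

Under this scoring, the steadier pipeline wins even though its best single rotation is worse, which is the behavior the paper's stability-aware ranking is designed to produce.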
(This article belongs to the Special Issue Machine Learning Applications in Healthcare and Disease Prediction)
Show Figures

Figure 1

31 pages, 5762 KB  
Article
Rarity-Aware Stratified Active Learning for Class-Imbalanced Industrial Object Detection
by Zhor Benhafid and Sid Ahmed Selouani
Appl. Sci. 2026, 16(3), 1236; https://doi.org/10.3390/app16031236 - 26 Jan 2026
Abstract
Object detection systems deployed in industrial environments are often constrained by limited annotation budgets, severe class imbalance, and heterogeneous visual conditions. Active learning (AL) aims to reduce labeling costs by selecting informative samples; however, existing strategies struggle to simultaneously ensure robust performance, rare-class coverage, and stability under realistic industrial constraints. In this work, we propose a rarity-aware, stratified AL framework for industrial object detection that explicitly aligns sample selection with class imbalance and annotation efficiency. The method relies on a composite image-level score that jointly captures model uncertainty, informativeness, and complementary diversity cues, while adaptively emphasizing rare classes. Crucially, a stratified querying mechanism is introduced to explicitly regulate class-wise sample allocation during selection, playing a key role in improving performance stability and rare-class coverage under severe imbalance, without sacrificing global informativeness. The proposed approach operates purely at the data-selection level, making it detector-agnostic and directly applicable to modern object detection pipelines. Experiments conducted on two real-world industrial datasets involving lobster and snow crab parts, using YOLOv10 and YOLOv12, demonstrate improved training stability and annotation efficiency across balanced, imbalanced, and noisy settings over multiple active learning cycles with up to 15% labeled data. Complementary comparisons with fully supervised training further show that using only 45–65% of the labeled data is sufficient to retain more than 97% of full-supervision mAP@50 and over 90% of mAP@50:95. Full article
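A stratified querying mechanism of this kind can be sketched as an annotation-budget split weighted toward rare classes. The inverse-frequency weighting, the `alpha` exponent, and the class names below are hypothetical; the paper's actual allocation rule may differ:

```python
def stratified_allocation(class_counts, budget, alpha=1.0):
    """Split an annotation budget across classes, weighting rare classes
    more heavily (weight proportional to 1 / count**alpha). Hypothetical
    scheme standing in for the paper's stratified querying step."""
    weights = {c: 1.0 / (n ** alpha) for c, n in class_counts.items()}
    total = sum(weights.values())
    return {c: int(round(budget * w / total)) for c, w in weights.items()}

# The rarest class receives the largest share of the 100-label budget.
alloc = stratified_allocation({"claw": 900, "tail": 90, "carapace": 10},
                              budget=100)
```

Within each class's quota, images would then be ranked by the composite uncertainty/diversity score before labeling.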
(This article belongs to the Special Issue AI in Industry 4.0)
22 pages, 7617 KB  
Article
DAS-YOLO: Adaptive Structure–Semantic Symmetry Calibration Network for PCB Defect Detection
by Weipan Wang, Wengang Jiang, Lihua Zhang, Siqing Chen and Qian Zhang
Symmetry 2026, 18(2), 222; https://doi.org/10.3390/sym18020222 - 25 Jan 2026
Abstract
Industrial-grade printed circuit boards (PCBs) exhibit high structural order and inherent geometric symmetry, where minute surface defects essentially constitute symmetry-breaking anomalies that disrupt topological integrity. Detecting these anomalies is quite challenging due to issues like scale variation and low contrast. Therefore, this paper proposes a symmetry-aware object detection framework, DAS-YOLO, based on an improved YOLOv11. The U-shaped adaptive feature extraction module (Def-UAD) reconstructs the C3K2 unit, overcoming the geometric limitations of standard convolutions through a deformation adaptation mechanism. This significantly enhances feature extraction capabilities for irregular defect topologies. A semantic-aware module (SADRM) is introduced at the backbone and neck regions. The lightweight and efficient ESSAttn improves the distinguishability of small or weak targets. At the same time, to address information asymmetry between deep and shallow features, an iterative attention feature fusion module (IAFF) is designed. By dynamically weighting and calibrating feature biases, it achieves structured coordination and balanced multi-scale representation. To evaluate the validity of the proposed method, we carried out comprehensive experiments using publicly accessible datasets focused on PCB defects. The results show that the Recall, mAP@50, and mAP@50-95 of DAS-YOLO reached 82.60%, 89.50%, and 46.60%, respectively, which are 3.7%, 1.8%, and 2.9% higher than those of the baseline model, YOLOv11n. Comparisons with mainstream detectors such as GD-YOLO and SRN further demonstrate a significant advantage in detection accuracy. These results confirm that the proposed framework offers a solution that strikes a balance between accuracy and practicality in addressing the key challenges in PCB surface defect detection. Full article
(This article belongs to the Section Computer)
31 pages, 4489 KB  
Article
A Hybrid Intrusion Detection Framework Using Deep Autoencoder and Machine Learning Models
by Salam Allawi Hussein and Sándor R. Répás
AI 2026, 7(2), 39; https://doi.org/10.3390/ai7020039 - 25 Jan 2026
Abstract
This study provides a detailed comparative analysis of three hybrid intrusion detection methods aimed at strengthening network security through precise and adaptive threat identification. The proposed framework integrates an Autoencoder-Gaussian Mixture Model (AE-GMM) with two supervised learning techniques, XGBoost and Logistic Regression, combining deep feature extraction with interpretability and stable generalization. Although the downstream classifiers are trained in a supervised manner, the hybrid intrusion detection nature of the framework is preserved through unsupervised representation learning and probabilistic modeling in the AE-GMM stage. Two benchmark datasets were used for evaluation: NSL-KDD, representing traditional network behavior, and UNSW-NB15, reflecting modern and diverse traffic patterns. A consistent preprocessing pipeline was applied, including normalization, feature selection, and dimensionality reduction, to ensure fair comparison and efficient training. The experimental findings show that hybridizing deep learning with gradient-boosted and linear classifiers markedly enhances detection performance and resilience. The AE-GMM-XGBoost model achieved superior outcomes, reaching an F1-score above 0.94 ± 0.0021 and an AUC greater than 0.97 on both datasets, demonstrating high accuracy in distinguishing legitimate and malicious traffic. AE-GMM-Logistic Regression also achieved strong and balanced performance, recording an F1-score exceeding 0.91 ± 0.0020 with stable generalization across test conditions. Conversely, the standalone AE-GMM effectively captured deep latent patterns but exhibited lower recall, indicating limited sensitivity to subtle or emerging attacks. These results collectively confirm that integrating autoencoder-based representation learning with advanced supervised models significantly improves intrusion detection in complex network settings.
The proposed framework therefore provides a solid and extensible basis for future research in explainable and federated intrusion detection, supporting the development of adaptive and proactive cybersecurity defenses. Full article
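The unsupervised AE-GMM stage can be illustrated by scoring a latent vector under a Gaussian mixture. This is a minimal diagonal-covariance sketch; the autoencoder itself and the mixture fitting are elided, and all values are illustrative:

```python
import math

def gmm_score(z, components):
    """Log-likelihood of latent vector z under a diagonal-covariance GMM.
    components: list of (weight, means, variances) tuples. Sketch of the
    AE-GMM scoring stage; low scores flag anomalous traffic."""
    total = 0.0
    for w, mu, var in components:
        ll = 0.0
        for zi, mi, vi in zip(z, mu, var):
            ll += -0.5 * (math.log(2 * math.pi * vi) + (zi - mi) ** 2 / vi)
        total += w * math.exp(ll)
    return math.log(total)

# One hypothetical component fitted to "normal" latent traffic:
comps = [(1.0, [0.0, 0.0], [1.0, 1.0])]
near = gmm_score([0.1, 0.1], comps)   # typical sample: high likelihood
far = gmm_score([4.0, 4.0], comps)    # outlier: low likelihood
```

One way the stages could be coupled is to append this score to the feature vector consumed by the downstream XGBoost or logistic-regression classifier.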
20 pages, 4006 KB  
Article
Deformable Pyramid Sparse Transformer for Semi-Supervised Driver Distraction Detection
by Qiang Zhao, Zhichao Yu, Jiahui Yu, Simon James Fong, Yuchu Lin, Rui Wang and Weiwei Lin
Sensors 2026, 26(3), 803; https://doi.org/10.3390/s26030803 (registering DOI) - 25 Jan 2026
Abstract
Ensuring sustained driver attention is critical for intelligent transportation safety systems; however, the performance of data-driven driver distraction detection models is often limited by the high cost of large-scale manual annotation. To address this challenge, this paper proposes an adaptive semi-supervised driver distraction detection framework based on teacher–student learning and deformable pyramid feature fusion. The framework leverages a limited amount of labeled data together with abundant unlabeled samples to achieve robust and scalable distraction detection. An adaptive pseudo-label optimization strategy is introduced, incorporating category-aware pseudo-label thresholding, delayed pseudo-label scheduling, and a confidence-weighted pseudo-label loss to dynamically balance pseudo-label quality and training stability. To enhance fine-grained perception of subtle driver behaviors, a Deformable Pyramid Sparse Transformer (DPST) module is integrated into a lightweight YOLOv11 detector, enabling precise multi-scale feature alignment and efficient cross-scale semantic fusion. Furthermore, a teacher-guided feature consistency distillation mechanism is employed to promote semantic alignment between teacher and student models at the feature level, mitigating the adverse effects of noisy pseudo-labels. Extensive experiments conducted on the Roboflow Distracted Driving Dataset demonstrate that the proposed method outperforms representative fully supervised baselines in terms of mAP@0.5 and mAP@0.5:0.95 while maintaining a balanced trade-off between precision and recall. These results indicate that the proposed framework provides an effective and practical solution for real-world driver monitoring systems under limited annotation conditions. Full article
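Category-aware pseudo-label thresholding can be sketched as a per-class confidence filter over a model's unlabeled-set predictions; the class names and threshold values below are hypothetical:

```python
def accept_pseudo_labels(preds, class_thresholds, default=0.9):
    """Keep only pseudo-labels whose confidence clears a per-class
    threshold (category-aware thresholding; values are illustrative)."""
    return [(label, conf) for label, conf in preds
            if conf >= class_thresholds.get(label, default)]

# Rarer or harder classes can be given looser thresholds so they are
# not starved of pseudo-labels during teacher-student training.
kept = accept_pseudo_labels(
    [("phone", 0.85), ("drinking", 0.70), ("phone", 0.55)],
    {"phone": 0.8, "drinking": 0.6})
```

In the framework described, the accepted pseudo-labels would additionally be confidence-weighted in the loss and introduced on a delayed schedule.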
(This article belongs to the Section Vehicular Sensing)
20 pages, 1854 KB  
Article
Dual-Optimized Genetic Algorithm for Edge-Ready IoT Intrusion Detection on Raspberry Pi
by Khawlah Harasheh, Satinder Gill, Kendra Brinkley, Salah Garada, Dindin Aro Roque, Hayat MacHrouhi, Janera Manning-Kuzmanovski, Jesus Marin-Leal, Melissa Isabelle Arganda-Villapando and Sayed Ahmad Shah Sekandary
J 2026, 9(1), 3; https://doi.org/10.3390/j9010003 - 25 Jan 2026
Abstract
The Internet of Things (IoT) is increasingly deployed at the edge under resource and environmental constraints, which limits the practicality of traditional intrusion detection systems (IDSs) on IoT hardware. This paper presents two IDS configurations. First, we develop a baseline IDS with fixed hyperparameters, achieving 99.20% accuracy and ~0.002 ms/sample inference latency on a desktop machine; this configuration is suitable for high-performance platforms but is not intended for constrained IoT deployment. Second, we propose a lightweight, edge-oriented IDS that applies ANOVA-based filter feature selection and uses a genetic algorithm (GA) for the bounded hyperparameter tuning of the classifier under stratified cross-validation, enabling efficient execution on Raspberry Pi-class devices. The lightweight IDS achieves 98.95% accuracy with ~4.3 ms/sample end-to-end inference latency on Raspberry Pi while detecting both low-volume and high-volume (DoS/DDoS) attacks. Experiments are conducted in a real Raspberry Pi-based lab using an up-to-date mixed-modal dataset combining system/network telemetry and heterogeneous physical sensors. Overall, the proposed framework demonstrates a practical, hardware-aware, and reproducible way to balance detection performance and edge-level latency using established techniques for real-world IoT IDS deployment. Full article
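A bounded GA tuner of the kind described can be sketched as follows. The operators used here (truncation selection, uniform crossover, clipped Gaussian mutation) and the toy objective stand in for the paper's actual tuner and its cross-validated fitness:

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal bounded GA sketch: keep the fittest half, breed children
    by uniform crossover, mutate one gene with Gaussian noise clipped
    to the bounds. Illustrative only, not the paper's exact operators."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai if rng.random() < 0.5 else bi)
                     for ai, bi in zip(a, b)]
            i = rng.randrange(dim)
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children  # elitism: parents survive unchanged
    return max(pop, key=fitness)

# Toy 2-D objective standing in for cross-validated accuracy,
# maximized near hyperparameters (3, 0.5).
best = genetic_search(lambda h: -(h[0] - 3) ** 2 - (h[1] - 0.5) ** 2,
                      bounds=[(1, 10), (0.0, 1.0)])
```

In the paper's setting, `fitness` would be the stratified cross-validation score of the classifier trained with the candidate hyperparameters, and `bounds` the allowed hyperparameter ranges.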
17 pages, 566 KB  
Article
AE-CTGAN: Autoencoder–Conditional Tabular GAN for Multi-Omics Imbalanced Class Handling and Cancer Outcome Prediction
by Ibrahim Al-Hurani, Sara H. ElFar, Abedalrhman Alkhateeb and Salama Ikki
Algorithms 2026, 19(2), 95; https://doi.org/10.3390/a19020095 (registering DOI) - 25 Jan 2026
Abstract
The rapid advancement of sequencing technologies has led to the generation of complex multi-omics data, which are often high-dimensional, noisy, and imbalanced, posing significant challenges for traditional machine learning methods. The novelty of this work resides in the architecture-level integration of autoencoders with Generative Adversarial Network (GAN) and Conditional Tabular Generative Adversarial Network (CTGAN) models, where the autoencoder is employed for latent feature extraction and noise reduction, while GAN-based models are used for realistic sample generation and class imbalance mitigation in multi-omics cancer datasets. This study proposes a novel framework that combines an autoencoder for dimensionality reduction and a CTGAN for generating synthetic samples to balance underrepresented classes. The process starts with selecting the most discriminative features, then extracting latent representations for each omic type, merging them, and generating new minority samples. Finally, all samples are used to train a neural network to predict specific cancer outcomes, defined here as clinically relevant biomarkers or patient characteristics. In this work, the considered outcome in bladder cancer is Tumor Mutational Burden (TMB), while the breast cancer outcome is menopausal status, a key factor in treatment planning. Experimental results show that the proposed model achieves high precision, with an average precision of 0.9929 for TMB prediction in bladder cancer and 0.9748 for menopausal status in breast cancer, and reaches perfect precision (1.000) for the positive class in both cases. In addition, the proposed AE-CTGAN framework consistently outperformed an autoencoder combined with a standard GAN across all evaluation metrics, achieving average accuracies of 0.9929 and 0.9748, recall values of 0.9846 and 0.9777, and F1-scores of 0.9922 for bladder and breast cancer datasets, respectively.
A comparative fidelity analysis in the latent space further demonstrated the superiority of CTGAN, reducing the average Euclidean distance between real and synthetic samples by approximately 72% for bladder cancer and by up to 84% for breast cancer compared to a standard GAN. These findings confirm that CTGAN generates high-fidelity synthetic samples that preserve the structural characteristics of real multi-omics data, leading to more reliable class balancing and improved predictive performance. Overall, the proposed framework provides an effective and robust solution for handling class imbalance in multi-omics cancer data and enhances the accuracy of clinically relevant outcome prediction. Full article
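The latent-space fidelity comparison can be illustrated as an average nearest-neighbor Euclidean distance from synthetic to real latent vectors; this is one plausible reading of the metric, shown on toy data:

```python
import math

def mean_nn_distance(real, synthetic):
    """Average distance from each synthetic latent vector to its nearest
    real one; lower means higher fidelity. Illustrative reading of the
    latent-space comparison, not the paper's exact protocol."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(s, r) for r in real)
               for s in synthetic) / len(synthetic)

real = [[0.0, 0.0], [1.0, 1.0]]
close = mean_nn_distance(real, [[0.1, 0.0], [0.9, 1.0]])  # high fidelity
far = mean_nn_distance(real, [[3.0, 3.0], [4.0, 4.0]])    # low fidelity
```

A generator whose samples sit near the real latent manifold (small distance) would, by this measure, be the more faithful one, matching the reported CTGAN-vs-GAN comparison.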
30 pages, 7439 KB  
Article
Traffic Forecasting for Industrial Internet Gateway Based on Multi-Scale Dependency Integration
by Tingyu Ma, Jiaqi Liu, Panfeng Xu and Yan Song
Sensors 2026, 26(3), 795; https://doi.org/10.3390/s26030795 (registering DOI) - 25 Jan 2026
Abstract
Industrial gateways serve as critical data aggregation points within the Industrial Internet of Things (IIoT), enabling seamless data interoperability that empowers enterprises to extract value from equipment data more efficiently. However, their role exposes a fundamental trade-off between computational efficiency and prediction accuracy—a contradiction yet to be fully resolved by existing approaches. The rapid proliferation of IoT devices has led to a corresponding surge in network traffic, posing significant challenges for traffic forecasting methods. While deep learning models such as Transformers and GNNs demonstrate high accuracy in traffic prediction, their substantial computational and memory demands hinder effective deployment on resource-constrained industrial gateways; and while simple linear models offer low complexity, they struggle to effectively capture the complex characteristics of IIoT traffic—which often exhibits high nonlinearity, significant burstiness, and a wide distribution of time scales. The inherent time-varying nature of traffic data further complicates achieving high prediction accuracy. To address these interrelated challenges, we propose the lightweight and theoretically grounded DOA-MSDI-CrossLinear framework, redefining traffic forecasting as a hierarchical decomposition–interaction problem. Unlike existing approaches that simply combine components, we recognize that industrial traffic inherently exhibits scale-dependent temporal correlations requiring explicit decomposition prior to interaction modeling. The Multi-Scale Decomposable Mixing (MDM) module implements this concept through adaptive sequence decomposition, while the Dual Dependency Interaction (DDI) module simultaneously captures dependencies across time and channels. Ultimately, decomposed patterns are fed into an enhanced CrossLinear model to predict flow values for specific future time periods.
The Dream Optimization Algorithm (DOA) provides bio-inspired hyperparameter tuning that balances exploration and exploitation—particularly suited for the non-convex optimization scenarios typical in industrial forecasting tasks. Extensive experiments on real industrial IoT datasets thoroughly validate the effectiveness of this approach. Full article
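The decomposition step at the heart of multi-scale mixing can be sketched, at a single scale, as a moving-average trend/residual split; the window size and the toy traffic series below are illustrative:

```python
def decompose(series, window):
    """Split a traffic series into a centered moving-average trend and a
    residual: the basic operation behind multi-scale decomposable mixing
    (simplified to one scale, with shrunken windows at the edges)."""
    n = len(series)
    trend = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

# A bursty sample: the spike at index 3 lands in the residual,
# while the trend stays near the baseline level.
trend, residual = decompose([10, 12, 11, 40, 12, 11, 10], window=3)
```

A multi-scale variant would apply this split at several window sizes and let the interaction module model each component separately.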
(This article belongs to the Section Industrial Sensors)
36 pages, 1564 KB  
Article
Transformer-Based Multi-Source Transfer Learning for Intrusion Detection Models with Privacy and Efficiency Balance
by Baoqiu Yang, Guoyin Zhang and Kunpeng Wang
Entropy 2026, 28(2), 136; https://doi.org/10.3390/e28020136 - 24 Jan 2026
Abstract
The current intrusion detection methods suffer from deficiencies in terms of cross-domain adaptability, privacy preservation, and limited effectiveness in detecting minority-class attacks. To address these issues, a novel intrusion detection model framework, TrMulS, is proposed that integrates federated learning, generative adversarial networks with multispace feature enhancement ability, and transformers with multi-source transfer ability. First, at each institution (source domain), local spatial features are extracted through a CNN, multiple subsets are constructed (to solve the feature singularity problem), and the multihead self-attention mechanism of the transformer is utilized to capture the correlation of features. Second, the synthetic samples of the target domain are generated on the basis of the improved Exchange-GAN, and the cross-domain transfer module is designed by combining the Maximum Mean Discrepancy (MMD) to minimize the feature distribution difference between the source domain and the target domain. Finally, the federated transfer learning strategy is adopted. The model parameters of each local institution are encrypted and uploaded to the target server and then aggregated to generate the global model. These steps iterate until convergence, yielding the globally optimal model. Experiments on the ISCX2012, KDD99, and NSL-KDD standard intrusion detection datasets show that the detection accuracy of this method is significantly improved in cross-domain scenarios. This paper presents a novel paradigm for cross-domain security intelligence analysis that balances efficiency and privacy. Full article
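The MMD term minimized by the cross-domain transfer module can be sketched for scalar features with an RBF kernel. This is the standard biased estimator; `gamma` and the toy samples are illustrative:

```python
import math

def mmd_rbf(xs, ys, gamma=1.0):
    """Squared Maximum Mean Discrepancy between two 1-D samples under an
    RBF kernel (biased estimator). Small when the two feature
    distributions match; the transfer module minimizes this quantity."""
    def k(a, b):
        return math.exp(-gamma * (a - b) ** 2)
    def avg(us, vs):
        return sum(k(u, v) for u in us for v in vs) / (len(us) * len(vs))
    return avg(xs, xs) + avg(ys, ys) - 2 * avg(xs, ys)

same = mmd_rbf([0.0, 0.1, 0.2], [0.05, 0.15, 0.1])   # near-identical
different = mmd_rbf([0.0, 0.1, 0.2], [3.0, 3.1, 3.2])  # shifted domain
```

In the framework, source- and target-domain feature vectors would replace the scalars, and the MMD value would enter the training loss so the shared representation aligns the two domains.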
20 pages, 1978 KB  
Article
UAV-Based Forest Fire Early Warning and Intervention Simulation System with High-Accuracy Hybrid AI Model
by Muhammet Sinan Başarslan and Hikmet Canlı
Appl. Sci. 2026, 16(3), 1201; https://doi.org/10.3390/app16031201 - 23 Jan 2026
Abstract
In this study, a hybrid deep learning model that combines the VGG16 and ResNet101V2 architectures is proposed for image-based fire detection. In addition, a balanced drone guidance algorithm is developed to efficiently assign tasks to available UAVs. In the fire detection phase, the hybrid model created by combining the VGG16 and ResNet101V2 architectures has been optimized with Global Average Pooling and layer merging techniques to increase classification success. The DeepFire dataset was used throughout the training process, achieving an extremely high accuracy rate of 99.72% and 100% precision. After fire detection, a task assignment algorithm was developed to assign existing drones to fire points at minimum cost and with balanced load distribution. This algorithm performs task assignments using the Hungarian (Kuhn–Munkres) method and cost optimization, and is adapted to direct approximately equal numbers of drones to each fire when the number of fires is less than the number of drones. The developed system was tested in a Python-based simulation environment and evaluated using performance metrics such as total intervention time, energy consumption, and task balance. The results demonstrate that the proposed hybrid model provides highly accurate fire detection and that the task assignment system creates balanced and efficient intervention scenarios. Full article
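On toy problem sizes, the minimum-cost assignment computed by the Hungarian (Kuhn–Munkres) method can be reproduced by exhaustive search; the cost matrix below is hypothetical:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive minimum-cost one-to-one assignment; stands in for the
    Hungarian (Kuhn-Munkres) method at these toy sizes (n! search)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# cost[i][j]: hypothetical travel cost of drone i to fire point j.
assignment, total = min_cost_assignment([[4, 1, 3],
                                         [2, 0, 5],
                                         [3, 2, 2]])
```

The paper's system additionally balances load when fires outnumber drones (or vice versa); this sketch covers only the square, minimum-cost core of the task assignment.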
26 pages, 8183 KB  
Article
MEE-DETR: Multi-Scale Edge-Aware Enhanced Transformer for PCB Defect Detection
by Xiaoyu Ma, Xiaolan Xie and Yuhui Song
Electronics 2026, 15(3), 504; https://doi.org/10.3390/electronics15030504 - 23 Jan 2026
Abstract
Defect inspection of Printed Circuit Boards (PCBs) is essential for maintaining the safety and reliability of electronic products. With the continuous trend toward smaller components and higher integration levels, identifying tiny imperfections on densely packed PCB structures has become increasingly difficult and remains a major challenge for current inspection systems. To tackle this problem, this study proposes the Multi-scale Edge-Aware Enhanced Detection Transformer (MEE-DETR), a deep learning-based object detection method. Building upon the RT-DETR framework, which is grounded in Transformer-based machine learning, the proposed approach systematically introduces enhancements at three levels: backbone feature extraction, feature interaction, and multi-scale feature fusion. First, the proposed Edge-Strengthened Backbone Network (ESBN) constructs multi-scale edge extraction and semantic fusion pathways, effectively strengthening the structural representation of shallow defect edges. Second, the Entanglement Transformer Block (ETB) synergistically integrates frequency self-attention, spatial self-attention, and a frequency–spatial entangled feed-forward network, enabling deep cross-domain information interaction and consistent feature representation. Finally, the proposed Adaptive Enhancement Feature Pyramid Network (AEFPN), incorporating the Adaptive Cross-scale Fusion Module (ACFM) for cross-scale adaptive weighting and the Enhanced Feature Extraction C3 Module (EFEC3) for local nonlinear enhancement, substantially improves detail preservation and semantic balance during feature fusion. Experiments conducted on the PKU-Market-PCB dataset reveal that MEE-DETR delivers notable performance gains. Specifically, Precision, Recall, and mAP50–95 improve by 2.5%, 9.4%, and 4.2%, respectively. In addition, the model’s parameter size is reduced by 40.7%.
These results collectively indicate that MEE-DETR achieves excellent detection performance with a lightweight network architecture. Full article
20 pages, 1369 KB  
Article
Symmetry-Aware Interpretable Anomaly Alarm Optimization Method for Power Monitoring Systems Based on Hierarchical Attention Deep Reinforcement Learning
by Zepeng Hou, Qiang Fu, Weixun Li, Yao Wang, Zhengkun Dong, Xianlin Ye, Xiaoyu Chen and Fangyu Zhang
Symmetry 2026, 18(2), 216; https://doi.org/10.3390/sym18020216 - 23 Jan 2026
Abstract
With the rapid advancement of smart grids driven by renewable energy integration and the extensive deployment of supervisory control and data acquisition (SCADA) and phasor measurement units (PMUs), addressing the escalating alarm flooding via intelligent analysis of large-scale alarm data is pivotal to safeguarding the safe and stable operation of power grids. To tackle these challenges, this study introduces a pioneering alarm optimization framework based on symmetry-driven crowdsourced active learning and interpretable deep reinforcement learning (DRL). Firstly, an anomaly alarm annotation method integrating differentiated crowdsourcing and active learning is proposed to mitigate the inherent asymmetry in data distribution. Secondly, a symmetrically structured DRL-based hierarchical attention deep Q-network is designed with a dual-path encoder to balance the processing of multi-scale alarm features. Finally, a SHAP-driven interpretability framework is established, providing global and local attribution to enhance decision transparency. Experimental results on a real-world power alarm dataset demonstrate that the proposed method achieves a Fleiss’ Kappa of 0.82 in annotation consistency and an F1-Score of 0.95 in detection performance, significantly outperforming state-of-the-art baselines. Additionally, the false positive rate is reduced to 0.04, verifying the framework’s effectiveness in suppressing alarm flooding while maintaining high recall. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Data Analysis)
23 pages, 602 KB  
Article
An Intelligent Hybrid Ensemble Model for Early Detection of Breast Cancer in Multidisciplinary Healthcare Systems
by Hasnain Iftikhar, Atef F. Hashem, Moiz Qureshi, Paulo Canas Rodrigues, S. O. Ali, Ronny Ivan Gonzales Medina and Javier Linkolk López-Gonzales
Diagnostics 2026, 16(3), 377; https://doi.org/10.3390/diagnostics16030377 - 23 Jan 2026
Abstract
Background/Objectives: In the modern healthcare landscape, breast cancer remains one of the most prevalent malignancies and a leading cause of mortality among women worldwide. Early and accurate prediction of breast cancer plays a pivotal role in effective diagnosis, treatment planning, and improving survival outcomes. However, due to the complexity and heterogeneity of medical data, achieving high predictive accuracy remains a significant challenge. This study proposes an intelligent hybrid system that integrates traditional machine learning (ML), deep learning (DL), and ensemble learning approaches for enhanced breast cancer prediction using the Wisconsin Breast Cancer Dataset. Methods: The proposed system employs a multistage framework comprising three main phases: (1) data preprocessing and balancing, which involves normalization using the min–max technique and application of the Synthetic Minority Over-sampling Technique (SMOTE) to mitigate class imbalance; (2) model development, where multiple ML algorithms, DL architectures, and a novel ensemble model are applied to the preprocessed data; and (3) model evaluation and validation, performed under three distinct training–testing scenarios to ensure robustness and generalizability. Model performance was assessed using six statistical evaluation metrics—accuracy, precision, recall, F1-score, specificity, and AUC—alongside graphical analyses and rigorous statistical tests to evaluate predictive consistency. Results: The findings demonstrate that the proposed ensemble model significantly outperforms individual machine learning and deep learning models in terms of predictive accuracy, stability, and reliability. A comparative analysis also reveals that the ensemble system surpasses several state-of-the-art methods reported in the literature. Conclusions: The proposed intelligent hybrid system offers a promising, multidisciplinary approach for improving diagnostic decision support in breast cancer prediction. By integrating advanced data preprocessing, machine learning, and deep learning paradigms within a unified ensemble framework, this study contributes to the broader goals of precision oncology and AI-driven healthcare, aligning with global efforts to enhance early cancer detection and personalized medical care. Full article
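The preprocessing phase described above pairs min–max normalization with SMOTE oversampling. The following is a simplified, dependency-free sketch of both ideas (the study itself would typically use a library implementation such as imbalanced-learn's `SMOTE`, which uses k-nearest-neighbour interpolation; the helper names here are illustrative):

```python
import random

def min_max_scale(X):
    """Scale each feature column of X (list of rows) into [0, 1]."""
    cols = list(zip(*X))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(row, mins, maxs)] for row in X]

def smote_oversample(minority, n_new, k=2, seed=0):
    """Naive SMOTE: create n_new synthetic minority samples by linearly
    interpolating between a random minority sample and one of its
    k nearest minority-class neighbours."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + t * (b - a) for a, b in zip(x, nb)])
    return synthetic
```

Because synthetic points are convex combinations of real minority samples, they stay inside the minority class's feature range, which is what lets SMOTE rebalance the classes without simply duplicating rows.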
15 pages, 2355 KB  
Article
Distinct Seed Endophytic Bacterial Communities Are Associated with Blast Resistance in Yongyou Hybrid Rice Varieties
by Yanbo Chen, Caiyu Lu, Zhenyu Liu, Zhixin Chen, Jianfeng Chen, Xiaomeng Zhang, Xianting Wang, Bin Ma, Houjin Lv, Huiyun Dong and Yanling Liu
Agronomy 2026, 16(3), 280; https://doi.org/10.3390/agronomy16030280 - 23 Jan 2026
Abstract
Rice blast, caused by the fungal pathogen Pyricularia oryzae, remains one of the most destructive diseases threatening global rice production. Although the deployment of resistant cultivars is widely regarded as the most effective and sustainable control strategy, resistance based solely on host genetics often has limited durability due to the rapid adaptation of the pathogen. Increasing evidence suggests that plant-associated microbial communities contribute to host health and disease resistance, yet the role of seed-associated microbiota in shaping rice blast resistance remains insufficiently understood. In this study, we investigated seed endophytic bacterial communities across multiple indica–japonica hybrid rice varieties from the Yongyou series that exhibit contrasting levels of resistance to rice blast. Using amplicon sequencing, we identified distinct seed bacterial assemblages associated with blast-resistant and blast-susceptible varieties. Notably, the microbial communities in blast-resistant varieties exhibited a significantly higher Shannon index (median 3.478 vs. 2.654 in susceptible varieties, p < 0.001), indicating greater diversity and a more balanced community structure. Several bacterial taxa consistently enriched in resistant varieties showed negative ecological associations with P. oryzae, both at the local scale and across publicly available global metagenomic datasets. These findings indicate that seed endophytic bacterial communities are non-randomly structured in relation to host resistance phenotypes and may contribute to rice blast resistance through persistent ecological interactions with the pathogen. This work highlights the potential importance of seed-associated microbiota as intrinsic components of varietal resistance and provides a microbial perspective for improving durable disease resistance in rice breeding programs. Full article
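The Shannon index reported above summarizes both richness (number of taxa) and evenness of a community. As a small illustration (assuming, as is conventional, the natural-log form; this is not the authors' analysis code):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i), where p_i is the
    relative abundance of taxon i; zero-count taxa are skipped."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A perfectly even 3-taxon community reaches the maximum, ln(3) ≈ 1.099;
# a community dominated by one taxon scores much lower.
print(shannon_index([10, 10, 10]))  # maximum for 3 taxa
print(shannon_index([28, 1, 1]))    # dominated, lower diversity
```

This is why a higher median index in the resistant varieties indicates not just more taxa but a more balanced distribution of abundances among them.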
31 pages, 1140 KB  
Review
A Survey of Multi-Layer IoT Security Using SDN, Blockchain, and Machine Learning
by Reorapetse Molose and Bassey Isong
Electronics 2026, 15(3), 494; https://doi.org/10.3390/electronics15030494 - 23 Jan 2026
Abstract
The integration of Software-Defined Networking (SDN), blockchain (BC), and machine learning (ML) has emerged as a promising approach to securing Internet of Things (IoT) and Industrial IoT (IIoT) networks. This paper presents a comprehensive review of recent studies focusing on multi-layered security across device, control, network, and application layers. The analysis reveals that BC technology ensures decentralised trust, immutability, and secure access validation, while SDN enables programmability, load balancing, and real-time monitoring. In addition, ML/deep learning (DL) techniques, including federated and hybrid learning, strengthen anomaly detection, predictive security, and adaptive mitigation. Reported evaluations show comparable gains in detection accuracy, latency, throughput, and energy efficiency, with effective defence against threats, though differing experimental contexts limit direct comparison. The review also shows that the solutions' effectiveness depends on ecosystem factors such as SDN controllers, BC platforms, cryptographic protocols, and ML frameworks. However, most studies rely on simulations or small-scale testbeds, leaving large-scale and heterogeneous deployments unverified. Significant challenges include scalability, computational and energy overhead, dataset dependency, limited adversarial resilience, and the explainability of ML-driven decisions. Based on the findings, future research should focus on lightweight consensus mechanisms for constrained devices, privacy-preserving ML/DL, and cross-layer adversarial-resilient frameworks. Advancing these directions will be important in achieving scalable, interoperable, and trustworthy SDN-IoT/IIoT security solutions. Full article
(This article belongs to the Section Artificial Intelligence)