Search Results (1,107)

Search Parameters:
Keywords = deep learning for network security

22 pages, 2634 KiB  
Article
A Lightweight Intrusion Detection System with Dynamic Feature Fusion Federated Learning for Vehicular Network Security
by Junjun Li, Yanyan Ma, Jiahui Bai, Congming Chen, Tingting Xu and Chi Ding
Sensors 2025, 25(15), 4622; https://doi.org/10.3390/s25154622 - 25 Jul 2025
Abstract
The rapid integration of complex sensors and electronic control units (ECUs) in autonomous vehicles significantly increases cybersecurity risks in vehicular networks. Although the Controller Area Network (CAN) is efficient, it lacks inherent security mechanisms and is vulnerable to various network attacks. Traditional Intrusion Detection Systems (IDSs) struggle to deal effectively with the dynamics and complexity of emerging threats. To solve these problems, a lightweight vehicular network intrusion detection framework based on Dynamic Feature Fusion Federated Learning (DFF-FL) is proposed. The proposed framework employs a two-stream architecture, including a transformer-augmented autoencoder for abstract feature extraction and a lightweight CNN-LSTM-Attention model for preserving temporal and local patterns. Compared with the traditional federated learning framework, DFF-FL first dynamically fuses the deep feature representations of the nodes through the transformer attention module to realize fine-grained cross-node feature interaction in a heterogeneous data environment, thereby eliminating the performance degradation caused by differences in feature distribution. Second, based on each node's final autoencoder reconstruction loss L_AE(X, X̂), an adaptive weight adjustment mechanism lets the nodes with excellent performance dominate the global model update, which significantly improves robustness against complex attacks. Experimental evaluation on the CAN-Hacking dataset shows that the proposed intrusion detection system achieves an F1 score above 99% with only 1.11 MB of memory and 81,863 trainable parameters, while maintaining low computational overhead and ensuring data privacy, making it well suited to edge device deployment.
(This article belongs to the Section Sensor Networks)
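The loss-driven adaptive aggregation that the DFF-FL abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function name, the softmax weighting form, and the temperature parameter are all assumptions; the paper's fusion additionally operates on transformer features rather than raw weights.

```python
import numpy as np

def adaptive_aggregate(node_weights, node_losses, temperature=1.0):
    """Fuse node models, favoring nodes with lower reconstruction loss.

    Each node's contribution is a softmax over negative losses, so
    well-performing (low-loss) nodes dominate the global update.
    """
    losses = np.asarray(node_losses, dtype=float)
    scores = np.exp(-losses / temperature)
    alphas = scores / scores.sum()          # aggregation coefficients, sum to 1
    global_w = sum(a * w for a, w in zip(alphas, node_weights))
    return global_w, alphas

# Toy example: three nodes, scalar "models"; node 0 has the lowest loss.
weights = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
losses = [0.1, 1.0, 2.0]
gw, alphas = adaptive_aggregate(weights, losses)
```

The node with the lowest reconstruction loss receives the largest coefficient, so the fused global model ends up closest to that node's parameters.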
22 pages, 9651 KiB  
Article
Self-Supervised Visual Tracking via Image Synthesis and Domain Adversarial Learning
by Gu Geng, Sida Zhou, Jianing Tang, Xinming Zhang, Qiao Liu and Di Yuan
Sensors 2025, 25(15), 4621; https://doi.org/10.3390/s25154621 - 25 Jul 2025
Abstract
With the widespread use of sensors in applications such as autonomous driving and intelligent security, stable and efficient target tracking from diverse sensor data has become increasingly important. Self-supervised visual tracking has attracted increasing attention due to its potential to eliminate reliance on costly manual annotations; however, existing methods often train on incomplete object representations, resulting in inaccurate localization during inference. In addition, current methods typically struggle when applied to deep networks. To address these limitations, we propose a novel self-supervised tracking framework based on image synthesis and domain adversarial learning. We first construct a large-scale database of real-world target objects, then synthesize training video pairs by randomly inserting these targets into background frames while applying geometric and appearance transformations to simulate realistic variations. To reduce the domain shift introduced by synthetic content, we incorporate a domain classification branch after feature extraction and adopt domain adversarial training to encourage feature alignment between the real and synthetic domains. Experimental results on five standard tracking benchmarks demonstrate that our method significantly enhances tracking accuracy compared to existing self-supervised approaches without introducing any additional labeling cost. The proposed framework not only ensures complete target coverage during training but also shows strong scalability to deeper network architectures, offering a practical and effective solution for real-world tracking applications.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
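The crop-and-paste synthesis step this abstract describes can be sketched as follows. This is a hypothetical minimal version with grayscale frames and translation-only motion; the paper also applies geometric and appearance transformations, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_pair(background, target, max_shift=4):
    """Build a (template, search) training pair by pasting the same target
    into a background at two nearby positions, mimicking frame-to-frame
    motion. The known paste positions serve as free supervision."""
    h, w = target.shape
    H, W = background.shape
    y = int(rng.integers(0, H - h - max_shift))
    x = int(rng.integers(0, W - w - max_shift))
    dy, dx = (int(v) for v in rng.integers(0, max_shift + 1, size=2))
    template, search = background.copy(), background.copy()
    template[y:y + h, x:x + w] = target
    search[y + dy:y + dy + h, x + dx:x + dx + w] = target
    return template, search, (y, x), (y + dy, x + dx)

bg = np.zeros((64, 64))
tgt = np.ones((8, 8))
tmpl, srch, p0, p1 = synthesize_pair(bg, tgt)
```

Because the paste positions are known exactly, the full object extent is covered in the label, which is the property the paper contrasts against training on incomplete object representations.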

25 pages, 2887 KiB  
Article
Federated Learning Based on an Internet of Medical Things Framework for a Secure Brain Tumor Diagnostic System: A Capsule Networks Application
by Roman Rodriguez-Aguilar, Jose-Antonio Marmolejo-Saucedo and Utku Köse
Mathematics 2025, 13(15), 2393; https://doi.org/10.3390/math13152393 - 25 Jul 2025
Abstract
Artificial intelligence (AI) has already played a significant role in the healthcare sector, particularly in image-based medical diagnosis. Deep learning models have produced satisfactory and useful results for accurate decision-making. Among the various types of medical images, magnetic resonance imaging (MRI) is frequently utilized in deep learning applications to analyze detailed structures and organs in the body, using advanced intelligent software. However, challenges related to performance and data privacy often arise when using medical data from patients and healthcare institutions. To address these issues, new approaches have emerged, such as federated learning. This technique ensures the secure exchange of sensitive patient and institutional data. It enables machine learning or deep learning algorithms to establish a client–server relationship, whereby specific parameters are securely shared between models while maintaining the integrity of the learning tasks being executed. Federated learning has been successfully applied in medical settings, including diagnostic applications involving medical images such as MRI data. This research introduces an analytical intelligence system based on an Internet of Medical Things (IoMT) framework that employs federated learning to provide a safe and effective diagnostic solution for brain tumor identification. By utilizing specific brain MRI datasets, the model enables multiple local capsule networks (CapsNet) to achieve improved classification results. The average accuracy rate of the CapsNet model exceeds 97%. The precision rate indicates that the CapsNet model performs well in accurately predicting true classes. Additionally, the recall findings suggest that this model is effective in detecting the target classes of meningiomas, pituitary tumors, and gliomas. The integration of these components into an analytical intelligence system that supports the work of healthcare personnel is the main contribution of this work. Evaluations have shown that this approach is effective for diagnosing brain tumors while ensuring data privacy and security. Moreover, it represents a valuable tool for enhancing the efficiency of the medical diagnostic process.
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)

25 pages, 16941 KiB  
Article
KAN-Sense: Keypad Input Recognition via CSI Feature Clustering and KAN-Based Classifier
by Minseok Koo and Jaesung Park
Electronics 2025, 14(15), 2965; https://doi.org/10.3390/electronics14152965 - 24 Jul 2025
Abstract
Wi-Fi sensing leverages variations in CSI (channel state information) to infer human activities in a contactless and low-cost manner, with growing applications in smart homes, healthcare, and security. While deep learning has advanced macro-motion sensing tasks, micro-motion sensing such as keypad stroke recognition remains underexplored due to subtle inter-class CSI variations and significant intra-class variance. These challenges make it difficult for existing deep learning models, which typically rely on fully connected MLPs, to accurately recognize keypad inputs. To address this issue, we propose a novel approach that combines a discriminative feature extractor with a Kolmogorov–Arnold Network (KAN)-based classifier. The combined model is trained to reduce intra-class variability by clustering features around class-specific centers. The KAN classifier learns nonlinear spline functions to efficiently delineate the complex decision boundaries between different keypad inputs with fewer parameters. To validate our method, we collect a CSI dataset with low-cost Wi-Fi devices (ESP8266 and Raspberry Pi 4) in a real-world keypad sensing environment. Experimental results verify the effectiveness and practicality of our method: it outperforms existing approaches in sensing accuracy while requiring fewer parameters.
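The core KAN idea the abstract relies on, replacing fixed activations with learnable univariate functions on network edges, can be illustrated with a piecewise-linear stand-in. This is a hypothetical sketch: actual KANs use B-spline bases plus a residual base activation, and the class and parameter names here are invented.

```python
import numpy as np

class SplineEdge:
    """One KAN edge: a univariate piecewise-linear spline whose knot
    values are the learnable parameters."""
    def __init__(self, n_knots=8, x_min=-1.0, x_max=1.0):
        self.knots_x = np.linspace(x_min, x_max, n_knots)
        self.knots_y = np.zeros(n_knots)    # learnable parameters

    def __call__(self, x):
        # Evaluate the spline by linear interpolation between knots.
        return np.interp(x, self.knots_x, self.knots_y)

edge = SplineEdge()
edge.knots_y = edge.knots_x ** 2            # pretend training fit x -> x^2
y = edge(np.array([0.0, 0.5, -0.5]))
```

Because each edge carries only a handful of knot values, complex decision boundaries can be expressed with far fewer parameters than stacked MLP layers, which is the efficiency argument made above.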

19 pages, 2893 KiB  
Article
Reactive Power Optimization of a Distribution Network Based on Graph Security Reinforcement Learning
by Xu Zhang, Xiaolin Gui, Pei Sun, Xing Li, Yuan Zhang, Xiaoyu Wang, Chaoliang Dang and Xinghua Liu
Appl. Sci. 2025, 15(15), 8209; https://doi.org/10.3390/app15158209 - 23 Jul 2025
Abstract
With the increasing integration of renewable energy, the secure operation of distribution networks faces significant challenges, such as voltage limit violations and increased power losses. To address the issue of reactive power and voltage security under renewable generation uncertainty, this paper proposes a graph-based security reinforcement learning method. First, a graph-enhanced neural network is designed to extract both topological and node-level features from the distribution network. Then, a primal-dual approach is introduced to incorporate voltage security constraints into the agent's critic network by constructing a cost critic to guide safe policy learning. Finally, a dual-critic framework is adopted to train the actor network and derive an optimal policy. Experiments conducted on real load profiles demonstrated that the proposed method reduced the voltage violation rate to 0%, compared to 4.92% with the Deep Deterministic Policy Gradient (DDPG) algorithm and 5.14% with the Twin Delayed DDPG (TD3) algorithm. Moreover, the average node voltage deviation was effectively controlled within 0.0073 per unit.
(This article belongs to the Special Issue IoT Technology and Information Security)
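The primal-dual mechanism described above, folding the voltage constraint into training via a Lagrange multiplier, reduces to a simple dual ascent rule. This is a hypothetical sketch with scalar reward and cost; in the paper both are estimated by learned critic networks, and the step size here is an assumption.

```python
def primal_dual_step(reward, cost, cost_limit, lam, lr_lambda=0.1):
    """One dual update of a primal-dual safe-RL scheme.

    The actor maximizes reward - lam * cost; the multiplier lam rises
    while the safety cost (e.g., voltage violations) exceeds its limit
    and decays toward zero otherwise (projected gradient ascent)."""
    lagrangian = reward - lam * cost
    lam = max(0.0, lam + lr_lambda * (cost - cost_limit))
    return lagrangian, lam

lam = 0.0
for _ in range(50):   # constraint violated: cost 0.5 > limit 0.1
    _, lam = primal_dual_step(reward=1.0, cost=0.5, cost_limit=0.1, lam=lam)
lam_violated = lam

lam = 1.0
for _ in range(50):   # constraint satisfied: cost 0.0 < limit 0.1
    _, lam = primal_dual_step(reward=1.0, cost=0.0, cost_limit=0.1, lam=lam)
lam_satisfied = lam
```

The multiplier grows while violations persist, penalizing unsafe actions ever more heavily, and relaxes once the policy operates inside the voltage envelope.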

24 pages, 2151 KiB  
Article
Federated Learning-Based Intrusion Detection in IoT Networks: Performance Evaluation and Data Scaling Study
by Nurtay Albanbay, Yerlan Tursynbek, Kalman Graffi, Raissa Uskenbayeva, Zhuldyz Kalpeyeva, Zhastalap Abilkaiyr and Yerlan Ayapov
J. Sens. Actuator Netw. 2025, 14(4), 78; https://doi.org/10.3390/jsan14040078 - 23 Jul 2025
Abstract
This paper presents a large-scale empirical study aimed at identifying the optimal local deep learning model and data volume for deploying intrusion detection systems (IDS) on resource-constrained IoT devices using federated learning (FL). While previous studies on FL-based IDS for IoT have primarily focused on maximizing accuracy, they often overlook the computational limitations of IoT hardware and the feasibility of local model deployment. In this work, three deep learning architectures—a deep neural network (DNN), a convolutional neural network (CNN), and a hybrid CNN+BiLSTM—are trained using the CICIoT2023 dataset within a federated learning environment simulating up to 150 IoT devices. The study evaluates how detection accuracy, convergence speed, and inference costs (latency and model size) vary across different local data scales and model complexities. Results demonstrate that CNN achieves the best trade-off between detection performance and computational efficiency, reaching ~98% accuracy with low latency and a compact model footprint. The more complex CNN+BiLSTM architecture yields slightly higher accuracy (~99%) at a significantly greater computational cost. Deployment tests on Raspberry Pi 5 devices confirm that all three models can be effectively implemented on real-world IoT edge hardware. These findings offer practical guidance for researchers and practitioners in selecting scalable and lightweight IDS models suitable for real-world federated IoT deployments, supporting secure and efficient anomaly detection in urban IoT networks.
(This article belongs to the Special Issue Federated Learning: Applications and Future Directions)
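For reference, the baseline aggregation rule in FL deployments like the one above is FedAvg, which weights each client's model update by its local sample count. A minimal sketch, with flat parameter vectors standing in for full models:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: the server averages client parameters weighted
    by each client's local sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Three simulated IoT clients with different amounts of local data.
clients = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]
sizes = [100, 100, 200]
global_model = fedavg(clients, sizes)
```

Clients holding more data pull the global model further toward their parameters, which is why the data-scaling question studied in this paper directly affects convergence behavior.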

10 pages, 637 KiB  
Proceeding Paper
Improving Industrial Control System Cybersecurity with Time-Series Prediction Models
by Velizar Varbanov and Tatiana Atanasova
Eng. Proc. 2025, 101(1), 4; https://doi.org/10.3390/engproc2025101004 - 22 Jul 2025
Abstract
Traditional security detection methods struggle to identify zero-day attacks in Industrial Control Systems (ICSs), particularly within critical infrastructures (CIs) integrated with the Industrial Internet of Things (IIoT). These attacks exploit unknown vulnerabilities, leveraging the complexity of physical and digital system interconnections, making them difficult to detect. The integration of legacy ICS networks with modern computing and networking technologies has expanded the attack surface, increasing susceptibility to cyber threats. Anomaly detection systems play a crucial role in safeguarding these infrastructures by identifying deviations from normal operations. This study investigates the effectiveness of deep learning-based anomaly detection models in revealing operational anomalies that could indicate potential cyber-attacks. We implemented and evaluated a hybrid deep learning architecture combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to analyze ICS telemetry data. The CNN-LSTM model excels in identifying time-dependent anomalies and enables near real-time detection of cyber-attacks, significantly improving security monitoring capabilities for IIoT-integrated critical infrastructures.
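The detection principle here, flagging telemetry samples whose prediction error is anomalously large, can be sketched with a trivial forecaster in place of the CNN-LSTM. The moving-average stand-in, window size, and 3-sigma threshold are all assumptions for illustration, not the paper's model.

```python
import numpy as np

def detect_anomalies(series, window=5, k=3.0):
    """Flag points whose one-step prediction error exceeds k sigma.

    The 'model' is a moving-average forecast over the previous `window`
    samples; a trained predictor would replace it in practice."""
    preds = np.convolve(series, np.ones(window) / window, mode="valid")[:-1]
    actual = series[window:]
    errors = np.abs(actual - preds)
    thresh = errors.mean() + k * errors.std()
    return np.where(errors > thresh)[0] + window   # indices into `series`

telemetry = np.sin(np.linspace(0, 20, 200))        # normal sensor behavior
telemetry[150] += 5.0                              # injected attack spike
anoms = detect_anomalies(telemetry)
```

Any sample the predictor cannot explain from recent history is reported, which is how time-dependent anomalies surface without attack signatures.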

38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated from these datasets using a cycle-consistent generative adversarial network. The results showed that FGFNet achieved an average classification error rate of 0.3% on both the VERA and PLUS datasets, demonstrating superior performance compared to existing state-of-the-art spoof detection methods.

19 pages, 8896 KiB  
Article
Future Residential Water Use and Management Under Climate Change Using Bayesian Neural Networks
by Young-Ho Seo, Jang Hyun Sung, Joon-Seok Park, Byung-Sik Kim and Junehyeong Park
Water 2025, 17(15), 2179; https://doi.org/10.3390/w17152179 - 22 Jul 2025
Abstract
This study projects future Residential Water Use (RWU) under climate change scenarios using a Bayesian Neural Network (BNN) model that quantifies the relationship between observed temperatures and RWU. Eighteen Global Climate Models (GCMs) under the Shared Socioeconomic Pathway 5–8.5 (SSP5–8.5) scenario were used to assess the uncertainties across these models. The findings indicate that RWU in the Republic of Korea (ROK) is closely linked to temperature changes, with significant increases projected in the distant future (F3), especially during summer. Under the SSP5–8.5 scenario, RWU is expected to increase by up to 10.3% by the late 21st century (2081–2100) compared to the historical baseline. The model achieved a root mean square error (RMSE) of 11,400 m3/month, demonstrating reliable predictive performance. Unlike conventional deep learning models, the BNN provides probabilistic forecasts with uncertainty bounds, enhancing its suitability for climate-sensitive resource planning. This study also projects inflows to the Paldang Dam, revealing an overall increase in future water availability. However, winter water security may decline due to decreased inflow and minimal changes in RWU. This study suggests enhancing summer precipitation storage while considering downstream flood risks. Demand management strategies are recommended for addressing future winter water security challenges. This research highlights the importance of projecting RWU under climate change scenarios and emphasizes the need for strategic water resource management in the ROK.
(This article belongs to the Section Water and Climate Change)

15 pages, 2900 KiB  
Article
A Three-Dimensional Convolutional Neural Network for Dark Web Traffic Classification Based on Multi-Channel Image Deep Learning
by Junwei Li, Zhisong Pan and Kaolin Jiang
Computers 2025, 14(8), 295; https://doi.org/10.3390/computers14080295 - 22 Jul 2025
Abstract
Dark web traffic classification is an important research direction in cybersecurity; however, traditional classification methods have many limitations. Although deep learning architectures like CNN and LSTM, as well as multi-structural fusion frameworks, have demonstrated partial success, they remain constrained by shallow feature representation, localized decision boundaries, and poor generalization capacity. To improve the prediction accuracy and classification precision of dark web traffic, we propose a novel dark web traffic classification model integrating multi-channel image deep learning and a three-dimensional convolutional neural network (3D-CNN). The proposed framework leverages spatial–temporal feature fusion to enhance discriminative capability, while the 3D-CNN structure effectively captures complex traffic patterns across multiple dimensions. The experimental results show that, compared to common 2D-CNN and 1D-CNN classification models, the proposed method improves classification accuracy by 5.1% and 3.3%, respectively, while maintaining a smaller total number of parameters and feature recognition parameters, effectively reducing the computational complexity of the model. Comparative experiments further validate the model's superiority in accuracy and computational efficiency over state-of-the-art methods, offering a promising solution for dark web traffic monitoring and security applications.
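A minimal version of the multi-channel image construction that feeds such a 3D-CNN might look like the following. The byte-block-to-channel layout, image size, and normalization are assumptions for illustration; the abstract does not specify this exact mapping.

```python
import numpy as np

def traffic_to_multichannel(payload_bytes, side=8, channels=3):
    """Map a raw traffic byte string to a (channels, side, side) image
    stack, the input format a 3D-CNN convolves across both spatial
    dimensions and channel depth. Consecutive byte blocks become
    channels; short flows are zero-padded, long ones truncated."""
    need = channels * side * side
    buf = np.frombuffer(payload_bytes[:need], dtype=np.uint8)
    buf = np.pad(buf, (0, need - buf.size))
    return (buf.reshape(channels, side, side) / 255.0).astype(np.float32)

img = traffic_to_multichannel(bytes(range(200)))
```

Stacking byte blocks as channels is what gives a 3D kernel something to slide across in the depth direction, capturing correlations between successive segments of a flow.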

24 pages, 637 KiB  
Review
Deep Learning Network Selection and Optimized Information Fusion for Enhanced COVID-19 Detection: A Literature Review
by Olga Adriana Caliman Sturdza, Florin Filip, Monica Terteliu Baitan and Mihai Dimian
Diagnostics 2025, 15(14), 1830; https://doi.org/10.3390/diagnostics15141830 - 21 Jul 2025
Abstract
The rapid spread of COVID-19 increased the need for speedy diagnostic tools, which led scientists to conduct extensive research on deep learning (DL) applications that use chest imaging, such as chest X-ray (CXR) and computed tomography (CT). This review examines the development and performance of DL architectures, notably convolutional neural networks (CNNs) and emerging vision transformers (ViTs), in identifying COVID-19-related lung abnormalities. Individual ResNet architectures, along with CNN models, demonstrate strong diagnostic performance through transfer learning; however, ViTs provide better performance, with improved readability and reduced data requirements. Multimodal diagnostic systems now incorporate alternative methods in addition to imaging, using lung ultrasounds, clinical data, and cough sound evaluation. Information fusion techniques, which operate at the data, feature, and decision levels, enhance diagnostic performance. However, progress in COVID-19 detection is hindered by ongoing issues stemming from restricted and non-uniform datasets, as well as domain differences in image standards and complications with both diagnostic overfitting and poor generalization capabilities. Recent developments in COVID-19 diagnosis involve constructing expansive multi-source datasets, creating AI algorithms oriented to clinical processes, and implementing distributed learning protocols to ensure information security and system stability. While deep learning-based COVID-19 detection systems show strong potential for clinical application, broader validation, regulatory approvals, and continuous adaptation remain essential for their successful deployment and for preparing future pandemic response strategies.

16 pages, 10129 KiB  
Article
PestOOD: An AI-Enabled Solution for Advancing Grain Security via Out-of-Distribution Pest Detection
by Jida Tian, Chuanyang Ma, Jiangtao Li and Huiling Zhou
Electronics 2025, 14(14), 2868; https://doi.org/10.3390/electronics14142868 - 18 Jul 2025
Abstract
Detecting stored-grain pests on the surface of the grain pile plays an important role in integrated pest management (IPM), which is crucial for grain security. Recently, numerous deep learning-based pest detection methods have been proposed. However, a critical limitation of existing methods is their inability to detect out-of-distribution (OOD) categories that are unseen during training. When encountering such objects, these methods often misclassify them as in-distribution (ID) categories. To address this challenge, we propose a one-stage framework named PestOOD for out-of-distribution stored-grain pest detection via flow-based feature reconstruction. Specifically, we propose a novel Flow-Based OOD Feature Generation (FOFG) module that generates OOD features for detector training via feature reconstruction. This helps the detector learn to recognize OOD objects more effectively. Additionally, to prevent network overfitting that may lead to an excessive focus on ID feature extraction, we propose a Noisy DropBlock (NDB) module and integrate it into the backbone network. Finally, to ensure effective network convergence, a Stage-Wise Training Strategy (STS) is proposed. We conducted extensive experiments on our previously established multi-class stored-grain pest dataset. The results show that our proposed PestOOD demonstrates superior performance over state-of-the-art methods, providing an effective AI-enabled solution to ensure grain security.

24 pages, 2173 KiB  
Article
A Novel Ensemble of Deep Learning Approach for Cybersecurity Intrusion Detection with Explainable Artificial Intelligence
by Abdullah Alabdulatif
Appl. Sci. 2025, 15(14), 7984; https://doi.org/10.3390/app15147984 - 17 Jul 2025
Abstract
In today’s increasingly interconnected digital world, cyber threats have grown in frequency and sophistication, making intrusion detection systems a critical component of modern cybersecurity frameworks. Traditional IDS methods, often based on static signatures and rule-based systems, are no longer sufficient to detect and respond to complex and evolving attacks. To address these challenges, Artificial Intelligence and machine learning have emerged as powerful tools for enhancing the accuracy, adaptability, and automation of IDS solutions. This study presents a novel, hybrid ensemble learning-based intrusion detection framework that integrates deep learning and traditional ML algorithms with explainable artificial intelligence for real-time cybersecurity applications. The proposed model combines an Artificial Neural Network and Support Vector Machine as base classifiers and employs a Random Forest as a meta-classifier to fuse predictions, improving detection performance. Recursive Feature Elimination is utilized for optimal feature selection, while SHapley Additive exPlanations (SHAP) provide both global and local interpretability of the model’s decisions. The framework is deployed using a Flask-based web interface in the Amazon Elastic Compute Cloud environment, capturing live network traffic and offering sub-second inference with visual alerts. Experimental evaluations using the NSL-KDD dataset demonstrate that the ensemble model outperforms individual classifiers, achieving a high accuracy of 99.40%, along with excellent precision, recall, and F1-score metrics. This research not only enhances detection capabilities but also bridges the trust gap in AI-powered security systems through transparency. The solution shows strong potential for application in critical domains such as finance, healthcare, industrial IoT, and government networks, where real-time and interpretable threat detection is vital.
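The ANN + SVM base / Random Forest meta arrangement described above corresponds to standard stacking. A minimal scikit-learn sketch on synthetic data, where all hyperparameters and the dataset are assumptions rather than the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier  # stands in for the ANN
from sklearn.svm import SVC

# Synthetic stand-in for a labeled intrusion dataset such as NSL-KDD.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    # Random Forest fuses the base classifiers' predicted probabilities.
    final_estimator=RandomForestClassifier(n_estimators=100, random_state=0),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-classifier learns when to trust each base model, which is what "fusing predictions" amounts to in practice; interpretability layers such as SHAP are then applied on top of the fitted ensemble.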

40 pages, 2206 KiB  
Review
Toward Generative AI-Based Intrusion Detection Systems for the Internet of Vehicles (IoV)
by Isra Mahmoudi, Djallel Eddine Boubiche, Samir Athmani, Homero Toral-Cruz and Freddy I. Chan-Puc
Future Internet 2025, 17(7), 310; https://doi.org/10.3390/fi17070310 - 17 Jul 2025
Viewed by 398
Abstract
The increasing complexity and scale of Internet of Vehicles (IoV) networks pose significant security challenges, necessitating the development of advanced intrusion detection systems (IDS). Traditional IDS approaches, such as rule-based and signature-based methods, are often inadequate in detecting novel and sophisticated attacks due to their limited adaptability and dependency on predefined patterns. To overcome these limitations, machine learning (ML) and deep learning (DL)-based IDS have been introduced, offering better generalization and the ability to learn from data. However, these models can still struggle with zero-day attacks, require large volumes of labeled data, and may be vulnerable to adversarial examples. In response to these challenges, Generative AI-based IDS—leveraging models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers—have emerged as promising solutions that offer enhanced adaptability, synthetic data generation for training, and improved detection capabilities for evolving threats. This survey provides an overview of IoV architecture, vulnerabilities, and classical IDS techniques while focusing on the growing role of Generative AI in strengthening IoV security. It discusses the current landscape, highlights the key challenges, and outlines future research directions aimed at building more resilient and intelligent IDS for the IoV ecosystem. Full article
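One motivation for the generative models surveyed above is augmenting scarce attack traffic for IDS training. As a minimal stand-in for a GAN or VAE, the sketch below fits a Gaussian mixture to the rare attack class and samples synthetic records; the dataset, class imbalance, and component count are illustrative assumptions, and a real deployment would substitute a trained generative network.

```python
# Hedged sketch of generative data augmentation for an imbalanced IDS
# dataset. A GaussianMixture stands in for the GAN/VAE generators
# discussed in the survey; all parameters are placeholder assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture

# Imbalanced stand-in: ~5% of flows are attacks.
X, y = make_classification(n_samples=500, weights=[0.95, 0.05],
                           random_state=0)
attacks = X[y == 1]  # the rare attack class

# Fit a generative model to attack records and sample synthetic ones.
gm = GaussianMixture(n_components=2, random_state=0).fit(attacks)
synthetic, _ = gm.sample(200)

# Augmented training set with a rebalanced attack class.
X_aug = np.vstack([X, synthetic])
y_aug = np.concatenate([y, np.ones(200, dtype=int)])
print(X_aug.shape)
```

The same workflow applies unchanged when the sampler is a trained GAN generator or VAE decoder: fit on minority-class records, sample, and concatenate before training the detector.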

18 pages, 533 KiB  
Article
Comparative Analysis of Deep Learning Models for Intrusion Detection in IoT Networks
by Abdullah Waqas, Sultan Daud Khan, Zaib Ullah, Mohib Ullah and Habib Ullah
Computers 2025, 14(7), 283; https://doi.org/10.3390/computers14070283 - 17 Jul 2025
Viewed by 239
Abstract
The Internet of Things (IoT) holds transformative potential in fields such as power grid optimization, defense networks, and healthcare. However, the constrained processing capacities and resource limitations of IoT networks make them especially susceptible to cyber threats. This study addresses the problem of detecting intrusions in IoT environments by evaluating the performance of deep learning (DL) models under different data and algorithmic conditions. We conducted a comparative analysis of three widely used DL models—Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Bidirectional LSTM (biLSTM)—across four benchmark IoT intrusion detection datasets: BoTIoT, CiCIoT, ToNIoT, and WUSTL-IIoT-2021. Each model was assessed under balanced and imbalanced dataset configurations and evaluated using three loss functions (cross-entropy, focal loss, and dual focal loss). By analyzing model efficacy across these datasets, we highlight the generalizability and adaptability to varied data characteristics that are essential for real-world applications. The results demonstrate that the CNN trained with the cross-entropy loss function consistently outperforms the other models, particularly on balanced datasets. On the other hand, LSTM and biLSTM show strong potential for temporal modeling, but their performance is highly dependent on the characteristics of the dataset. By analyzing the performance of multiple DL models across diverse datasets, this research provides actionable insights for developing secure, interpretable IoT intrusion detection systems that can meet real-world deployment challenges. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
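The loss functions compared in the study above differ in how they weight hard versus easy examples. A minimal sketch of binary focal loss illustrates the idea: the modulating factor (1 - p_t)^gamma down-weights well-classified samples, which matters on imbalanced intrusion datasets. The gamma and alpha defaults follow common practice and are assumptions, not the paper's settings.

```python
# Hedged sketch of binary focal loss. With gamma=0 and alpha=0.5 it
# reduces to half the standard cross-entropy, which is the baseline
# loss in the comparison above.
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """p: predicted probability of class 1; y: binary labels (0/1)."""
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    # Modulating factor (1 - p_t)^gamma shrinks the loss on easy examples.
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

y = np.array([1, 1, 0, 0])
p = np.array([0.9, 0.6, 0.2, 0.05])
print(round(focal_loss(p, y), 4))
```

Dual focal loss, the third variant evaluated, further adjusts the treatment of the negative class; the same scaffold extends to it by adding a second modulating term.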
