Search Results (4,663)

Search Parameters:
Keywords = network attack

31 pages, 3840 KB  
Review
Efficient and Secure GANs: A Survey on Privacy-Preserving and Resource-Aware Models
by Niovi Efthymia Apostolou, Elpida Vasiliki Balourdou, Maria Mouratidou, Eleni Tsalera, Ioannis Voyiatzis, Andreas Papadakis and Maria Samarakou
Appl. Sci. 2025, 15(20), 11207; https://doi.org/10.3390/app152011207 (registering DOI) - 19 Oct 2025
Abstract
Generative Adversarial Networks (GANs) generate synthetic content to support applications such as data augmentation, image-to-image translation, and training models where data availability is limited. Nevertheless, their broader deployment is constrained by limitations in data availability, high computational and energy demands, as well as privacy and security concerns. These factors restrict their scalability and integration in real-world applications. This survey provides a systematic review of research aimed at addressing these challenges. Techniques such as few-shot learning, consistency regularization, and advanced data augmentation are examined to address data scarcity. Approaches designed to reduce computational and energy costs, including hardware-based acceleration and model optimization, are also considered. In addition, strategies to improve privacy and security, such as privacy-preserving GAN architectures and defense mechanisms against adversarial attacks, are analyzed. By organizing the literature into these thematic categories, the review highlights available solutions, their trade-offs, and remaining open issues. Our findings underline the growing role of GANs in artificial intelligence, while also emphasizing the importance of efficient, sustainable, and secure designs. This work not only consolidates current knowledge but also lays the groundwork for future research. Full article
(This article belongs to the Special Issue Big Data Analytics and Deep Learning for Predictive Maintenance)
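One of the data-scarcity techniques the survey examines, consistency regularization, is compact enough to illustrate directly. The following is a minimal, generic PyTorch sketch rather than code from any surveyed model: `D` is assumed to return raw logits, and `augment` stands in for whatever label-preserving transform suits the data.

```python
import torch
import torch.nn.functional as F

def augment(x: torch.Tensor) -> torch.Tensor:
    """Cheap semantics-preserving augmentation: random horizontal flip plus a small shift."""
    if torch.rand(()) < 0.5:
        x = torch.flip(x, dims=[-1])
    shift = int(torch.randint(-2, 3, ()).item())
    return torch.roll(x, shifts=shift, dims=-1)

def d_loss_with_consistency(D, real, fake, lambda_cr=10.0):
    """Non-saturating discriminator loss plus a consistency-regularization term that
    penalizes the discriminator for changing its verdict under augmentation."""
    d_real, d_fake = D(real), D(fake.detach())
    adv = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    cr = (F.mse_loss(D(augment(real)), d_real.detach())
          + F.mse_loss(D(augment(fake.detach())), d_fake.detach()))
    return adv + lambda_cr * cr
```

The extra `cr` term keeps the discriminator's judgement stable under small augmentations, which is one of the ways the surveyed models train on fewer real samples.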
44 pages, 8752 KB  
Article
DataSense: A Real-Time Sensor-Based Benchmark Dataset for Attack Analysis in IIoT with Multi-Objective Feature Selection
by Amir Firouzi, Sajjad Dadkhah, Sebin Abraham Maret and Ali A. Ghorbani
Electronics 2025, 14(20), 4095; https://doi.org/10.3390/electronics14204095 (registering DOI) - 19 Oct 2025
Abstract
The widespread integration of Internet-connected devices into industrial environments has enhanced connectivity and automation but has also increased the exposure of industrial cyber–physical systems to security threats. Detecting anomalies is essential for ensuring operational continuity and safeguarding critical assets, yet the dynamic, real-time nature of such data poses challenges for developing effective defenses. This paper introduces DataSense, a comprehensive dataset designed to advance security research in industrial networked environments. DataSense contains synchronized sensor and network stream data, capturing interactions among diverse industrial sensors, commonly used connected devices, and network equipment, enabling vulnerability studies across heterogeneous industrial setups. The dataset was generated through the controlled execution of 50 realistic attacks spanning seven major categories: reconnaissance, denial of service, distributed denial of service, web exploitation, man-in-the-middle, brute force, and malware. This process produced a balanced mix of benign and malicious traffic that reflects real-world conditions. To enhance its utility, we introduce an original feature selection approach that identifies features most relevant to improving detection rates while minimizing resource usage. Comprehensive experiments with a broad spectrum of machine learning and deep learning models validate the dataset’s applicability, making DataSense a valuable resource for developing robust systems for detecting anomalies and preventing intrusions in real time within industrial environments. Full article
(This article belongs to the Special Issue AI-Driven IoT: Beyond Connectivity, Toward Intelligence)
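The abstract does not spell out the multi-objective feature-selection procedure, so the sketch below is only a generic illustration of the stated goal: keep the features most relevant to detection while charging a cost for every extra feature as a proxy for resource usage. The mutual-information criterion, the linear cost penalty, and the function name are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_features(X: np.ndarray, y: np.ndarray, cost_weight: float = 0.05, k_max: int = 20):
    """Greedy multi-objective selection: maximize label relevance (mutual information)
    while charging a fixed cost for every additional feature kept."""
    relevance = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(relevance)[::-1]                # most relevant first
    selected, best_score = [], -np.inf
    for idx in order[:k_max]:
        score = relevance[selected + [idx]].sum() - cost_weight * (len(selected) + 1)
        if score <= best_score:                        # adding this feature no longer pays off
            break
        selected.append(int(idx))
        best_score = score
    return selected
```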
25 pages, 87854 KB  
Article
Spatiotemporal Feature Correlation with Feature Space Transformation for Intrusion Detection
by Cheng Zhang, Pengbin Hu and Lingling Tan
Appl. Sci. 2025, 15(20), 11168; https://doi.org/10.3390/app152011168 - 17 Oct 2025
Abstract
In recent years, with the continuous development of information technology, network security issues have become increasingly prominent. Intrusion detection has garnered significant attention in the field of network security protection due to its ability to detect anomalies in a timely manner. However, existing intrusion detection methods often fail to effectively capture spatiotemporal correlations in traffic and struggle with imbalanced, high-dimensional feature spaces—problems that become even more pronounced under complex network environments—ultimately leading to low identification accuracy and high false-positive rates. To address these challenges, this paper proposes a spatiotemporal correlation-based intrusion detection method that utilizes feature space transformation and Euclidean distance. Specifically, the method first considers the relationship between the characteristics of different operating systems and attack behaviors through feature space transformation and integration. Then, it constructs a graph structure between samples using Euclidean distance and captures the spatiotemporal correlations between samples by combining graph convolutional networks with bidirectional gated recurrent unit networks. Through this design, the model can deeply mine the spatial and temporal features of network traffic, thereby improving classification accuracy and detection efficiency for network attacks. Experimental results show that the proposed model significantly outperforms existing intrusion detection approaches across multiple evaluation metrics, including accuracy, weighted precision, weighted recall, and weighted F1 score. Full article
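The two core steps of the method (a graph built from Euclidean distances between samples, and graph convolution combined with bidirectional GRUs) can be sketched compactly. This is a minimal reconstruction from the abstract, not the authors' code; the layer sizes, the neighbour count `k`, and the assumption that samples arrive in temporal order are illustrative choices.

```python
import torch
import torch.nn as nn

def knn_adjacency(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Symmetrically normalized adjacency built from pairwise Euclidean distances (k nearest neighbours)."""
    dist = torch.cdist(x, x)                               # (N, N) Euclidean distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # nearest neighbours, excluding self
    A = torch.zeros_like(dist)
    A.scatter_(1, idx, 1.0)
    A = ((A + A.t()) > 0).float() + torch.eye(x.size(0))   # symmetrize and add self-loops
    d_inv_sqrt = torch.diag(A.sum(1).pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt                     # D^-1/2 (A + I) D^-1/2

class GCNBiGRU(nn.Module):
    def __init__(self, in_dim: int, hid: int = 64, classes: int = 2):
        super().__init__()
        self.w = nn.Linear(in_dim, hid)                    # single GCN layer: A_hat X W
        self.gru = nn.GRU(hid, hid, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hid, classes)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        h = torch.relu(a_hat @ self.w(x))                  # spatial correlation across samples
        seq, _ = self.gru(h.unsqueeze(0))                  # temporal correlation along sample order
        return self.head(seq.squeeze(0))                   # per-sample class logits
```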
30 pages, 5198 KB  
Article
Security Authentication Scheme for Vehicle-to-Everything Computing Task Offloading Environments
by Yubao Liu, Chenhao Li, Quanchao Sun and Haiyue Jiang
Sensors 2025, 25(20), 6428; https://doi.org/10.3390/s25206428 - 17 Oct 2025
Abstract
Computational task offloading is a key technology in the field of vehicle-to-everything (V2X) communication, where security issues represent a core challenge throughout the offloading process. We must ensure the legitimacy of both the offloading entity (requesting vehicle) and the offloader (edge server or assisting vehicle), as well as the confidentiality and integrity of task data during transmission and processing. To this end, we propose a security authentication scheme for the V2X computational task offloading environment. We conducted rigorous formal and informal analyses of the scheme, supplemented by verification using the formal security verification tool AVISPA. This demonstrates that the proposed scheme possesses fundamental security properties in the V2X environment, capable of resisting various threats and attacks. Furthermore, compared to other related authentication schemes, our proposed solution exhibits favorable performance in terms of computational and communication overhead. Finally, we conducted network simulations using NS-3 to evaluate the scheme’s performance at the network layer. Overall, the proposed scheme provides reliable and scalable security guarantees tailored to the requirements of computing task offloading in V2X environments. Full article
(This article belongs to the Section Vehicular Sensing)
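The abstract does not disclose the scheme's message flows, so the sketch below only illustrates the general shape of a symmetric-key, nonce-based mutual authentication between a requesting vehicle and an edge server. The identities, the pre-shared key, and the session-key derivation are hypothetical stand-ins, not the paper's protocol.

```python
import hmac, hashlib, os, secrets

PSK = os.urandom(32)   # pre-shared key; assumed to come from prior registration with a trusted authority

def tag(*parts: bytes) -> bytes:
    """MAC over a message, binding identities and nonces together."""
    return hmac.new(PSK, b"|".join(parts), hashlib.sha256).digest()

# Vehicle -> edge server: identity, fresh nonce, and a MAC over both
veh_id, n_v = b"vehicle-42", secrets.token_bytes(16)
msg1 = (veh_id, n_v, tag(veh_id, n_v))

# Edge server verifies the vehicle, then proves itself by echoing the vehicle's nonce
assert hmac.compare_digest(msg1[2], tag(msg1[0], msg1[1]))
srv_id, n_s = b"edge-7", secrets.token_bytes(16)
msg2 = (srv_id, n_s, tag(srv_id, n_s, n_v))

# Vehicle verifies the server's proof; both sides derive a key to protect the offloaded task data
assert hmac.compare_digest(msg2[2], tag(msg2[0], msg2[1], n_v))
session_key = tag(b"session", n_v, n_s)
```

The nonces give freshness against replay and the MACs give mutual authentication and integrity, the kinds of properties the paper's formal analysis targets.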
16 pages, 6589 KB  
Article
An Enhanced Steganography-Based Botnet Communication Method in BitTorrent
by Gyeonggeun Park, Youngho Cho and Gang Qu
Electronics 2025, 14(20), 4081; https://doi.org/10.3390/electronics14204081 - 17 Oct 2025
Viewed by 45
Abstract
In a botnet attack, significant damage can occur when an attacker gains control over a large number of compromised network devices. Botnets have evolved from traditional centralized architectures to decentralized Peer-to-Peer (P2P) and hybrid forms. Recently, a steganography-based botnet (Stego-botnet) has emerged, which conceals command and control (C&C) messages within cover media such as images or video files shared over social networking sites (SNS). This type of Stego-botnet can evade conventional detection systems, as identifying hidden messages embedded in media transmitted via SNS platforms is inherently challenging. However, the inherent file size limitations of SNS platforms restrict the achievable payload capacity of such Stego-botnets. Moreover, the centralized characteristics of conventional botnet architectures expose attackers to a higher risk of identification. To overcome these challenges, researchers have explored network steganography techniques leveraging P2P networks such as BitTorrent, Google Suggest, and Skype. Among these, a hidden communication method utilizing Bitfield messages in BitTorrent has been proposed, demonstrating improved concealment compared to prior studies. Nevertheless, existing approaches still fail to achieve sufficient payload capacity relative to traditional digital steganography techniques. In this study, we extend P2P-based network steganography methods—particularly within the BitTorrent protocol—to address these limitations. We propose a novel botnet C&C communication model that employs network steganography over BitTorrent and validate its feasibility through experimental implementation. Furthermore, our results show that the proposed Stego-botnet achieves a higher payload capacity and outperforms existing Stego-botnet models in terms of both efficiency and concealment performance. Full article
25 pages, 3111 KB  
Article
Intrusion Detection in Industrial Control Systems Using Transfer Learning Guided by Reinforcement Learning
by Jokha Ali, Saqib Ali, Taiseera Al Balushi and Zia Nadir
Information 2025, 16(10), 910; https://doi.org/10.3390/info16100910 - 17 Oct 2025
Viewed by 52
Abstract
Securing Industrial Control Systems (ICSs) is critical, but it is made challenging by the constant evolution of cyber threats and the scarcity of labeled attack data in these specialized environments. Standard intrusion detection systems (IDSs) often fail to adapt when transferred to new networks with limited data. To address this, this paper introduces an adaptive intrusion detection framework that combines a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model with a novel transfer learning strategy. We employ a Reinforcement Learning (RL) agent to intelligently guide the fine-tuning process, which allows the IDS to dynamically adjust its parameters such as layer freezing and learning rates in real-time based on performance feedback. We evaluated our system in a realistic data-scarce scenario using only 50 labeled training samples. Our RL-Guided model achieved a final F1-score of 0.9825, significantly outperforming a standard neural fine-tuning model (0.861) and a target baseline model (0.759). Analysis of the RL agent’s behavior confirmed that it learned a balanced and effective policy for adapting the model to the target domain. We conclude that the proposed RL-guided approach creates a highly accurate and adaptive IDS that overcomes the limitations of static transfer learning methods. This dynamic fine-tuning strategy is a powerful and promising direction for building resilient cybersecurity defenses for critical infrastructure. Full article
(This article belongs to the Section Information Systems)
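The RL-guided fine-tuning loop described here can be pictured as a small control problem over discrete configurations. The sketch below is a schematic reconstruction, not the paper's agent: the action space, the epsilon-greedy bandit update, and the `fine_tune_and_score` stand-in (fine-tune the pretrained CNN-LSTM on the small labelled target set and return a validation F1) are all assumptions.

```python
import random

# Discrete actions: how many feature-extractor blocks to freeze, and which learning rate to use
ACTIONS = [(freeze, lr) for freeze in (0, 1, 2) for lr in (1e-4, 5e-4, 1e-3)]

def fine_tune_and_score(freeze_blocks: int, lr: float) -> float:
    """Hypothetical stand-in: fine-tune the pretrained CNN-LSTM with this configuration
    on the small labelled target set and return the validation F1-score."""
    raise NotImplementedError

def rl_guided_fine_tuning(episodes: int = 30, eps: float = 0.2, alpha: float = 0.5):
    q = {a: 0.0 for a in ACTIONS}              # value estimate per configuration
    prev_f1, best_f1 = 0.0, 0.0
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        f1 = fine_tune_and_score(*a)
        reward = f1 - prev_f1                  # reward = improvement in validation F1
        q[a] += alpha * (reward - q[a])        # incremental, bandit-style update
        prev_f1, best_f1 = f1, max(best_f1, f1)
    return max(q, key=q.get), best_f1
```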
22 pages, 11896 KB  
Article
Atmospheric Corrosion Kinetics and QPQ Coating Failure of 30CrMnSiA Steel Under a Deposited Salt Film
by Wenchao Li, Shilong Chen, Hui Xiao, Xiaofei Jiao, Yurong Wang, Shuwei Song, Songtao Yan and Ying Jin
Corros. Mater. Degrad. 2025, 6(4), 53; https://doi.org/10.3390/cmd6040053 - 16 Oct 2025
Viewed by 153
Abstract
Atmospheric corrosion in sand-dust environments is driven by chloride-bearing deposits, which sustain thin electrolyte layers on metal surfaces. We established a laboratory protocol to replicate this by extracting, formulating, and depositing a preliminary layer of mixed salts from natural dust onto samples, with humidity precisely set using the salt's deliquescence behavior. Degradation was tracked with SEM/EDS, 3D profilometry, XRD, and electrochemical analysis. Bare steel showed progressive yet decelerating attack as rust evolved from discrete islands to a lamellar network; while this densification limited transport, its internal cracks and interfacial gaps trapped chlorides, sustaining activity beneath the rust. In contrast, QPQ-treated steel remained largely protected, with damage localized at coating defects as raised rust nodules, while intact regions maintained low electrochemical activity. By coupling field-derived salt chemistries with deliquescence-guided humidity control and diagnostics across multiple scales, this study provides a reproducible laboratory pathway for predicting atmospheric corrosion. Full article
24 pages, 502 KB  
Article
Exception-Driven Security: A Risk-Aware Permission Adjustment for High-Availability Embedded Systems
by Mina Soltani Siapoush and Jim Alves-Foss
Mathematics 2025, 13(20), 3304; https://doi.org/10.3390/math13203304 - 16 Oct 2025
Viewed by 188
Abstract
Real-time operating systems (RTOSs) are widely used in embedded systems to ensure deterministic task execution, predictable responses, and concurrent operations, which are crucial for time-sensitive applications. However, the growing complexity of embedded systems, increased network connectivity, and dynamic software updates significantly expand the attack surface, exposing RTOSs to a variety of security threats, including memory corruption, privilege escalation, and side-channel attacks. Traditional security mechanisms often impose additional overhead that can compromise real-time guarantees. In this work, we present a Risk-aware Permission Adjustment (RPA) framework, implemented on CHERIoT RTOS, which is a CHERI-based operating system. RPA aims to detect anomalous behavior in real time, quantify security risks, and dynamically adjust permissions to mitigate potential threats. RPA maintains system continuity, enforces fine-grained access control, and progressively contains the impact of violations without interrupting critical operations. The framework was evaluated through targeted fault injection experiments, including 20 real-world CVEs and 15 abstract vulnerability classes, demonstrating its ability to mitigate both known and generalized attacks. Performance measurements indicate minimal runtime overhead while significantly reducing system downtime compared to conventional CHERIoT and FreeRTOS implementations. Full article
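RPA itself runs inside CHERIoT RTOS; the Python sketch below is only an abstract illustration of the control logic the abstract describes: anomalies raise a per-compartment risk score, and crossing a threshold progressively narrows that compartment's permissions instead of halting the system. The event weights, thresholds, and permission tiers are invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative anomaly severities and permission tiers (full access down to a contained mode)
SEVERITY = {"bounds_violation": 0.4, "bad_capability": 0.6, "repeated_fault": 0.2}
TIERS = [{"net", "flash_write", "sensor"}, {"net", "sensor"}, {"sensor"}, set()]

@dataclass
class Compartment:
    name: str
    risk: float = 0.0
    tier: int = 0
    permissions: set = field(default_factory=lambda: set(TIERS[0]))

    def report(self, event: str) -> None:
        """Accumulate risk from an observed anomaly and, at fixed thresholds, step the
        compartment down to a narrower permission tier while it keeps running."""
        self.risk = min(1.0, self.risk + SEVERITY.get(event, 0.1))
        new_tier = sum(self.risk >= t for t in (0.3, 0.6, 0.9))
        if new_tier > self.tier:
            self.tier, self.permissions = new_tier, set(TIERS[new_tier])

    def decay(self, amount: float = 0.05) -> None:
        """Gradually restore trust after a period of clean operation."""
        self.risk = max(0.0, self.risk - amount)
```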
18 pages, 1828 KB  
Article
A Hybrid Global-Split WGAN-GP Framework for Addressing Class Imbalance in IDS Datasets
by Jisoo Jang, Taesu Kim, Hyoseng Park and Dongkyoo Shin
Electronics 2025, 14(20), 4068; https://doi.org/10.3390/electronics14204068 - 16 Oct 2025
Viewed by 122
Abstract
The continuously evolving cyber threat landscape necessitates not only resilient defense mechanisms but also the sustained capacity development of security personnel. However, conventional training pipelines are predominantly dependent on static real-world datasets, which fail to adequately reflect the diversity and dynamics of emerging attack tactics. To address these limitations, this study employs a Wasserstein GAN with Gradient Penalty (WGAN-GP) to synthesize realistic network traffic that preserves both temporal and statistical characteristics. Using the CIC-IDS-2017 dataset, which encompasses diverse attack scenarios including brute-force, Heartbleed, botnet, DoS/DDoS, web, and infiltration attacks, two training methodologies are proposed. The first trains a single conditional WGAN-GP on the entire dataset to capture the global distribution. The second employs multiple generators tailored to individual attack types, while sharing a discriminator pretrained on the complete traffic set, thereby ensuring consistent decision boundaries across classes. The quality of the generated traffic was evaluated using a Train on Synthetic, Test on Real (TSTR) protocol with LSTM and Random Forest classifiers, along with distribution similarity measures in the embedding space. The proposed approach achieved a classification accuracy of 97.88% and a Fréchet Inception Distance (FID) score of 3.05, surpassing baseline methods by more than one percentage point. These results demonstrate that the proposed synthetic traffic generation strategy provides advantages in scalability, diversity, and privacy, thereby enriching cyber range training scenarios and supporting the development of adaptive intrusion detection systems that generalize more effectively to evolving threats. Full article
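The gradient-penalty term at the core of WGAN-GP is compact enough to show directly. This is a standard PyTorch sketch of that term only; the critic and generator networks, and the conditioning on attack class used in the paper, are omitted.

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP penalty: push the critic's gradient norm toward 1 at points interpolated
    between real and generated traffic feature vectors of shape (N, F)."""
    eps = torch.rand(real.size(0), 1, device=real.device)          # per-sample mixing weight
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mixed)
    grad, = torch.autograd.grad(outputs=score.sum(), inputs=mixed, create_graph=True)
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

# One critic step would then use, with the usual penalty weight of 10:
# loss_d = critic(fake).mean() - critic(real).mean() + 10.0 * gradient_penalty(critic, real, fake)
```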
23 pages, 2593 KB  
Article
Robust Offline Reinforcement Learning Through Causal Feature Disentanglement
by Ao Ma, Peng Li and Xiaolong Su
Electronics 2025, 14(20), 4064; https://doi.org/10.3390/electronics14204064 - 16 Oct 2025
Viewed by 145
Abstract
Offline reinforcement learning suffers from critical vulnerability to data corruption from sensor noise or adversarial attacks. Recent research has made progress by downweighting corrupted samples and repairing corrupted data, yet data corruption also induces feature entanglement that undermines policy robustness, and existing methods fail to identify the causal features behind the resulting performance degradation. To analyze causal relationships in corrupted data, we propose Robust Causal Feature Disentanglement (RCFD). Our method introduces a learnable causal feature disentanglement mechanism specifically designed for reinforcement learning scenarios, integrating the CausalVAE framework to disentangle causal features governing environmental dynamics from corruption-sensitive non-causal features. Theoretically, this disentanglement confers a robustness advantage under data corruption conditions. Concurrently, causality-preserving perturbation training injects Gaussian noise solely into non-causal features to generate counterfactual samples and is enhanced by dual-path feature alignment and contrastive learning for representation invariance. A dynamic graph diagnostic module further employs graph convolutional attention networks to model spatiotemporal relationships and identify corrupted edges through structural consistency analysis, enabling precise data repair. The method exhibits highly robust performance across D4RL benchmarks under diverse data corruption conditions, confirming that causal feature invariance helps bridge distributional gaps and promotes reliable deployment in complex real-world settings. Full article
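The perturbation step described here (Gaussian noise injected only into non-causal features to build counterfactual samples, followed by representation alignment) can be sketched in a few lines. This is an illustrative reconstruction, not the RCFD implementation: the flat split of the latent code into causal and non-causal halves, the dimensions, and the plain MSE alignment are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def counterfactual_alignment_loss(encoder, policy_head, obs: torch.Tensor,
                                  causal_dim: int = 16, sigma: float = 0.1) -> torch.Tensor:
    """Perturb only the non-causal part of the latent code and require the policy's output
    to stay invariant to that perturbation."""
    z = encoder(obs)                                       # (B, D) latent features
    z_causal, z_rest = z[:, :causal_dim], z[:, causal_dim:]
    z_rest_noisy = z_rest + sigma * torch.randn_like(z_rest)
    z_cf = torch.cat([z_causal, z_rest_noisy], dim=1)      # counterfactual sample
    return F.mse_loss(policy_head(z_cf), policy_head(z).detach())
```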
16 pages, 1101 KB  
Article
Analysis of Complex Network Attack and Defense Game Strategies Under Uncertain Value Criterion
by Chaoqi Fu and Zhuoying Shi
Entropy 2025, 27(10), 1066; https://doi.org/10.3390/e27101066 - 14 Oct 2025
Viewed by 177
Abstract
The study of attack–defense game decision making in critical infrastructure systems confronting intelligent adversaries, grounded in complex network theory, has emerged as a prominent topic in the field of network security. Most existing research centers on game-theoretic analysis under conditions of complete information and assumes that the attacker and defender share congruent criteria for evaluating target values. However, in reality, asymmetric value perception may lead to different evaluation criteria for both the offensive and defensive sides. This paper examines the game problem wherein the attacker and defender possess distinct target value evaluation criteria. The research findings reveal that both the attacker and defender have their own “advantage ranges” for value assessment, and topological heterogeneity is the reason for this phenomenon. Within their respective advantage ranges, the attacker or defender can adopt clear-cut strategies to secure optimal benefits—without needing to consider their opponents’ decisions. Outside these ranges, we explore how the attacker can leverage small-sample detection outcomes to probabilistically infer defenders’ strategies, and we further analyze the attackers’ preference strategy selections under varying acceptable security thresholds and penalty coefficients. The research results deliver more practical solutions for games involving uncertain value criteria. Full article
(This article belongs to the Section Complexity)
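A tiny numeric example makes the asymmetric-criteria setting concrete. The numbers below are invented: the two sides value the same five nodes by different criteria, the defender spreads a protection budget in proportion to its own valuation, and the attacker then picks the target with the best expected payoff under its own valuation.

```python
import numpy as np

attacker_value = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # e.g. a degree-based criterion
defender_value = np.array([0.2, 0.8, 0.6, 0.9, 0.1])   # e.g. a load- or betweenness-based criterion

p_defend = defender_value / defender_value.sum()        # defender's protection probabilities

c = 0.5                                                  # attacker's penalty for hitting a defended node
expected = (1 - p_defend) * attacker_value - p_defend * c
best = int(np.argmax(expected))
print(f"attacker's preferred target: node {best}, expected payoff {expected[best]:.2f}")
```

When one side values a node far more highly than the other, it can commit to a clear-cut strategy on that node regardless of the opponent's play, which is the intuition behind the "advantage ranges" described above.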
30 pages, 2764 KB  
Article
A Cloud Integrity Verification and Validation Model Using Double Token Key Distribution Model
by V. N. V. L. S. Swathi, G. Senthil Kumar and A. Vani Vathsala
Math. Comput. Appl. 2025, 30(5), 114; https://doi.org/10.3390/mca30050114 - 13 Oct 2025
Viewed by 200
Abstract
Numerous industries have begun using cloud computing. Among other things, this presents a plethora of novel security and dependability concerns. Thoroughly verifying cloud solutions to guarantee their correctness is beneficial, just as it is for any other security- and correctness-sensitive computer system. While there has been much research on distributed system validation and verification, little work has examined whether verification methods used for distributed systems can be applied directly to cloud computing. To show that cloud computing necessitates its own verification model and architecture, this research compares and contrasts the verification needs of distributed and cloud computing: their distinct commercial, architectural, programming, and security models call for distinct approaches to verification. The importance of cloud-based Service Level Agreements (SLAs) in testing is growing. To ensure service integrity, users must upload their selected and registered services to the cloud. Outsourced data may become stale or inconsistent not only because users fail to update it when they should, but also because of external issues such as the cloud service provider's copy becoming corrupted, lost, or destroyed. Integrity checking is effective only when the data the user stores on the cloud server is complete and undamaged; if verification reveals incomplete data, the damaged portions can be recovered. By optimizing resource allocation, the cloud realizes a network-accessible, elastically extensible shared resource pool that delivers computing resources to consumers as services. The development and operation of a cloud platform would be greatly facilitated by a mechanism that checks data integrity in the cloud, is independent of the storage services, and is compatible with the existing basic service architecture, so that users can easily see any discrepancies in their data. While cloud storage makes data outsourcing easier, the security and integrity of the outsourced data are often at risk when an untrusted cloud server is used. Consequently, there is a critical need for security measures that enable users to verify data integrity while maintaining reasonable computational and transmission overheads. This research proposes a cryptography-based public data integrity verification technique. In addition to protecting users' data from attacks such as replay, replacement, and forgery, the approach enables third-party authorities to check the integrity of outsourced data on users' behalf. Specifically, we propose a Cloud Integrity Verification and Validation Model using Double Token Key Distribution (CIVV-DTKD) to enhance cloud quality-of-service levels. Compared with traditional methods, the proposed model achieves higher verification and validation accuracy. Full article
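The DTKD construction itself is not detailed in the abstract, so the sketch below shows only the generic shape of block-level integrity auditing that such schemes build on: the owner tags each block before outsourcing, and an auditor later spot-checks random blocks. The helper names, block size, and single audit key are illustrative assumptions, not the paper's double-token protocol.

```python
import hmac, hashlib, os, secrets

BLOCK = 4096
audit_key = os.urandom(32)   # verification key held by the owner/auditor; stands in for the token scheme

def block_tags(data: bytes) -> list[bytes]:
    """Owner side: one MAC per fixed-size block, bound to the block index, computed before outsourcing."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(audit_key, str(i).encode() + b":" + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def spot_check(stored: bytes, tags: list[bytes], samples: int = 4) -> bool:
    """Auditor side: challenge a few random block indices and recompute their tags."""
    n = (len(stored) + BLOCK - 1) // BLOCK
    for i in (secrets.randbelow(n) for _ in range(samples)):
        blk = stored[i * BLOCK:(i + 1) * BLOCK]
        expect = hmac.new(audit_key, str(i).encode() + b":" + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expect, tags[i]):
            return False            # block i was corrupted, lost, or replaced
    return True
```

Binding the index into each tag defeats block replacement and reordering; resistance to replay would additionally need per-challenge nonces, which a complete scheme like the one proposed here would also have to handle.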
23 pages, 2499 KB  
Review
Application of Machine Learning and Deep Learning Techniques for Enhanced Insider Threat Detection in Cybersecurity: Bibliometric Review
by Hillary Kwame Ofori, Kwame Bell-Dzide, William Leslie Brown-Acquaye, Forgor Lempogo, Samuel O. Frimpong, Israel Edem Agbehadji and Richard C. Millham
Symmetry 2025, 17(10), 1704; https://doi.org/10.3390/sym17101704 - 11 Oct 2025
Viewed by 392
Abstract
Insider threats remain a persistent challenge in cybersecurity, as malicious or negligent insiders exploit legitimate access to compromise systems and data. This study presents a bibliometric review of 325 peer-reviewed publications from 2015 to 2025 to examine how machine learning (ML) and deep learning (DL) techniques for insider threat detection have evolved. The analysis investigates temporal publication trends, influential authors, international collaboration networks, thematic shifts, and algorithmic preferences. Results show a steady rise in research output and a transition from traditional ML models, such as decision trees and random forests, toward advanced DL methods, including long short-term memory (LSTM) networks, autoencoders, and hybrid ML–DL frameworks. Co-authorship mapping highlights China, India, and the United States as leading contributors, while keyword analysis underscores the increasing focus on behavior-based and eXplainable AI models. Symmetry emerges as a central theme, reflected in balancing detection accuracy with computational efficiency, and minimizing false positives while avoiding false negatives. The study recommends adaptive hybrid architectures, particularly Bidirectional LSTM–Variational Auto-Encoder (BiLSTM-VAE) models with eXplainable AI, as promising solutions that restore symmetry between detection accuracy and transparency, strengthening both technical performance and organizational trust. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Artificial Intelligence for Cybersecurity)
20 pages, 2594 KB  
Article
Evaluating the Generalization Gaps of Intrusion Detection Systems Across DoS Attack Variants
by Roshan Jameel, Khyati Marwah, Sheikh Mohammad Idrees and Mariusz Nowostawski
J. Cybersecur. Priv. 2025, 5(4), 85; https://doi.org/10.3390/jcp5040085 - 11 Oct 2025
Viewed by 330
Abstract
Intrusion Detection Systems (IDS) play a vital role in safeguarding networks, yet their effectiveness is often challenged, as cyberattacks evolve in new and unexpected ways. Machine learning models, although very powerful, usually perform well only on data that closely resembles what they were trained on. When faced with unfamiliar traffic, they often misclassify. In this work, we examine this generalization gap by training IDS models on one Denial-of-Service (DoS) variant, DoS Hulk, and testing them against other variants such as Goldeneye, Slowloris, and Slowhttptest. Our approach combines careful preprocessing, dimensionality reduction with Principal Component Analysis (PCA), and model training using Random Forests and Deep Neural Networks. To better understand model behavior, we tuned decision thresholds beyond the default 0.5 and found that small adjustments can significantly affect results. We also applied Shapley Additive Explanations (SHAP) to shed light on which features the models rely on, revealing a tendency to focus on fixed components that do not generalize well. Finally, using Uniform Manifold Approximation and Projection (UMAP), we visualized feature distributions and observed overlaps between training and testing datasets, but these did not translate into improved detection performance. Our findings highlight an important lesson: visual or apparent similarity between datasets does not guarantee generalization, and building robust IDS requires exposure to diverse attack patterns during training. Full article
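The train-on-one-variant, test-on-another pipeline with threshold tuning can be reproduced in outline with scikit-learn. This is a minimal sketch of that evaluation loop; loading the CIC-IDS-2017 splits (DoS Hulk for training, another variant for testing) is assumed and not shown, and the component count and threshold grid are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_variant_f1(X_hulk, y_hulk, X_other, y_other):
    """Train on one DoS variant, evaluate on another, sweeping the decision threshold
    instead of relying on the default 0.5."""
    model = make_pipeline(StandardScaler(), PCA(n_components=20),
                          RandomForestClassifier(n_estimators=200, random_state=0))
    model.fit(X_hulk, y_hulk)
    proba = model.predict_proba(X_other)[:, 1]             # probability of the attack class
    scores = {t: f1_score(y_other, (proba >= t).astype(int)) for t in np.arange(0.1, 0.91, 0.1)}
    return max(scores.items(), key=lambda kv: kv[1])        # (best threshold, best F1)
```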
28 pages, 4006 KB  
Article
Resilience Assessment of Cascading Failures in Dual-Layer International Railway Freight Networks Based on Coupled Map Lattice
by Si Chen, Zhiwei Lin, Qian Zhang and Yinying Tang
Appl. Sci. 2025, 15(20), 10899; https://doi.org/10.3390/app152010899 - 10 Oct 2025
Viewed by 330
Abstract
The China Railway Express (China-Europe container railway freight transport) is pivotal to Eurasian freight, yet its transcontinental railway faces escalating cascading risks. We develop a coupled map lattice (CML) model representing the physical infrastructure layer and the operational traffic layer concurrently to quantify and mitigate cascading failures. Twenty critical stations are identified by integrating TOPSIS entropy weighting with grey relational analysis in dual-layer networks. The enhanced CML embeds node-degree, edge-betweenness, and freight-flow coupling coefficients, and introduces two adaptive cargo-redistribution rules (distance-based and load-based) for real-time rerouting. Extensive simulations reveal that network resilience peaks when the coupling coefficient equals 0.4. Under targeted attacks, cascading failures propagate within three to four iterations and reduce network efficiency by more than 50%, underscoring the critical role of high-importance nodes. Distance-based redistribution outperforms load-based redistribution after node failures, whereas the opposite holds after edge failures. These findings indicate that redundant border corridors and intelligent monitoring should be deployed, and that redistribution rules and multi-tier emergency response systems should be chosen according to the failure scenario. The proposed methodology provides a dual-layer analytical framework for addressing cascading risks in transcontinental networks, offering actionable guidance for the intelligent transportation management of international intermodal freight networks. Full article
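The basic coupled map lattice dynamics behind the model are easy to state in code. The sketch below is a single-layer toy version only: it uses the standard logistic-map CML update with degree-normalized neighbour coupling and one attacked node, and omits the paper's dual-layer structure, betweenness and flow coupling coefficients, and cargo-redistribution rules. Parameters are illustrative.

```python
import numpy as np

def cml_cascade(A: np.ndarray, eps: float = 0.4, steps: int = 30,
                attacked: int = 0, R: float = 1.5) -> float:
    """Coupled map lattice on a 0/1 adjacency matrix A:
    x_i(t+1) = |(1 - eps) * f(x_i(t)) + eps / k_i * sum_j A_ij * f(x_j(t))|, with f(x) = 4x(1-x).
    A node fails once its state reaches 1; its spike couples to neighbours for one step,
    after which the node is frozen at zero."""
    f = lambda x: 4.0 * x * (1.0 - x)                      # chaotic logistic local dynamics
    deg = np.maximum(A.sum(axis=1), 1)
    x = np.random.default_rng(0).uniform(0.2, 0.8, size=A.shape[0])
    x[attacked] += R                                       # external attack pushes one node past 1
    failed = np.zeros(A.shape[0], dtype=bool)
    for _ in range(steps):
        failed |= x >= 1                                   # record nodes whose state crossed 1
        fx = f(x)                                          # failing nodes still couple this step
        x = np.abs((1 - eps) * fx + eps * (A @ fx) / deg)
        x[failed] = 0.0                                    # then failed nodes drop out
    return failed.mean()                                   # fraction of the network that failed
```

Sweeping `eps` over a grid with a fixed attacked node is the toy analogue of the coupling-coefficient analysis reported above.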