Search Results (187)

Search Parameters:
Keywords = denial environment

19 pages, 13424 KiB  
Article
A Comprehensive Analysis of Security Challenges in ZigBee 3.0 Networks
by Akbar Ghobakhlou, Duaa Zuhair Al-Hamid, Sara Zandi and James Cato
Sensors 2025, 25(15), 4606; https://doi.org/10.3390/s25154606 - 25 Jul 2025
Viewed by 262
Abstract
ZigBee, a wireless technology standard for Internet of Things (IoT) devices based on IEEE 802.15.4, faces significant security challenges that threaten the confidentiality, integrity, and availability of its networks. Despite using the 128-bit Advanced Encryption Standard (AES) with symmetric keys for node authentication and data confidentiality, ZigBee’s design constraints, such as low cost and low power, have allowed security issues to persist. While ZigBee 3.0 introduces enhanced security features such as install codes and trust centre link key updates, there remains a lack of empirical research evaluating their effectiveness in real-world deployments. This research addresses that gap by conducting a comprehensive, hardware-based analysis of ZigBee 3.0 networks using XBee 3 radio modules and ZigBee-compatible devices. We investigate three core security issues: (a) the security of symmetric keys, focusing on vulnerabilities that could allow attackers to obtain these keys; (b) the impact of compromised symmetric keys on network confidentiality; and (c) susceptibility to Denial-of-Service (DoS) attacks due to insufficient protection mechanisms. Our experiments simulate realistic attack scenarios under both Centralised and Distributed Security Models to assess the protocol’s resilience. The findings reveal that while ZigBee 3.0 improves upon earlier versions, certain vulnerabilities remain exploitable. We also propose practical security controls and best practices to mitigate these attacks and enhance network security. This work contributes novel insights into the operational security of ZigBee 3.0, offering guidance for secure IoT deployments and advancing the understanding of protocol-level defences in constrained environments.
(This article belongs to the Section Communications)

31 pages, 2736 KiB  
Article
Unseen Attack Detection in Software-Defined Networking Using a BERT-Based Large Language Model
by Mohammed N. Swileh and Shengli Zhang
AI 2025, 6(7), 154; https://doi.org/10.3390/ai6070154 - 11 Jul 2025
Viewed by 602
Abstract
Software-defined networking (SDN) represents a transformative shift in network architecture by decoupling the control plane from the data plane, enabling centralized and flexible management of network resources. However, this architectural shift introduces significant security challenges, as SDN’s centralized control becomes an attractive target for various types of attacks. While the body of current research on attack detection in SDN has yielded important results, several critical gaps remain that require further exploration. Addressing challenges in feature selection, broadening the scope beyond Distributed Denial of Service (DDoS) attacks, strengthening attack decisions based on multi-flow analysis, and building models capable of detecting unseen attacks that they have not been explicitly trained on are essential steps toward advancing security measures in SDN environments. In this paper, we introduce a novel approach that leverages Natural Language Processing (NLP) and the pre-trained Bidirectional Encoder Representations from Transformers (BERT)-base-uncased model to enhance the detection of attacks in SDN environments. Our approach transforms network flow data into a format interpretable by language models, allowing BERT-base-uncased to capture intricate patterns and relationships within network traffic. By utilizing Random Forest for feature selection, we optimize model performance and reduce computational overhead, ensuring efficient and accurate detection. Attack decisions are made based on several flows, providing stronger and more reliable detection of malicious traffic. Furthermore, our proposed method is specifically designed to detect previously unseen attacks, offering a solution for identifying threats that the model was not explicitly trained on. To rigorously evaluate our approach, we conducted experiments in two scenarios: one focused on detecting known attacks, achieving an accuracy, precision, recall, and F1-score of 99.96%, and another on detecting previously unseen attacks, where our model achieved 99.96% in all metrics, demonstrating the robustness and precision of our framework in detecting evolving threats, and reinforcing its potential to improve the security and resilience of SDN networks.
(This article belongs to the Special Issue Artificial Intelligence for Network Management)
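The core preprocessing idea above (rendering a network flow as text a language model can tokenize) can be sketched as follows. This is a hypothetical illustration, not the authors' exact encoding: the field names, the `key=value` template, and the sample flow are all assumptions.

```python
# Hypothetical sketch: serialize a network-flow record into a text
# "sentence" that a BERT-style tokenizer could consume. Field names
# and the template are illustrative, not the paper's actual encoding.

def flow_to_text(flow: dict) -> str:
    """Render selected flow features as a space-separated token string."""
    fields = ["proto", "duration", "pkts", "bytes", "flags"]
    return " ".join(f"{k}={flow.get(k, 'na')}" for k in fields)

flow = {"proto": "tcp", "duration": 0.42, "pkts": 7, "bytes": 512, "flags": "SYN"}
text = flow_to_text(flow)
# text == "proto=tcp duration=0.42 pkts=7 bytes=512 flags=SYN"
```

A fixed field order matters here: tokenizers see position, so a stable template keeps equivalent flows textually comparable.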

21 pages, 4241 KiB  
Article
Federated Learning-Driven Cybersecurity Framework for IoT Networks with Privacy Preserving and Real-Time Threat Detection Capabilities
by Milad Rahmati and Antonino Pagano
Informatics 2025, 12(3), 62; https://doi.org/10.3390/informatics12030062 - 4 Jul 2025
Cited by 1 | Viewed by 770
Abstract
The rapid expansion of the Internet of Things (IoT) ecosystem has transformed industries but also exposed significant cybersecurity vulnerabilities. Traditional centralized methods for securing IoT networks struggle to balance privacy preservation with real-time threat detection. This study presents a Federated Learning-Driven Cybersecurity Framework designed for IoT environments, enabling decentralized data processing through local model training on edge devices to ensure data privacy. Secure aggregation using homomorphic encryption supports collaborative learning without exposing sensitive information. The framework employs GRU-based recurrent neural networks (RNNs) for anomaly detection, optimized for resource-constrained IoT networks. Experimental results demonstrate over 98% accuracy in detecting threats such as distributed denial-of-service (DDoS) attacks, with a 20% reduction in energy consumption and a 30% reduction in communication overhead, showcasing the framework’s efficiency over traditional centralized approaches. This work addresses critical gaps in IoT cybersecurity by integrating federated learning with advanced threat detection techniques. It offers a scalable, privacy-preserving solution for diverse IoT applications, with future directions including blockchain integration for model aggregation traceability and quantum-resistant cryptography to enhance security.
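The federated aggregation step this abstract relies on (averaging locally trained models rather than sharing raw data) reduces, in its simplest FedAvg form, to a size-weighted average of client weights. A minimal sketch, assuming plain Python lists stand in for the paper's GRU parameters; client counts and values are illustrative:

```python
# Minimal FedAvg sketch: average per-client model weights, weighted by
# local dataset size. Pure-Python stand-in for real model tensors; the
# clients and layer shape below are illustrative assumptions.

def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-client weight vectors (lists of floats)."""
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n)
    ]

# two clients; the second holds 3x as much data, so it dominates the average
w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# w == [2.5, 3.5]
```

In the paper's setting the averaged update would additionally pass through homomorphic-encryption-based secure aggregation, which this sketch omits.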

31 pages, 28041 KiB  
Article
Cyberattack Resilience of Autonomous Vehicle Sensor Systems: Evaluating RGB vs. Dynamic Vision Sensors in CARLA
by Mustafa Sakhai, Kaung Sithu, Min Khant Soe Oke and Maciej Wielgosz
Appl. Sci. 2025, 15(13), 7493; https://doi.org/10.3390/app15137493 - 3 Jul 2025
Viewed by 517
Abstract
Autonomous vehicles (AVs) rely on a heterogeneous sensor suite of RGB cameras, LiDAR, GPS/IMU, and emerging event-based dynamic vision sensors (DVS) to perceive and navigate complex environments. However, these sensors can be deceived by realistic cyberattacks, undermining safety. In this work, we systematically implement seven attack vectors in the CARLA simulator—salt and pepper noise, event flooding, depth map tampering, LiDAR phantom injection, GPS spoofing, denial of service, and steering bias control—and measure their impact on a state-of-the-art end-to-end driving agent. We then equip each sensor with tailored defenses (e.g., adaptive median filtering for RGB and spatial clustering for DVS) and integrate an unsupervised anomaly detector (EfficientAD from anomalib) trained exclusively on benign data. Our detector achieves clear separation between normal and attacked conditions (mean RGB anomaly scores of 0.00 vs. 0.38; DVS: 0.61 vs. 0.76), yielding over 95% detection accuracy with fewer than 5% false positives. Defense evaluations reveal that GPS spoofing is fully mitigated, whereas RGB- and depth-based attacks still induce 30–45% trajectory drift despite filtering. Notably, our research-focused evaluation of DVS sensors suggests potential intrinsic resilience advantages in high-dynamic-range scenarios, though their asynchronous output necessitates carefully tuned thresholds. These findings underscore the critical role of multi-modal anomaly detection and indicate that DVS sensors exhibit greater intrinsic resilience in high-dynamic-range scenarios, suggesting their potential to enhance AV cybersecurity when integrated with conventional sensors.
(This article belongs to the Special Issue Intelligent Autonomous Vehicles: Development and Challenges)
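Median filtering, the classic counter to the salt-and-pepper noise attack mentioned above, can be sketched in a few lines. This fixed 3x3 window is a simplification: the paper's adaptive variant adjusts the window size, and real frames would be NumPy arrays rather than the single-channel list-of-lists used here.

```python
# Sketch of a 3x3 median filter against salt-and-pepper noise, on a
# single-channel grid for brevity (a real RGB defense filters each
# channel of an image array). Edge pixels are handled by clamping.

def median_filter3(img):
    """Apply a 3x3 median filter to a 2D list of ints."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ]
            out[y][x] = sorted(window)[4]  # median of the 9 samples
    return out

noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]  # one "salt" pixel
clean = median_filter3(noisy)
# the isolated 255 outlier is replaced by the neighborhood median, 10
```

The median is robust exactly because an isolated impulse never reaches the middle of the sorted window, which is why this filter suppresses salt-and-pepper noise while largely preserving edges.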

17 pages, 2101 KiB  
Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Viewed by 430
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments.
(This article belongs to the Section Computational Engineering)

37 pages, 18679 KiB  
Article
Real-Time DDoS Detection in High-Speed Networks: A Deep Learning Approach with Multivariate Time Series
by Drixter V. Hernandez, Yu-Kuen Lai and Hargyo T. N. Ignatius
Electronics 2025, 14(13), 2673; https://doi.org/10.3390/electronics14132673 - 1 Jul 2025
Viewed by 480
Abstract
The exponential growth of Distributed Denial-of-Service (DDoS) attacks in high-speed networks presents significant real-time detection and mitigation challenges. The existing detection frameworks are categorized into flow-based and packet-based detection approaches. Flow-based approaches usually suffer from high latency and controller overhead in high-volume traffic. In contrast, packet-based approaches are prone to high false-positive rates and limited attack classification, resulting in delayed mitigation responses. To address these limitations, we propose a real-time DDoS detection architecture that combines hardware-accelerated statistical preprocessing with GPU-accelerated deep learning models. The raw packet header information is transformed into multivariate time series data to enable classification of complex traffic patterns using Temporal Convolutional Networks (TCN), Long Short-Term Memory (LSTM) networks, and Transformer architectures. We evaluated the proposed system using experiments conducted under low to high-volume background traffic to validate each model’s robustness and adaptability in a real-time network environment. The experiments are conducted across different time window lengths to determine the trade-offs between detection accuracy and latency. The results show that larger observation windows improve detection accuracy using TCN and LSTM models and consistently outperform the Transformer in high-volume scenarios. Regarding model latency, TCN and Transformer exhibit constant latency across all window sizes. We also used SHAP (Shapley Additive exPlanations) analysis to identify the most discriminative traffic features, enhancing model interpretability and supporting feature selection for computational efficiency. Among the experimented models, TCN achieves the most balance between detection performance and latency, making it an applicable model for the proposed architecture. These findings validate the feasibility of the proposed architecture and support its potential as a real-time DDoS detection application in a realistic high-speed network.
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
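The windowing step described above (turning per-interval packet-header statistics into fixed-length multivariate sequences for a TCN or LSTM) can be sketched generically. The feature names, window length, and sample values below are illustrative assumptions, not the paper's configuration:

```python
# Sketch: slice a stream of per-interval feature vectors into
# overlapping fixed-length windows, the input shape expected by
# sequence models such as TCNs and LSTMs. Features and lengths
# here are illustrative.

def make_windows(series, length, stride=1):
    """Slice a list of feature vectors into fixed-length windows."""
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, stride)]

# e.g. [syn_rate, pkt_rate] sampled once per interval; the middle
# intervals mimic an attack burst
series = [[1, 10], [2, 12], [9, 80], [9, 85], [1, 11]]
windows = make_windows(series, length=3)
# 3 overlapping windows: intervals 0-2, 1-3, 2-4
```

The window length is exactly the accuracy/latency knob the abstract discusses: longer windows give the model more temporal context but delay the earliest possible verdict.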

21 pages, 2109 KiB  
Article
Securing IoT Communications via Anomaly Traffic Detection: Synergy of Genetic Algorithm and Ensemble Method
by Behnam Seyedi and Octavian Postolache
Sensors 2025, 25(13), 4098; https://doi.org/10.3390/s25134098 - 30 Jun 2025
Viewed by 302
Abstract
The rapid growth of the Internet of Things (IoT) has revolutionized various industries by enabling interconnected devices to exchange data seamlessly. However, IoT systems face significant security challenges due to decentralized architectures, resource-constrained devices, and dynamic network environments. These challenges include denial-of-service (DoS) attacks, anomalous network behaviors, and data manipulation, which threaten the security and reliability of IoT ecosystems. New methods based on machine learning have been reported in the literature, addressing topics such as intrusion detection and prevention. This paper proposes an advanced anomaly detection framework for IoT networks organized into several phases. In the first phase, data preprocessing is conducted using techniques like the Median-KS Test to remove noise, handle missing values, and balance datasets, ensuring a clean and structured input for subsequent phases. The second phase focuses on optimal feature selection using a Genetic Algorithm enhanced with eagle-inspired search strategies. This approach identifies the most significant features, reduces dimensionality, and enhances computational efficiency without sacrificing accuracy. In the final phase, an ensemble classifier combines the strengths of the Decision Tree, Random Forest, and XGBoost algorithms to achieve the accurate and robust detection of anomalous behaviors. This multi-step methodology ensures adaptability and scalability in handling diverse IoT scenarios. The evaluation results demonstrate the superiority of the proposed framework over existing methods. It achieves a 12.5% improvement in accuracy (98%), a 14% increase in detection rate (95%), a 9.3% reduction in false positive rate (10%), and a 10.8% decrease in false negative rate (5%). These results underscore the framework’s effectiveness, reliability, and scalability for securing real-world IoT networks against evolving cyber threats.
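The final-phase ensemble above combines three classifiers' verdicts; the simplest way to do that is hard (majority) voting. A minimal sketch, where the per-model predictions are stand-ins for trained Decision Tree, Random Forest, and XGBoost models (the paper may well use a weighted or soft-voting combination instead):

```python
# Hard-voting sketch for a three-model ensemble. The per-model
# verdicts below are illustrative stand-ins for trained classifiers.

from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the base classifiers."""
    return Counter(predictions).most_common(1)[0][0]

verdicts = {"decision_tree": "anomalous", "random_forest": "anomalous", "xgboost": "benign"}
label = majority_vote(list(verdicts.values()))
# label == "anomalous": two of three models agree
```

With an odd number of diverse base models, hard voting cannot tie on binary labels, which is part of why three-member ensembles are a common choice.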

29 pages, 838 KiB  
Article
Blockchain-Based Secure Authentication Protocol for Fog-Enabled IoT Environments
by Taehun Kim, Deokkyu Kwon, Yohan Park and Youngho Park
Mathematics 2025, 13(13), 2142; https://doi.org/10.3390/math13132142 - 30 Jun 2025
Viewed by 281
Abstract
Fog computing technology grants computing and storage resources to nearby IoT devices, enabling a fast response and ensuring data locality. Thus, fog-enabled IoT environments provide real-time and convenient services to users in healthcare, agriculture, and road traffic monitoring. However, messages are exchanged over public channels and can therefore be targeted by various security attacks. Hence, secure authentication protocols are critical for reliable fog-enabled IoT services. In 2024, Harbi et al. proposed a remote user authentication protocol for fog-enabled IoT environments. They claimed that their protocol can resist various security attacks and ensure session key secrecy. Unfortunately, we have identified several vulnerabilities in their protocol, including susceptibility to insider, denial-of-service (DoS), and stolen-verifier attacks. We also prove that their protocol does not ensure user untraceability and that it has an authentication problem. To address the security problems of their protocol, we propose a security-enhanced blockchain-based secure authentication protocol for fog-enabled IoT environments. We demonstrate the security robustness of the proposed protocol via informal and formal analyses, including Burrows–Abadi–Needham (BAN) logic, the Real-or-Random (RoR) model, and Automated Verification of Internet Security Protocols and Applications (AVISPA) simulation. Moreover, we compare the proposed protocol with related protocols to demonstrate its advantages in terms of efficiency and security. Finally, we conduct simulations using NS-3 to verify its real-world applicability.
(This article belongs to the Special Issue Advances in Mobile Network and Intelligent Communication)

29 pages, 2303 KiB  
Article
Denial-of-Service Attacks on Permissioned Blockchains: A Practical Study
by Mohammad Pishdar, Yixing Lei, Khaled Harfoush and Jawad Manzoor
J. Cybersecur. Priv. 2025, 5(3), 39; https://doi.org/10.3390/jcp5030039 - 30 Jun 2025
Viewed by 687
Abstract
Hyperledger Fabric (HLF) is a leading permissioned blockchain platform designed for enterprise applications. However, it faces significant security risks from Denial-of-Service (DoS) attacks targeting its core components. This study systematically investigated network-level DoS attack vectors against HLF, with a focus on threats to its ordering service, Membership Service Provider (MSP), peer nodes, consensus protocols, and architectural dependencies. In this research, we performed experiments on an HLF test bed to demonstrate how compromised components can be exploited to launch DoS attacks and degrade the performance and availability of the blockchain network. Key attack scenarios included manipulating block sizes to induce latency, discarding blocks to disrupt consensus, issuing malicious certificates via MSP, colluding peers to sabotage validation, flooding external clients to overwhelm resources, misconfiguring Raft consensus parameters, and disabling CouchDB to cripple data access. The experimental results reveal severe impacts on availability, including increased latency, decreased throughput, and inaccessibility of the ledger. Our findings emphasize the need for proactive monitoring and robust defense mechanisms to detect and mitigate DoS threats. Finally, we discuss some future research directions, including lightweight machine learning tailored to HLF, enhanced monitoring by aggregating logs from multiple sources, and collaboration with industry stakeholders to deploy pilot studies of security-enhanced HLF in operational environments.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)

16 pages, 499 KiB  
Article
Adaptive Sampling Framework for Imbalanced DDoS Traffic Classification
by Hongjoong Kim, Deokhyeon Ham and Kyoung-Sook Moon
Sensors 2025, 25(13), 3932; https://doi.org/10.3390/s25133932 - 24 Jun 2025
Viewed by 440
Abstract
Imbalanced data is a major challenge in network security applications, particularly in DDoS (Distributed Denial of Service) traffic classification, where detecting minority classes is critical for timely and cost-effective defense. Existing machine learning and deep learning models often fail to accurately classify such underrepresented attack types, leading to significant degradation in performance. In this study, we propose an adaptive sampling strategy that combines oversampling and undersampling techniques to address the class imbalance problem at the data level. We evaluated our approach using benchmark DDoS traffic datasets, where it demonstrated improved classification performance across key metrics, including accuracy, recall, and F1-score, compared to baseline models and conventional sampling methods. The results indicate that the proposed adaptive sampling approach improved minority class detection performance under the tested conditions, thereby improving the reliability of sensor-driven security systems. This work contributes a robust and adaptable method for imbalanced data classification, with potential applications across simulated sensor environments where anomaly detection is essential.
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors 2025)
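The combined over/undersampling idea above can be sketched with a single per-class target size: dominant classes are randomly subsampled, rare classes are duplicated by resampling with replacement. The fixed target and toy class sizes are illustrative assumptions; the paper's adaptive strategy chooses targets more cleverly.

```python
# Sketch of combined random over/undersampling toward a per-class
# target size. Fixed target and toy data are illustrative; the paper's
# adaptive strategy selects targets per class.

import random

def rebalance(samples_by_class, target, seed=0):
    """Return a dict with exactly `target` samples per class."""
    rng = random.Random(seed)
    out = {}
    for label, samples in samples_by_class.items():
        if len(samples) >= target:
            out[label] = rng.sample(samples, target)          # undersample
        else:
            extra = rng.choices(samples, k=target - len(samples))
            out[label] = samples + extra                      # oversample
    return out

data = {"benign": list(range(100)), "ddos": list(range(5))}
balanced = rebalance(data, target=20)
# both classes now contain exactly 20 samples
```

Resampling at the data level like this leaves the downstream classifier unchanged, which is why sampling strategies compose freely with any of the models compared in the paper.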

36 pages, 6279 KiB  
Article
Eel and Grouper Optimization-Based Fuzzy FOPI-TIDμ-PIDA Controller for Frequency Management of Smart Microgrids Under the Impact of Communication Delays and Cyberattacks
by Kareem M. AboRas, Mohammed Hamdan Alshehri and Ashraf Ibrahim Megahed
Mathematics 2025, 13(13), 2040; https://doi.org/10.3390/math13132040 - 20 Jun 2025
Cited by 1 | Viewed by 486
Abstract
In a smart microgrid (SMG) system that deals with unpredictable loads and incorporates fluctuating solar and wind energy, it is crucial to have an efficient method for controlling frequency in order to balance the power between generation and load. In the last decade, cyberattacks have become a growing menace, and SMG systems are commonly targeted by such attacks. This study proposes a framework for the frequency management of an SMG system using an innovative combination of a smart controller (i.e., the Fuzzy Logic Controller (FLC)) with three conventional cascaded controllers, including Fractional-Order PI (FOPI), Tilt Integral Fractional Derivative (TIDμ), and Proportional Integral Derivative Acceleration (PIDA). The recently released Eel and Grouper Optimization (EGO) algorithm is used to fine-tune the parameters of the proposed controller. This algorithm was inspired by how eels and groupers work together and find food in marine ecosystems. The Integral Time Squared Error (ITSE) of the frequency fluctuation (ΔF) around the nominal value is used as an objective function for the optimization process. A diesel engine generator (DEG), renewable sources such as wind turbine generators (WTGs), solar photovoltaics (PVs), and storage components such as flywheel energy storage systems (FESSs) and battery energy storage systems (BESSs) are all included in the SMG system. Additionally, electric vehicles (EVs) are also installed. In the beginning, the supremacy of the adopted EGO over the Gradient-Based Optimizer (GBO) and the Smell Agent Optimizer (SAO) can be witnessed by taking into consideration the optimization process of the recommended regulator’s parameters, in addition to the optimum design of the membership functions of the fuzzy logic controller by each of these distinct algorithms. The subsequent phase showcases the superiority of the proposed EGO-based FFOPI-TIDμ-PIDA structure compared to EGO-based conventional structures like PID and EGO-based intelligent structures such as Fuzzy PID (FPID) and Fuzzy PD-(1 + PI) (FPD-(1 + PI)); this is across diverse symmetry operating conditions and in the presence of various cyberattacks that result in a denial of service (DoS) and signal transmission delays. Based on the simulation results from the MATLAB/Simulink R2024b environment, the presented control methodology improves the dynamics of the SMG system by about 99.6% when compared to the other three control methodologies. The fitness function dropped to 0.00069 for the FFOPI-TIDμ-PIDA controller, which is about 200 times lower than the other controllers that were compared.
(This article belongs to the Special Issue Mathematical Methods Applied in Power Systems, 2nd Edition)
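The ITSE objective used for tuning is the standard time-weighted integral of squared error, ITSE = ∫ t · e(t)² dt, which penalizes deviations that persist late in the response more than an initial transient. A discrete sketch, with an illustrative frequency-deviation trace (the paper's e(t) is ΔF from the Simulink model):

```python
# Discrete ITSE sketch: sum of t * e(t)^2 * dt over sampled error
# values. The frequency-deviation samples below are illustrative.

def itse(errors, dt):
    """Discrete Integral Time Squared Error for a sampled signal e(t)."""
    return sum((i * dt) * (e ** 2) * dt for i, e in enumerate(errors))

delta_f = [0.5, 0.3, 0.1, 0.0]   # ΔF samples decaying toward nominal
score = itse(delta_f, dt=0.1)
```

The time weight is what makes ITSE attractive for frequency regulation: a controller that damps ΔF quickly scores far better than one with the same peak deviation but a slow settle.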

21 pages, 2734 KiB  
Article
Quantifying Cyber Resilience: A Framework Based on Availability Metrics and AUC-Based Normalization
by Harksu Cho, Ji-Hyun Sung, Hye-Jin Kang, Jisoo Jang and Dongkyoo Shin
Electronics 2025, 14(12), 2465; https://doi.org/10.3390/electronics14122465 - 17 Jun 2025
Viewed by 475
Abstract
This study presents a metric selection framework and a normalization method for the quantitative assessment of cyber resilience, with a specific focus on availability as a core dimension. To develop a generalizable evaluation model, service types from 1124 organizations were categorized, and candidate metrics applicable across diverse operational environments were identified. Ten quantitative metrics were derived based on five core selection criteria—objectivity, reproducibility, scalability, practicality, and relevance to resilience—while adhering to the principles of mutual exclusivity and collective exhaustiveness. To validate the framework, two availability-oriented metrics—Transactions per Second (TPS) and Connections per Second (CPS)—were empirically evaluated in a simulated denial-of-service environment using a TCP SYN flood attack scenario. The experiment included three phases: normal operation, attack, and recovery. An Area Under the Curve (AUC)-based Normalized Resilience Index (NRI) was introduced to quantify performance degradation and recovery, using each organization’s Recovery Time Objective (RTO) as a reference baseline. This approach facilitates objective, interpretable comparisons of resilience performance across systems with varying service conditions. The findings demonstrate the practical applicability of the proposed metrics and normalization technique for evaluating cyber resilience and underscore their potential in informing resilience policy development, operational benchmarking, and technical decision-making.
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)
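The AUC idea behind the NRI can be sketched generically: integrate the observed service level (e.g., TPS as a fraction of its baseline) over the incident window and normalize by the ideal, fully available area. The formula and the sample curve below are illustrative assumptions; the paper's index additionally folds in each organization's RTO as a baseline.

```python
# Sketch of an AUC-style resilience index: trapezoidal area under the
# normalized service-level curve divided by the ideal (constant 1.0)
# area. Samples and normalization are illustrative, not the paper's NRI.

def resilience_index(service_levels, dt):
    """AUC of a sampled service-level curve, normalized to [0, 1]."""
    auc = sum((a + b) / 2 * dt
              for a, b in zip(service_levels, service_levels[1:]))
    ideal = dt * (len(service_levels) - 1)   # area if service never degraded
    return auc / ideal

# normal operation -> SYN-flood dip -> recovery
levels = [1.0, 1.0, 0.2, 0.4, 0.9, 1.0]
nri = resilience_index(levels, dt=1.0)
# a value near 1.0 means little lost service; deep or long dips pull it down
```

Normalizing by the ideal area is what makes systems with different throughput baselines and window lengths comparable, which is the stated goal of the NRI.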

24 pages, 1347 KiB  
Article
SecFedDNN: A Secure Federated Deep Learning Framework for Edge–Cloud Environments
by Roba H. Alamir, Ayman Noor, Hanan Almukhalfi, Reham Almukhlifi and Talal H. Noor
Systems 2025, 13(6), 463; https://doi.org/10.3390/systems13060463 - 12 Jun 2025
Cited by 1 | Viewed by 1114
Abstract
Cyber threats that target Internet of Things (IoT) and edge computing environments are growing in scale and complexity, which necessitates the development of security solutions that are both robust and scalable while also protecting privacy. Edge scenarios require new intrusion detection solutions because traditional centralized intrusion detection systems (IDSs) fall short in protecting data privacy, create excessive communication overhead, and offer limited contextual adaptation. This paper introduces the SecFedDNN framework, which combines federated deep learning (FDL) capabilities to protect edge–cloud environments from cyberattacks such as Distributed Denial of Service (DDoS), Denial of Service (DoS), and injection attacks. SecFedDNN performs edge-level pre-aggregation filtering through Layer-Adaptive Sparsified Model Aggregation (LASA) for anomaly detection while supporting balanced multi-class evaluation across federated clients. A Deep Neural Network (DNN) forms the main model that trains concurrently with multiple clients through the Federated Averaging (FedAvg) protocol while keeping raw data local. We utilized Google Cloud Platform (GCP) along with Google Colaboratory (Colab) to create five federated clients for simulating attacks on the TON_IoT dataset, which we balanced across selected attack types. Initial tests showed DNN outperformed Long Short-Term Memory (LSTM) and SimpleNN in centralized environments by providing higher accuracy at lower computational costs. Following federated training, the SecFedDNN framework achieved an average accuracy and precision above 84% and recall and F1-score above 82% across all clients with suitable response times for real-time deployment. The study proves that FDL can strengthen intrusion detection across distributed edge networks without compromising data privacy guarantees.

25 pages, 1292 KiB  
Article
Trust Domain Extensions Guest Fuzzing Framework for Security Vulnerability Detection
by Eran Dahan, Itzhak Aviv and Michael Kiperberg
Mathematics 2025, 13(11), 1879; https://doi.org/10.3390/math13111879 - 4 Jun 2025
Viewed by 657
Abstract
The Intel® Trust Domain Extensions (TDX) encrypt guest memory and minimize host interactions to provide hardware-enforced isolation for sensitive virtual machines (VMs). Although the TDX improves security against a malicious hypervisor, software vulnerabilities in the guest OS continue to pose a serious risk. We propose a comprehensive TDX Guest Fuzzing Framework that systematically explores the guest’s code paths handling untrusted inputs. Our method first applies static analysis to identify possible attack surfaces where the guest reads data from the host, then uses a customized coverage-guided fuzzer to target those pathways with random input mutations. To achieve high throughput, we use snapshot-based virtual machine execution, which returns the guest to its pre-interaction state at the end of each fuzz iteration. Using a QEMU/KVM-based TDX emulator and a TDX-enabled Linux kernel, we show how our framework reveals undiscovered vulnerabilities in device initialization procedures, hypercall error handling, and random number seeding logic. We demonstrate that many vulnerabilities arise when developers implicitly rely on hypervisor-supplied values rather than thoroughly verifying them. By connecting theoretical completeness arguments for coverage-guided fuzzing with real-world results on TDX-specific code, this study highlights the urgent need for ongoing, automated testing in confidential computing environments. Our coverage-guided fuzzing campaigns uncovered several memory corruption and concurrency weaknesses in the TDX guest OS, ranging from nested #VE handler deadlocks to buffer overflows in paravirtual device initialization and faulty randomness-seeding logic. Exploiting these vulnerabilities may compromise the TDX’s hardware-based memory isolation or enable denial-of-service attacks. Thus, our results demonstrate that, although the TDX offers a robust hardware barrier, comprehensive input validation and equally stringent software defenses are essential to preserving overall security. Full article
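The snapshot-based, coverage-guided loop described above can be sketched in a few lines: restore the guest to its pre-interaction snapshot, feed a mutated input, and keep any input that reaches new coverage. Every name here (`restore_snapshot`, `run_guest`, `fuzz`, `mutate`) is an illustrative stand-in, not the framework's real API, and the coverage signal is abstracted to a set of edge IDs.

```python
import random

def mutate(data):
    """Flip one random bit of the input (a toy mutation strategy)."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]

def fuzz(restore_snapshot, run_guest, seed, iterations=1000):
    """Coverage-guided loop: the corpus grows only on new coverage."""
    corpus, seen_edges = [seed], set()
    for _ in range(iterations):
        restore_snapshot()            # reset guest to pre-interaction state
        candidate = mutate(random.choice(corpus))
        edges = run_guest(candidate)  # set of edges covered by this input
        if not edges <= seen_edges:   # new coverage -> keep the input
            seen_edges |= edges
            corpus.append(candidate)
    return corpus, seen_edges
```

Restoring the snapshot each iteration is what buys the high throughput the abstract claims: the guest never accumulates state across inputs, so each run is independent and fast to reset.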

29 pages, 937 KiB  
Article
SOE: A Multi-Objective Traffic Scheduling Engine for DDoS Mitigation with Isolation-Aware Optimization
by Mingwei Zhou, Xian Mu and Yanyan Liang
Mathematics 2025, 13(11), 1853; https://doi.org/10.3390/math13111853 - 2 Jun 2025
Viewed by 529
Abstract
Distributed Denial-of-Service (DDoS) attacks generate deceptive, high-volume traffic that bypasses conventional detection mechanisms. When interception fails, effectively allocating mixed benign and malicious traffic under resource constraints becomes a critical challenge. To address this, we propose SchedOpt Engine (SOE), a scheduling framework formulated as a discrete multi-objective optimization problem. The goal is to optimize four conflicting objectives: the benign traffic acceptance rate (BTAR), the malicious traffic interception rate (MTIR), server load balancing, and malicious traffic isolation. These objectives are combined into a composite scalarized loss function with soft constraints that prioritizes BTAR while maintaining flexibility. To solve this problem, we introduce MOFATA, a multi-objective extension of the Fata Morgana Algorithm (FATA), within a Pareto-based evolutionary framework. An ϵ-dominance mechanism is incorporated to improve solution granularity and diversity. Simulations under varying attack intensities and resource constraints validate the effectiveness of SOE. The results show that SOE consistently achieves high BTAR and MTIR while balancing server loads. Under extreme attacks, SOE isolates malicious traffic to a subset of servers, preserving capacity for benign services. SOE also demonstrates strong adaptability in fluctuating attack environments, providing a practical solution for DDoS mitigation. Full article
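A composite scalarized loss with a soft constraint, as the abstract describes, can be sketched as below. The weights, the BTAR floor, and the penalty coefficient are illustrative assumptions, not the paper's actual coefficients; the point is the structure, maximized objectives entering with negative sign and the BTAR priority expressed as a hinge penalty rather than a hard constraint.

```python
# Sketch of a composite scalarized loss over the four SOE objectives.
# Assumptions: btar, mtir, isolation in [0, 1] are maximized;
# load_imbalance >= 0 is minimized; weights and btar_floor are
# hypothetical values chosen for illustration.

def scalarized_loss(btar, mtir, load_imbalance, isolation,
                    weights=(0.4, 0.3, 0.15, 0.15), btar_floor=0.9):
    w1, w2, w3, w4 = weights
    # Maximize BTAR, MTIR, and isolation; minimize load imbalance.
    loss = -w1 * btar - w2 * mtir + w3 * load_imbalance - w4 * isolation
    # Soft constraint: penalize dropping below the BTAR floor without
    # forbidding it, so the problem stays feasible under extreme attacks.
    loss += 10.0 * max(0.0, btar_floor - btar)
    return loss

# A schedule with higher BTAR scores a strictly lower (better) loss.
print(scalarized_loss(0.95, 0.9, 0.1, 0.8) < scalarized_loss(0.7, 0.9, 0.1, 0.8))
```

The hinge term `max(0, floor - btar)` is the standard way to encode a soft constraint in a scalarized objective: it contributes nothing while the constraint holds and grows linearly with the violation.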
(This article belongs to the Section E1: Mathematics and Computer Science)