Search Results (1,235)

Search Parameters:
Keywords = presentation attack detection

37 pages, 1134 KB  
Article
Class-Specific GAN Augmentation for Imbalanced Intrusion Detection: A Comparative Study Using the UWF-ZeekData22 Dataset
by Asfaw Debelie, Sikha S. Bagui, Dustin Mink and Subhash C. Bagui
Future Internet 2026, 18(4), 200; https://doi.org/10.3390/fi18040200 - 10 Apr 2026
Abstract
Extreme class imbalance is a persistent obstacle for machine learning-driven intrusion detection, as rare but high-impact cyberattacks occur far less frequently than benign traffic in training data. In many real-world cybersecurity datasets, this imbalance becomes extreme, with certain attack types containing a handful of samples, effectively placing the problem in a few-shot learning regime. This paper presents a controlled benchmarking study of Generative Adversarial Network (GAN) objectives for synthesizing minority-class cyberattack data. Using the UWF-ZeekData22 network traffic dataset, each MITRE ATT&CK tactic is framed as a separate binary detection task, and tactic-specific GANs are trained solely on minority samples to generate synthetic attack records. Four widely used GAN variants—Vanilla GAN, Conditional GAN (cGAN), Wasserstein GAN (WGAN), and Wasserstein GAN with Gradient Penalty (WGAN-GP)—are compared under unified training steps and fixed augmentation conditions. The utility of generated data is assessed by evaluating downstream detection performance using five traditional classifiers: Logistic Regression, Support Vector Machine, k-Nearest Neighbors, Decision Tree, and Random Forest. The results indicate that GAN augmentation generally strengthens minority-class detection across tactics and models, reducing false negatives and improving recall consistency, while not systematically harming majority-class performance. However, the effectiveness of each GAN objective varies significantly with data sparsity. Specifically, simpler adversarial objectives often outperform more complex architectures by preserving discriminative feature structure, while heavily regularized models may overly smooth minority-class distributions and reduce separability. Wasserstein-based objectives provide improved training stability, but additional regularization does not consistently translate to better detection performance. 
Overall, the results demonstrate that in extreme-imbalance settings, GAN effectiveness is governed more by data sparsity and structure preservation than by architectural complexity. These findings establish class-specific generative augmentation as a practical strategy for intrusion detection and provide empirical guidance for selecting appropriate GAN objectives for tabular cybersecurity data under highly imbalanced conditions. Full article
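The augmentation protocol the paper benchmarks can be sketched end-to-end. For a self-contained illustration, a Gaussian fitted to minority samples stands in for the trained tactic-specific GAN, and a synthetic imbalanced dataset replaces UWF-ZeekData22, so this shows the plumbing rather than the paper's results:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced binary task; the minority class stands in for one tactic.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.99, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stand-in generator: a Gaussian fitted to minority samples only (the paper
# trains a GAN on exactly this slice; the augmentation plumbing is the same).
minority = X_tr[y_tr == 1]
rng = np.random.default_rng(0)
synthetic = rng.normal(minority.mean(axis=0), minority.std(axis=0) + 1e-6,
                       size=(500, X.shape[1]))

X_aug = np.vstack([X_tr, synthetic])
y_aug = np.concatenate([y_tr, np.ones(len(synthetic), dtype=int)])

for name, (Xf, yf) in {"baseline": (X_tr, y_tr), "augmented": (X_aug, y_aug)}.items():
    clf = RandomForestClassifier(random_state=0).fit(Xf, yf)
    print(name, "minority recall:", recall_score(y_te, clf.predict(X_te)))
```

Swapping the Gaussian sampler for samples drawn from a trained Vanilla GAN, cGAN, WGAN, or WGAN-GP generator reproduces the comparison axis of the study.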

29 pages, 10810 KB  
Article
Malicious Manipulation of the Setpoint in the Temperature Control System of a Heating Process Based on Resistive Electric Heating
by Jarosław Joostberens, Aurelia Rybak, Aleksandra Rybak, Piotr Toś, Artur Kozłowski and Leszek Kasprzyczak
Electronics 2026, 15(8), 1568; https://doi.org/10.3390/electronics15081568 - 9 Apr 2026
Abstract
This article examines the potential for maliciously influencing a control system by interfering with the program code of an industrial controller, using a temperature control system for a heating process based on resistive electric heating as an example. The presented attack scenarios are crucial for the energy efficiency of electric heating systems, which is related to the issue of cybersecurity in the area of energy security. The aim of this research was to demonstrate that a cyberattack involving the malicious manipulation of the setpoint can be carried out in a manner invisible to the heating process operator and be difficult to detect using classical time-domain control quality indicators (time-response specifications). Two manipulation methods are considered. The first involves incorporating proportional elements with mutually inverted gains into the input and output of a closed-loop system. The second method is based on adding an additional transfer function Gm(s) in parallel to the control system. The difference between the correct and manipulated setpoints is introduced into the input, and the output signal is added to the actual (hidden) value of the controlled variable. In the first method, at the moment of starting the control system, there is a difference between the apparent (falsified) value and the ambient temperature. In the second method, the inclusion of the additional Gm(s) ensures that the apparent (falsified) value of the controlled variable matches the temperature at the moment of starting the system. PID control enables achieving satisfactory control quality in heating processes, which are characterized by high inertia and time delays. Compared to classical PID regulation, advanced control methods can, under certain conditions, provide better performance in terms of quality indicators.
However, due to their high computational complexity and sensitivity to model uncertainty—particularly in methods relying on accurate system identification—PID controllers continue to be widely used in industrial practice. For this reason, the present study focuses on a control system based on a PID controller as a practical solution. Based on the results, it was found that the most effective manipulation occurred within the range from 0.9 to 1.1 of the actual setpoint value for both the first and second method, using a model with Tm between 5 s and 30 s. In these cases, the quality indicators referenced to the nominal values, determined for the falsified control system responses to a step change in the setpoint, were as follows: overshoot—0.97 and 1.30 (method 1), and 0.90 and 1.10 (method 2 for 5 s), 0.75 and 1.30 (method 2 for 30 s); settling time—1.06 (method 1), and 0.98 and 1.17 (method 2 for 5 s), 0.85 and 1.14 (method 2 for 30 s). The settling times determined for the system’s response to a disturbance were: 1.00 and 1.15 (method 1), and 1.13 and 1.16 (method 2 for 5 s), 1.12 and 1.02 (method 2 for 30 s). Based on the conducted analysis, it was demonstrated that the relatively simple setpoint manipulation methods presented can effectively mask the impact of malicious interference on the temperature value in the control system of a heating process. Full article
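The first manipulation method lends itself to a compact numerical sketch: the attacker multiplies the setpoint by a gain k inside the loop and divides the measured output by k before display, so the operator's screen shows apparent tracking of the original setpoint while the real process settles at k times that value. The first-order plant and PI gains below are illustrative, not taken from the paper:

```python
# "Method 1" sketch: scale the setpoint by k at the loop input and by 1/k at
# the displayed output. Plant and controller parameters are hypothetical.
k = 1.1            # malicious gain
r = 100.0          # operator-entered setpoint
Kp, Ki, dt = 2.0, 0.5, 0.1
a, b = 0.95, 0.05  # first-order plant: T[n+1] = a*T[n] + b*u[n]
T, integ = 20.0, 0.0
for _ in range(5000):
    e = k * r - T            # the loop regulates toward the falsified setpoint
    integ += e * dt
    u = Kp * e + Ki * integ
    T = a * T + b * u
displayed = T / k            # what the operator's HMI shows
print(f"real temperature ~{T:.1f}, displayed ~{displayed:.1f}")
```

At steady state the integrator drives the loop error to zero, so the real temperature converges to k times the operator's setpoint while the displayed value converges to the setpoint itself, which is exactly the masking effect the article describes.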

35 pages, 3162 KB  
Article
An LLM-Based Agentic Network Traffic Incident-Report Approach Towards Explainable-AI Network Defense
by Chia-Hong Chou, Arjun Sudheer and Younghee Park
J. Sens. Actuator Netw. 2026, 15(2), 32; https://doi.org/10.3390/jsan15020032 - 7 Apr 2026
Abstract
Traditional intrusion detection systems for IoT networks achieve high classification accuracy but lack interpretability and actionable incident-response capabilities, limiting their operational value in security-critical environments. This paper presents a graph-based multi-agent framework that integrates ensemble machine learning with Large Language Model (LLM)-powered incident report generation via Retrieval-Augmented Generation (RAG). The system employs a three-phase architecture: (1) a lightweight Random Forest binary pre-detection, achieving 99.49% accuracy with a 6 MB model size for edge deployment; (2) ensemble classification combining Multi-Layer Perceptron, Random Forest, and XGBoost with soft voting and SHAP-based feature attribution for explainability; and (3) a ReAct-based summary agent that synthesizes classification results with external threat intelligence from Web search and scholarly databases to generate evidence-grounded incident reports. To address the challenge of evaluating non-deterministic LLM outputs, we introduce custom RAG evaluation metrics—faithfulness and groundedness implemented via the LLM-as-Judge framework. Experimental validation on the ACI IoT Network Dataset 2023 demonstrates ensemble accuracy exceeding 99.8% across 11 attack classes; perfect groundedness scores (1.0), indicating all generated claims derive from the retrieved context; and moderate faithfulness (0.64), reflecting appropriate analytical synthesis. The ensemble approach mitigates individual model weaknesses, improving the UDP Flood F1 score from 48% (MLP alone) to 95% through soft voting. This work bridges the gap between high-accuracy detection and trustworthy, actionable security analysis for automated incident-response systems. Full article
(This article belongs to the Special Issue Feature Papers in the Section of Network Security and Privacy)

30 pages, 3687 KB  
Article
Hybrid Framework for Secure Low-Power Data Encryption with Adaptive Payload Compression in Resource-Constrained IoT Systems
by You-Rak Choi, Hwa-Young Jeong and Sangook Moon
Sensors 2026, 26(7), 2253; https://doi.org/10.3390/s26072253 - 6 Apr 2026
Abstract
Resource-constrained IoT systems face a fundamental conflict between cryptographic security and energy efficiency, particularly in critical infrastructure monitoring requiring long-term autonomous operation. This study presents a hybrid framework integrating signal-adaptive compression with hardware-accelerated authenticated encryption to resolve this trade-off. The Dynamic Payload Compression with Selective Encryption framework classifies sensor data into three SNR regimes and applies adaptive compression strategies: 24.15-fold compression for low-SNR backgrounds, 1.77-fold for transitional states, and no compression for high-SNR leak detection events. Experimental validation using 2714 acoustic sensor samples demonstrates 5.91-fold average payload reduction with 100% detection accuracy. The integration with STM32L5 hardware AES acceleration reduces power–data correlation from 0.820 to 0.041, increasing differential power analysis attack complexity from 500 to over 221,000 required traces. Compression-induced timing variance provides additional side-channel masking, burying cryptographic signals beneath a 0.00009 signal-to-noise ratio. Projected on 19,200 mAh lithium thionyl chloride batteries, the system achieves 14-year operational lifetime under realistic duty cycles, exceeding industrial requirements for critical infrastructure protection while maintaining robust security against physical attacks. Full article
(This article belongs to the Section Intelligent Sensors)
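The three-regime switching idea can be sketched as a simple dispatch on estimated SNR. The dB thresholds below are hypothetical; the compression ratios are the averages reported in the abstract:

```python
# Regime thresholds (in dB) are hypothetical; the compression ratios are the
# averages reported for the adaptive compression framework.
def choose_strategy(snr_db, low=5.0, high=15.0):
    if snr_db < low:        # low-SNR background: compress aggressively
        return ("heavy", 24.15)
    if snr_db < high:       # transitional state: mild compression
        return ("mild", 1.77)
    return ("none", 1.0)    # high-SNR leak event: preserve full fidelity

for snr in (2.0, 9.0, 22.0):
    print(snr, choose_strategy(snr))
```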

32 pages, 6150 KB  
Article
A Hybrid Digital-Twin-Based Testbed for Real-Time Manipulation of PROFINET I/O: A Practical Man-in-the-Middle Attack Implementation
by Juan V. Martín-Fraile, Jesús E. Sierra García, Nuño Basurto and Álvaro Herrero
Appl. Sci. 2026, 16(7), 3533; https://doi.org/10.3390/app16073533 - 3 Apr 2026
Abstract
This study presents a practical methodology for executing Man-in-the-Middle (MitM) attacks on industrial control systems that utilize PROFINET I/O—a communication layer that remains largely underexplored in ICS cybersecurity research. A hybrid digital-twin-based testbed is developed by integrating Siemens S7-1500 and S7-1200 PLCs with a process replica implemented in PCSimu, together with a malicious application that modifies specific process data before it is delivered through the PROFINET I/O channel, enabling controlled falsification of process information in real time. The attacker operates through a Modbus TCP control channel while injecting the manipulated values into the 40-byte Real-Time Class 1 (RTC1) cyclic process-data payload while preserving frame integrity and protocol-level validity indicators. Experimental results show that SDU-level modifications on the 2-ms RTC1 cycle produced deterministic and fully reproducible effects on PLC-level behavior, including forced actuator confirmations and falsified process states, demonstrating the feasibility of both DI- and DO-level manipulation scenarios. Network captures and MSSQL-based event logs provide bit-level correlation between the injected SDU modifications and their impact on the automation sequence, confirming the reliability of the proposed manipulation mechanism. The testbed also supports the systematic generation of labeled datasets for training and evaluating machine-learning-based intrusion and anomaly-detection methods, and offers direct applicability to research, education, and operator-training activities in industrial cybersecurity. Overall, the proposed platform offers a secure, reproducible, and practically applicable environment for vulnerability assessment, attack simulation, and the development of detection techniques in industrial PROFINET networks. Full article

28 pages, 1021 KB  
Article
Cost-Aware Network Traffic Anomaly Detection with Histogram-Based Gradient Boosting
by Dariusz Żelasko
Appl. Sci. 2026, 16(7), 3496; https://doi.org/10.3390/app16073496 - 3 Apr 2026
Abstract
Intrusion Detection Systems (IDSs) operate under asymmetric misclassification costs: false alarms (FP) consume analysts’ time and erode trust, whereas missed attacks (FN) carry business risks. This paper presents a complete pipeline for network anomaly detection on the CIC-IDS2017 dataset using Histogram-Based Gradient Boosting (HGB), with a particular focus on cost-aware threshold selection on a validation split for representative operating regimes w_FP:w_FN ∈ {1:1, 1:2, 1:3, 1:4, 1:5, 1:10}—treated as scenario-based proxies for varying risk posture, attack severity, and analyst workload rather than as universally fixed costs—and on the role of isotonic calibration. The results indicate that (i) under 1:1, the cost-optimal operating point aligns with the F1/MCC optimum; (ii) for 1:k cost regimes, the optimum shifts to lower thresholds, reducing FN at the expense of FP and increasing the alert rate; and (iii) isotonic calibration improves PR/ROC (ranking separation), but in the reported 1:5 experiment it did not reduce the final TEST-set operational cost relative to the uncalibrated run, despite using a separately selected post-calibration threshold. The evaluation includes PR/ROC curves, Cost–Threshold and Alert–Threshold sweeps, per-class recall, and permutation importance. In addition, the proposed approach is compared with unsupervised baselines (Isolation Forest, LOF). The results provide practical guidance for SOC decisions on how to choose thresholds consistent with alert budgets and risk profiles. In deployment, these operating points can be indexed to context (e.g., user type, service class, or time of day), yielding a small library of adaptive thresholds rather than one immutable global threshold.
(This article belongs to the Section Computing and Artificial Intelligence)
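At its core, cost-aware threshold selection is a one-dimensional sweep that minimizes the weighted misclassification cost on a validation split. A minimal sketch with toy scores (not CIC-IDS2017 data) follows:

```python
import numpy as np

def cost_optimal_threshold(y_val, scores, w_fp=1.0, w_fn=1.0):
    """Return the threshold minimizing w_fp*FP + w_fn*FN on a validation split."""
    best_t, best_cost = 0.5, float("inf")
    for t in np.unique(scores):
        pred = scores >= t
        fp = np.sum(pred & (y_val == 0))
        fn = np.sum(~pred & (y_val == 1))
        cost = w_fp * fp + w_fn * fn
        if cost < best_cost:
            best_t, best_cost = float(t), cost
    return best_t, best_cost

# Toy validation scores: positives score higher on average.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
s = np.clip(0.4 * y + rng.normal(0.3, 0.2, 1000), 0.0, 1.0)

t1, _ = cost_optimal_threshold(y, s, w_fn=1.0)   # 1:1 regime
t5, _ = cost_optimal_threshold(y, s, w_fn=5.0)   # 1:5 regime: penalize misses
print(f"1:1 threshold: {t1:.3f}   1:5 threshold: {t5:.3f}")
```

A heavier FN weight can never increase the number of missed attacks at the chosen operating point, which is the threshold-shift behavior described in result (ii).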

18 pages, 741 KB  
Review
A Review of Tools and Technologies to Combat Deepfakes
by Dmitry Erokhin and Nadejda Komendantova
Information 2026, 17(4), 347; https://doi.org/10.3390/info17040347 - 3 Apr 2026
Abstract
Deepfakes and adjacent synthetic-media capabilities have become a systemic challenge for information integrity, security, and digital trust. Countermeasures now span passive detection methods that infer manipulation from content traces, active provenance systems that cryptographically bind metadata to media, and watermarking approaches that embed detectable signals into content or generative processes. This review presents a rigorous synthesis of tools and technologies to combat deepfakes across modalities (image, video, audio, and selected multimodal settings), drawing primarily from the peer-reviewed literature, standardized benchmarks, and official technical specifications and reports. The review analyzes detection methods, provenance and authentication technologies, with emphasis on cryptographic manifests and threat models, watermarking and content provenance, including diffusion-era watermarking and industrial deployments, adversarial robustness and attacker adaptation, datasets and benchmarks, evaluation metrics across tasks, and deployment and scalability constraints. A dedicated section addresses legal, ethical, and policy issues, focusing on emerging transparency obligations and platform governance. The review finds that no single countermeasure is sufficient in realistic adversarial settings. The strongest practical approach is a layered defense that combines provenance, watermarking, content-based detection, and human oversight. The study concludes with limitations of the current evidence base and prioritized research directions to improve generalization, interoperability, and trustworthy user experiences. Full article
(This article belongs to the Special Issue Surveys in Information Systems and Applications)

19 pages, 712 KB  
Article
Federated Learning-Driven Protection Against Adversarial Agents in a ROS2 Powered Edge-Device Swarm Environment
by Brenden Preiss and George Pappas
AI 2026, 7(4), 127; https://doi.org/10.3390/ai7040127 - 1 Apr 2026
Abstract
Federated learning (FL) enables collaborative model training across distributed devices and robotic systems while preserving data privacy, making it well-suited for swarm robotics and edge-device-powered intelligence. However, FL remains vulnerable to adversarial behaviors such as data and model poisoning, particularly in real-world deployments where detection methods must operate under strict computational and communication constraints. This paper presents a practical, real-world federated learning framework that enhances robustness to adversarial agents in a ROS2-based edge-device swarm environment. The proposed system integrates the Federated Averaging (FedAvg) algorithm with a lightweight average cosine similarity-based filtering method to detect and suppress harmful model updates during aggregation. Unlike prior work that primarily evaluates poisoning defenses in simulated environments, this framework is implemented and evaluated on physical hardware, consisting of a laptop-based aggregator and multiple Raspberry Pi worker nodes. A convolutional neural network (CNN) based on the MobileNetV3-Small architecture is trained on the MNIST dataset, with one worker executing a sign-flipping model poisoning attack. Experimental results show that FedAvg alone fails to maintain meaningful model accuracy under adversarial conditions, resulting in near-random classification performance with a final global model accuracy of 11% and a loss of 2.3. In contrast, the integration of cosine similarity filtering demonstrates effective detection of sign-flipping model poisoning in the evaluated ROS2 swarm experiment, allowing the global model to maintain an accuracy of around 90% and a loss of around 0.37 despite the presence of an attacker, close to the 93% baseline accuracy of FedAvg alone under no attack and with only a minimal increase in loss.
The proposed method also maintains a false positive rate (FPR) of around 0.01 and a false negative rate (FNR) of around 0.10 of the global model in the presence of an attacker, which is a minimal difference from the baseline FedAvg-only results of around 0.008 for FPR and 0.07 for FNR. Additionally, the proposed method of FedAvg + cosine similarity filtering maintains computational statistics similar to baseline FedAvg with no attacker. Baseline results show an average runtime of about 34 min, while our proposed method shows an average runtime of about 35 min. Also, the average size of the global model being shared among workers remains consistent at around 7.15 megabytes, showing little to no increase in message payload sizes between baseline results and our proposed method. These results demonstrate that computationally lightweight cosine similarity-based detection methods can be effectively deployed in real-world, resource-constrained robotic swarm environments, providing a practical path toward improving robustness in real-world federated learning deployments beyond simulation-based evaluation. Full article
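The average-cosine-similarity filter described above can be sketched in a few lines. The updates below are toy vectors, with one sign-flipped update playing the attacker:

```python
import numpy as np

def filter_updates(updates, threshold=0.0):
    """FedAvg over updates whose mean cosine similarity to the other
    updates is at least `threshold`; sign-flipped updates score near -1."""
    kept = []
    for i, u in enumerate(updates):
        sims = [np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                for j, v in enumerate(updates) if j != i]
        if np.mean(sims) >= threshold:
            kept.append(u)
    return np.mean(kept, axis=0)

rng = np.random.default_rng(0)
honest = [np.array([1.0, 2.0, -1.0]) + 0.05 * rng.normal(size=3) for _ in range(3)]
poisoned = -honest[0]                      # sign-flipping attacker
agg = filter_updates(honest + [poisoned])  # the attacker is filtered out
print(agg)
```

Because the attacker's update points in roughly the opposite direction from every honest update, its mean similarity is near -1 and it is excluded from aggregation, while the honest updates keep mean similarities well above zero.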

30 pages, 721 KB  
Review
A Review of Honeypots: Fingerprinting Techniques, Detection, and Evasion Mechanisms
by Arooj Chaudhry, Casper Andersen, Gaurav Choudhary and Nicola Dragoni
Future Internet 2026, 18(4), 190; https://doi.org/10.3390/fi18040190 - 1 Apr 2026
Abstract
Honeypot fingerprinting poses a significant threat in cybersecurity, as attackers who are able to identify honeypot systems can successfully evade them, thereby greatly reducing their overall effectiveness as defensive and intelligence-gathering tools. Over the years, numerous studies have proposed a variety of analytical techniques and countermeasures to minimize honeypot fingerprinting and improve honeypot stealth. This paper presents a comprehensive examination of the methods and strategies that attackers employ to detect and fingerprint honeypot systems, including behavioural, network-based, and system-level indicators. In addition, this paper analyzes common vulnerabilities inherent in both low-interaction and high-interaction honeypots that facilitate successful fingerprinting. Existing anti-detection and obfuscation techniques are evaluated for their effectiveness and limitations. Specifically, this paper offers a structured analysis of honeypot fingerprinting techniques, examines attackers’ probing strategies, evaluates the most vulnerable protocol artifacts, and outlines mitigation strategies to reduce the likelihood of honeypot detection. Finally, this paper discusses how emerging technologies and increasingly complex computing environments, such as cloud infrastructure and virtualization, impact honeypot deployment, and it highlights open challenges and promising future research directions in the field of honeypot anti-fingerprinting. Full article

41 pages, 4416 KB  
Article
A Novel Approach to Sybil Attack Detection in VANETs Using Verifiable Delay Functions and Hierarchical Fog-Cloud Architecture
by Habiba Hadri, Mourad Ouadou and Khalid Minaoui
J. Cybersecur. Priv. 2026, 6(2), 59; https://doi.org/10.3390/jcp6020059 - 1 Apr 2026
Abstract
Vehicular Ad Hoc Networks (VANETs) have become foundational to intelligent transportation systems, opening new possibilities for road safety and traffic efficiency. However, these networks remain susceptible to Sybil attacks, in which malicious entities create multiple fake identities to gain disproportionate influence. This paper puts forth a new Sybil attack detection framework that combines Verifiable Delay Functions (VDFs) with a hierarchical fog-cloud computing structure. The method relies only on the standard properties of VDFs, using them to prove identity uniqueness computationally, and deploys purposefully placed fog nodes for effective localized detection. We mathematically formulate a multi-layered detection algorithm that processes interactions between vehicles across fog and cloud layers to produce suspicion scores from spatiotemporal consistency and VDF challenge-response patterns. Security analysis demonstrates resistance to a range of Sybil attack variants, with detection rates above 97.8% and false positives below 2.3%. Incorporating machine learning techniques further extends detection capability, and the hybrid VDF-ML method adapts better to changing attack patterns. Implementation details and extensive simulations across varied traffic situations demonstrate the feasibility and efficiency of the proposed solution for securing VANET communications.
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)

23 pages, 1208 KB  
Article
NeSySwarm-IDS: End-to-End Differentiable Neuro-Symbolic Logic for Privacy-Preserving Intrusion Detection in UAV Swarms
by Gang Yang, Lin Ni, Tao Xia, Qinfang Shi and Jiajian Li
Appl. Sci. 2026, 16(7), 3204; https://doi.org/10.3390/app16073204 - 26 Mar 2026
Abstract
Unmanned Aerial Vehicle (UAV) swarms operating in contested environments face a critical “semantic gap” between raw, high-velocity network traffic and high-level mission security constraints, compounded by the risk of privacy leakage during collaborative learning. Existing deep learning (DL)-based Network Intrusion Detection Systems (NIDSs) suffer from opacity, prohibitive resource consumption, and vulnerability to gradient leakage attacks in federated settings, while traditional rule-based systems fail to handle encrypted payloads and evolving attack patterns. To bridge this gap, we present NeSySwarm-IDS (Neuro-Symbolic Swarm Intrusion Detection System), an end-to-end differentiable neuro-symbolic framework that simultaneously achieves high accuracy, strong privacy guarantees, and built-in interpretability under resource constraints. NeSySwarm-IDS integrates an extremely lightweight 1D convolutional neural network with a differentiable Łukasiewicz fuzzy logic reasoner incorporating attack-specific rules. By aggregating only low-dimensional logic rule weights with calibrated differential privacy noise, we drastically reduce communication overhead while providing (ϵ,δ)-DP guarantees with negligible utility loss. Extensive experiments on the UAV-NIDD dataset and our self-collected dataset demonstrate that NeSySwarm-IDS achieves near-perfect detection accuracy, significantly outperforming traditional machine learning baselines despite using limited training data. A detailed case study on GPS spoofing confirms the interpretability of our approach, providing axiomatic explanations suitable for autonomous mission verification. These results establish that end-to-end neuro-symbolic learning can effectively bridge the semantic gap in UAV swarm security while ensuring privacy and interpretability, offering a practical pathway for deploying trustworthy AI in contested environments. Full article
(This article belongs to the Special Issue Cyberspace Security Technology in Computer Science)
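The differentiable reasoner builds on the standard Łukasiewicz connectives; a minimal sketch with an illustrative (not paper-specified) GPS-spoofing rule:

```python
# Łukasiewicz connectives: piecewise-linear, hence differentiable almost
# everywhere, which is what allows rule weights to be trained end-to-end.
def l_and(a, b):     return max(0.0, a + b - 1.0)   # t-norm (AND)
def l_or(a, b):      return min(1.0, a + b)         # t-conorm (OR)
def l_not(a):        return 1.0 - a
def l_implies(a, b): return min(1.0, 1.0 - a + b)

# Illustrative rule (not from the paper):
# IF sudden_position_jump AND clock_drift THEN gps_spoofing
jump, drift = 0.9, 0.8                # fuzzified feature activations
print(f"rule activation: {l_and(jump, drift):.2f}")  # prints 0.70
```

In the full framework these scalar operations run over tensors of rule weights, so the same algebra serves both inference and gradient-based training.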

19 pages, 492 KB  
Article
Human-Executable Algorithms for Phishing Avoidance
by Paul A. Gagniuc, Ana Apetroaiei, Marius Claudiu Langa, Adriana Nicoleta Lazar, Ionut Marius Bulgaru, Maria-Iuliana Dascalu and Ionel-Bujorel Pavaloiu
Algorithms 2026, 19(4), 250; https://doi.org/10.3390/a19040250 - 25 Mar 2026
Abstract
Phishing attacks remain effective because they exploit human decisions at the moment of action, often before automated defenses intervene. Established countermeasures focus on detection systems or awareness campaigns but rarely provide non-expert users with a formally specified decision procedure. This work presents a lightweight, deterministic phishing avoidance algorithm that users can execute without specialized tools. The algorithm evaluates a finite set of observable indicators and applies a monotonic risk score to produce allow, caution, or block decisions. Formal properties of the procedure include monotonicity, bounded complexity, and decision traceability. A controlled study with 96 participants and 72 messages per participant showed that algorithm use increased mean classification accuracy from 68.4% to 84.7% and reduced the false-negative rate from 31.9% to 11.3%. Median decision time rose from 6.2 s to 8.7 s. These results show that phishing avoidance can be expressed as a human-executable algorithm rather than as advisory guidance, and that structured decision rules can measurably improve user level security outcomes. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
26 pages, 2242 KB  
Article
A Multi-Source Feedback-Driven Framework for Generating WAF Test Cases
by Pengcheng Lu, Xiaofeng Zhong, Wenbo Xu and Yongjie Wang
Future Internet 2026, 18(3), 167; https://doi.org/10.3390/fi18030167 - 20 Mar 2026
Viewed by 240
Abstract
Web application firewalls (WAFs) are critical defenses against persistent threats to web applications, yet their security evaluation remains challenging. Traditional manual testing methods are often inefficient and resource-intensive, while existing reinforcement learning (RL)-based automated approaches face two key limitations: (1) attackers cannot perceive opaque WAF rule logic; (2) Boolean feedback from WAFs results in sparse, delayed rewards: sparse rewards trap agents in blind exploration, and delayed rewards hinder the association between early actions and final outcomes, adversely affecting learning efficiency. To address these challenges, we propose Ouroboros, a framework integrating genetic algorithm-based symbolic rule reconstruction (translating WAF rules into interpretable RNNs for fine-grained confidence scoring), timing side-channel analysis (evaluating rule-matching depth), and a multi-tiered reward mechanism to enable self-evolving RL testing. Experiments show that the framework achieves an 89.2% bypass success rate on signature-based WAFs. This paper presents an efficient solution for automated WAF testing and delivers insights for optimizing rule logic and anomaly detection mechanisms. Full article
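The multi-tiered reward idea, replacing the WAF's sparse Boolean verdict with a dense signal, can be sketched as below. The weighting coefficients and signal ranges are our assumptions for illustration; the paper's actual reward shaping may differ.

```python
# Illustrative multi-tiered reward combining three signals the abstract
# names: the terminal bypass outcome, a confidence score from the
# reconstructed surrogate rule model, and a rule-matching-depth estimate
# derived from response timing. Coefficients are assumed, not the paper's.

def tiered_reward(bypassed: bool, confidence: float, depth_frac: float) -> float:
    """Dense reward in place of sparse Boolean WAF feedback.

    bypassed:   did the payload pass the WAF (terminal signal)?
    confidence: surrogate model's block confidence in [0, 1]
                (lower means the payload is closer to a bypass).
    depth_frac: fraction of the rule chain matched, in [0, 1],
                estimated from timing side-channel measurements.
    """
    if bypassed:
        return 1.0                   # full reward on a confirmed bypass
    shaping = 1.0 - confidence       # reward for evading the surrogate
    progress = depth_frac            # reward for reaching deeper rules
    return 0.5 * shaping + 0.3 * progress - 0.1
```

Blocked payloads that nonetheless lower the surrogate's confidence or probe deeper into the rule chain still earn partial reward, which is what keeps the agent out of the blind-exploration regime the abstract describes.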
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Viewed by 408
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems. Full article
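The hybrid key-establishment step, combining a post-quantum KEM secret with a legacy-compatible transport secret into one AES session key, can be sketched as an HKDF-style derivation (RFC 5869). The two input secrets below are stand-in byte strings: in a real deployment they would come from Kyber encapsulation and from the classical (e.g., ECDH or RSA) exchange the abstract mentions, and the salt/info labels are our own.

```python
import hashlib
import hmac

# HKDF-style extract-and-expand over SHA-256 (per RFC 5869). Mixing both
# shared secrets into one key keeps the session safe if either primitive
# is broken: an attacker must recover BOTH the post-quantum and the
# classical secret to derive the session key.

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(pq_secret: bytes, classical_secret: bytes) -> bytes:
    """Derive one AES-256 session key from both shared secrets."""
    prk = hkdf_extract(b"hospital-6g-salt", pq_secret + classical_secret)
    return hkdf_expand(prk, b"command-channel", 32)
```

The derived 32-byte key would then feed the hybrid AES payload protection; the derivation itself is primitive-agnostic, which is what makes the legacy-compatible transport path possible.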
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

21 pages, 1176 KB  
Article
FedLTN-CubeSat: Neuro-Symbolic Federated Learning for Intrusion Detection in LEO CubeSat Constellations
by Gang Yang, Lin Ni, Junfeng Geng and Xiang Peng
Mathematics 2026, 14(6), 1047; https://doi.org/10.3390/math14061047 - 20 Mar 2026
Cited by 1 | Viewed by 266
Abstract
Low Earth Orbit (LEO) mega-constellations are becoming the backbone of global communications, yet their cybersecurity remains critically under-addressed. Intrusion detection systems (IDSs) for such constellations face a unique trilemma of accuracy, efficiency, and interpretability under extreme SWaP-C (size, weight, power, and cost) constraints. We present FedLTN-CubeSat (FedLTN refers to Federated Logic Tensor Networks), a neuro-symbolic federated learning framework for intrusion detection in LEO CubeSat constellations. The framework first employs a lightweight spatio-temporal separable perception encoder to efficiently extract features from telemetry and IQ data, designed to operate within the computational budgets of resource-constrained on-board processors. These features feed into a differentiable first-order logic layer based on Logic Tensor Networks, which incorporates domain knowledge as logical axioms to guide learning and enhance interpretability. To enable collaborative learning across a constellation, FedLTN-CubeSat introduces an intra-orbit symbolic federated learning mechanism that aggregates only the logic-layer parameters via inter-satellite links, drastically reducing communication overhead while preserving data privacy. Furthermore, an orbit-adaptive predicate migration module transfers learned rules across different orbital configurations with minimal supervision, facilitating rapid deployment. We evaluate on two benchmarks: the CuCD-ID dataset (NASA NOS3 telemetry) and the STIN dataset (satellite-terrestrial integrated networks). FedLTN-CubeSat achieves 0.98 F1-score on CuCD-ID and 0.96 accuracy on STIN, significantly outperforming prior federated learning baselines (7% improvement) while incurring a minimal daily communication load per satellite. The framework also outputs interpretable decision traces grounded in logical axioms, enabling operators to understand and validate detections. Logical constraints improve detection of unseen attack variants by 25% over pure neural baselines. Full article
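The communication-saving idea, aggregating only the low-dimensional logic-layer parameters instead of full model weights, amounts to federated averaging restricted to the rule-weight vectors. The sketch below uses a plain sample-weighted average as a simplification; the function names are ours, and the paper's aggregation may differ in detail.

```python
# Minimal sketch of intra-orbit aggregation: each satellite contributes
# only its logic-rule weight vector (a few floats, not megabytes of
# network weights), and the orbit aggregates them FedAvg-style, weighted
# by each satellite's local sample count.

def aggregate_logic_weights(client_weights: list[list[float]],
                            sample_counts: list[int]) -> list[float]:
    """Sample-weighted average of per-satellite rule-weight vectors."""
    total = sum(sample_counts)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, sample_counts):
        for j in range(dim):
            agg[j] += (n / total) * w[j]
    return agg
```

Because only these few rule weights cross the inter-satellite links, the per-round payload stays tiny regardless of encoder size, which is what keeps the daily communication load per satellite minimal while raw telemetry never leaves the spacecraft.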
(This article belongs to the Special Issue New Advances in Network Security and Data Privacy)