Search Results (184)

Search Parameters:
Keywords = packet drop

28 pages, 922 KB  
Article
MAESTRO: A Multi-Scale Ensemble Framework with GAN-Based Data Refinement for Robust Malicious Tor Traffic Detection
by Jinbu Geng, Yu Xie, Jun Li, Xuewen Yu and Lei He
Mathematics 2026, 14(3), 551; https://doi.org/10.3390/math14030551 - 3 Feb 2026
Abstract
Malicious Tor traffic data contains deep domain-specific knowledge, which makes labeling challenging, and the lack of labeled data degrades the accuracy of learning-based detectors. Real-world deployments also exhibit severe class imbalance, where malicious traffic constitutes a small minority of network flows, which further reduces detection performance. In addition, Tor’s fixed 512-byte cell architecture removes packet-size diversity that many encrypted-traffic methods rely on, making feature extraction difficult. This paper proposes an efficient three-stage framework, MAESTRO v1.0, for malicious Tor traffic detection. In Stage 1, MAESTRO extracts multi-scale behavioral signatures by fusing temporal, positional, and directional embeddings at cell, direction, and flow granularities to mitigate feature homogeneity; it then compresses these representations with an autoencoder into compact latent features. In Stage 2, MAESTRO introduces an ensemble-based quality quantification method that combines five complementary anomaly detection models to produce robust discriminability scores for adaptive sample weighting, helping the classifier to emphasize high-quality samples. MAESTRO also trains three specialized GANs per minority class and applies strict five-model ensemble validation to synthesize diverse high-fidelity samples, addressing extreme class imbalance. We evaluate MAESTRO under systematic imbalance settings, ranging from the natural distribution to an extreme 1% malicious ratio. On the CCS’22 Tor malware dataset, MAESTRO achieves 92.38% accuracy, 64.79% recall, and 73.70% F1-score under the natural distribution, improving F1-score by up to 15.53% compared with state-of-the-art baselines. Under the 1% malicious setting, MAESTRO maintains 21.1% recall, which is 14.1 percentage points higher than the best baseline, while conventional methods drop below 10%.
(This article belongs to the Special Issue New Advances in Network Security and Data Privacy)
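
The Stage 2 idea of scoring each sample with an ensemble of anomaly detectors and weighting the classifier accordingly can be sketched in a few lines. The five detectors and the min-max averaging rule below are illustrative assumptions, not the paper's published configuration.

```python
# Sketch: ensemble-based sample-quality weighting (detector choice and
# weighting rule are assumptions; the paper's exact Stage 2 recipe differs).
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))  # stand-in for autoencoder latent features

def minmax(s):
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

scores = [
    minmax(IsolationForest(random_state=0).fit(X).score_samples(X)),
    minmax(OneClassSVM(nu=0.05).fit(X).score_samples(X)),
    minmax(EllipticEnvelope(random_state=0).fit(X).score_samples(X)),
]
lof = LocalOutlierFactor(n_neighbors=20).fit(X)
scores.append(minmax(lof.negative_outlier_factor_))  # closer to -1 = more normal
dist, _ = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
scores.append(minmax(-dist[:, 1:].mean(axis=1)))     # kNN distance as 5th detector

quality = np.mean(scores, axis=0)  # ensemble discriminability score per sample
weights = quality / quality.sum()  # adaptive sample weights for the classifier
print(weights[:5])
```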

19 pages, 414 KB  
Article
An Evolutionary Game Theory and Reinforcement Learning-Based Security Protocol for Intermittently Connected Wireless Networks
by Jagdeep Singh, Sanjay K. Dhurandher, Isaac Woungang and Petros Nicopolitidis
Telecom 2026, 7(1), 13; https://doi.org/10.3390/telecom7010013 - 1 Feb 2026
Viewed by 50
Abstract
Intermittently Connected Wireless Networks (ICWNs) are characterized by dynamic node mobility and the absence of persistent end-to-end paths, making them highly susceptible to security threats. This paper proposes a novel secure routing protocol, called the Evolutionary Game Theoretic model with Reinforcement Learning (EGT-RL), designed to provide adaptive and resilient protection against blackhole attacks in such networks. EGT-RL integrates Q-learning for dynamic threat assessment with evolutionary game theory to model and influence node behavior over time. Simulation results, based on both synthetic and real-world mobility traces, show that EGT-RL significantly outperforms three benchmark protocols in delivery ratio, packet drops, end-to-end latency, and communication overhead.
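
As a rough illustration of how Q-learning trust updates and replicator (evolutionary game) dynamics can be combined, consider the toy below. The state encoding, rewards, and payoff values are invented for the example and are not taken from EGT-RL.

```python
# Toy: Q-learning rates a neighbor as trust/avoid from observed forwarding;
# replicator dynamics models how cooperation spreads in the population.
# All states, rewards, and payoffs here are illustrative assumptions.
import random

random.seed(1)
ALPHA, GAMMA = 0.1, 0.9
ACTIONS = ("trust", "avoid")
q = {}

def q_update(state, action, reward, next_state):
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

state = "high_fwd"
for _ in range(500):
    action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
    forwarded = random.random() < (0.9 if state == "high_fwd" else 0.2)
    reward = 1.0 if (action == "trust") == forwarded else -1.0
    next_state = "high_fwd" if forwarded else "low_fwd"
    q_update(state, action, reward, next_state)
    state = next_state

def replicator_step(x, pay_coop, pay_defect, dt=0.01):
    avg = x * pay_coop + (1 - x) * pay_defect  # population-average payoff
    return x + dt * x * (pay_coop - avg)       # cooperators grow if above average

x = 0.5
for _ in range(1000):
    x = replicator_step(x, pay_coop=1.0, pay_defect=0.6)
print(round(x, 3), {k: round(v, 2) for k, v in q.items()})
```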

31 pages, 753 KB  
Article
Event-Triggered Robust Fusion Estimation for Multi-Sensor Systems Under Random Packet Drops
by Shaoxun Lu and Huabo Liu
Signals 2026, 7(1), 9; https://doi.org/10.3390/signals7010009 - 21 Jan 2026
Viewed by 107
Abstract
This paper focuses on the design of robust fusion estimators for multi-sensor systems experiencing constrained communications, model uncertainties, and random packet dropouts. To mitigate the impact of modeling errors, a sensitivity-penalized robust state estimator is employed at each local estimator. At the local fusion estimators, a centralized robust fusion estimation algorithm is derived by improving the cost function of the sensitivity-penalized estimator. The implementation of an event-triggered strategy effectively alleviates the burden on the communication channels linking the sensors and the fusion center. Moreover, the fusion estimator is capable of handling packet drops caused by unreliable communication channels, and the pseudo cross-covariance matrix is accordingly formulated. Sufficient conditions are derived to ensure the uniform boundedness of the estimation error for the proposed robust fusion estimator. Finally, simulation experiments using a tractor-car system validate the performance and advantages of the presented algorithm.
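
The event-triggered transmission side can be illustrated with a simple send-on-delta rule over a lossy link. The threshold, drop rate, and signal model below are assumptions for illustration; the paper's sensitivity-penalized estimator itself is not reproduced.

```python
# Illustrative send-on-delta event trigger with Bernoulli packet drops on the
# sensor-to-fusion-center link (threshold and drop rate are assumptions).
import random

def sensor_stream(n, seed=1):
    random.seed(seed)
    x = 0.0
    for _ in range(n):
        x += random.gauss(0, 0.2)       # simple random-walk state
        yield x + random.gauss(0, 0.1)  # noisy measurement

DELTA, DROP_P = 0.3, 0.2
last_sent, received = None, []
for z in sensor_stream(200):
    if last_sent is None or abs(z - last_sent) > DELTA:  # event trigger fires
        last_sent = z
        if random.random() > DROP_P:    # packet survives the lossy channel
            received.append(z)
print(f"fusion center received {len(received)} of 200 samples")
```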

22 pages, 1021 KB  
Article
A Multiclass Machine Learning Framework for Detecting Routing Attacks in RPL-Based IoT Networks Using a Novel Simulation-Driven Dataset
by Niharika Panda and Supriya Muthuraman
Future Internet 2026, 18(1), 35; https://doi.org/10.3390/fi18010035 - 7 Jan 2026
Viewed by 342
Abstract
The use of resource-constrained Low-Power and Lossy Networks (LLNs), where the IPv6 Routing Protocol for LLNs (RPL) is the de facto routing standard, has increased due to the Internet of Things’ (IoT) explosive growth. Despite its lightweight architecture, RPL remains highly susceptible to routing-layer attacks such as Blackhole, Lowered Rank, version-number manipulation, and Flooding, owing to the dynamic nature of IoT deployments and the lack of in-protocol security. Lightweight, data-driven intrusion detection methods are necessary because traditional cryptographic countermeasures are frequently unfeasible for LLNs. However, the lack of RPL-specific control-plane semantics in current cybersecurity datasets restricts the use of machine learning (ML) for practical anomaly identification. To close this gap, this work creates a novel, large-scale multiclass RPL attack dataset using Contiki-NG’s Cooja simulator, modeling both static and mobile networks under benign and adversarial settings. A protocol-aware feature extraction pipeline is developed to record detailed packet-level and control-plane activity, including DODAG Information Object (DIO), DODAG Information Solicitation (DIS), and Destination Advertisement Object (DAO) message statistics, along with forwarding and dropping patterns and objective-function fluctuations. This dataset is used to evaluate fifteen classifiers, including Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), k-Nearest Neighbors (KNN), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), AdaBoost (AB), and XGBoost (XGB), as well as several ensemble strategies such as soft/hard voting, stacking, and bagging, as part of a comprehensive ML-based detection system. Extensive tests show that ensemble approaches offer better generalization and prediction performance. With overfitting gaps below 0.006 and low cross-validation variance, the Soft Voting Classifier achieves the highest accuracy of 99.47%, closely followed by XGBoost at 99.45% and Random Forest at 99.44%.
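
A minimal version of the best-performing setup, a soft-voting ensemble over control-plane features, might look like the sketch below. The feature columns and synthetic labels are placeholders (the Cooja-derived dataset is not reproduced here), and GradientBoosting stands in for XGBoost to keep the example dependency-free.

```python
# Sketch: soft-voting ensemble over RPL control-plane features.
# Feature names and synthetic data are placeholder assumptions.
import numpy as np
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# columns: dio_count, dis_count, dao_count, fwd_ratio, drop_ratio, rank_changes
X = rng.random((1000, 6))
y = rng.integers(0, 5, size=1000)  # 5 classes: benign + 4 attack types

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft")  # average class probabilities across the ensemble
clf.fit(Xtr, ytr)
print(f"accuracy on synthetic data: {clf.score(Xte, yte):.3f}")
```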

20 pages, 1803 KB  
Article
Adaptive Localization-Free Secure Routing Protocol for Underwater Sensor Networks
by Ayman Alharbi and Saleh Ibrahim
Sensors 2026, 26(1), 17; https://doi.org/10.3390/s26010017 - 19 Dec 2025
Cited by 1 | Viewed by 357
Abstract
Depth-based probabilistic routing (DPR) is an efficient underwater acoustic network (UAN) routing protocol which resists the depth-spoofing attack. DPR’s optimal value of the unqualified forwarding probability depends on the UAN topology, condition, and threat state, which are highly dynamic. If the static forwarding probability used in DPR is set too low for the current state, packet delivery ratio (PDR) drops. If it is set too high, unnecessary forwarding occurs when the network is not under attack, thus wasting valuable energy. In this paper, we propose a novel routing protocol, which uses a feedback mechanism that allows the sink to continuously adapt the unqualified forwarding probability according to the current network state. The protocol aims to achieve an application-controlled desired delivery ratio using one of three proposed update algorithms developed in this work. We analyze the performance of the proposed algorithms through simulation. Results demonstrate that the proposed adaptive routing protocol achieves resilience to depth-spoofing attacks by successfully delivering more than 80% of generated packets in more than 95% of simulated networks, while avoiding unnecessary unqualified forwarding in normal conditions.
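
The sink-side feedback loop can be sketched as a simple threshold controller on the unqualified forwarding probability. The step size and update rule below are illustrative assumptions, since the paper proposes three distinct update algorithms.

```python
# Sketch: nudge the unqualified-forwarding probability p until the measured
# delivery ratio meets the target (update rule and gain are assumptions).
def update_forwarding_prob(p, measured_pdr, target_pdr=0.8,
                           step=0.05, p_min=0.0, p_max=1.0):
    if measured_pdr < target_pdr:
        p += step   # deliveries too low: forward more aggressively
    else:
        p -= step   # target met: save energy by forwarding less
    return min(p_max, max(p_min, p))

p = 0.3
for pdr in [0.55, 0.62, 0.71, 0.79, 0.84, 0.83]:  # per-round sink feedback
    p = update_forwarding_prob(p, pdr)
    print(f"measured PDR {pdr:.2f} -> p = {p:.2f}")
```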

23 pages, 3582 KB  
Article
Compact Onboard Telemetry System for Real-Time Re-Entry Capsule Monitoring
by Nesrine Gaaliche, Christina Georgantopoulou, Ahmed M. Abdelrhman and Raouf Fathallah
Aerospace 2025, 12(12), 1105; https://doi.org/10.3390/aerospace12121105 - 14 Dec 2025
Viewed by 580
Abstract
This paper describes a compact, low-cost telemetry system featuring ready-made sensors and an ESP32-based acquisition unit that uses LoRa/Wi-Fi wireless links for communication, with autonomous fallback logging to guarantee data recovery during communication loss. Ensuring safe atmospheric re-entry requires reliable onboard monitoring of capsule conditions during descent. The system is intended for sub-orbital, low-cost educational capsules and experimental atmospheric descent missions rather than full orbital re-entry at hypersonic speeds, where the environmental loads and communication constraints differ significantly. The novelty of this work is the development of a fully self-contained telemetry system that ensures continuous monitoring and fallback logging without external infrastructure, bridging the gap in compact solutions for CubeSat-scale capsules. In contrast to existing approaches built around UAVs or radar, the proposed design is entirely self-contained, lightweight, and tailored to CubeSat-class and academic missions, where costs and infrastructure are limited. Ground test validation consisted of vertical drop tests, wind tunnel runs, and hardware-in-the-loop simulations. In addition, high-temperature thermal cycling tests were performed to assess system reliability under rapid temperature transitions between −20 °C and +110 °C, confirming stable operation and data integrity under thermal stress. Results showed over 95% real-time packet success with full data recovery in blackout events, while acceleration profiling confirmed resilience to peak decelerations of ~9 g. To complement telemetry, the TeleCapsNet dataset was introduced, enabling CNN recognition of descent states with 87% mean Average Precision and an F1-score of 0.82, which attests to feasibility under constrained computational power. The contribution is twofold: reliable real-time dual-path telemetry with full post-mission recovery, and a scalable platform that explicitly addresses the lack of compact, infrastructure-independent solutions in the existing literature. Results show an independent, cost-effective system that gives small re-entry capsule experimenters reliable data integrity without external infrastructure. Future work will explore deploying AI systems to prolong onboard autonomy and to broaden the applicability of the presented approach to academic and low-resource re-entry investigations.
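
The fallback-logging behavior can be illustrated in a few lines: transmit when the link is up, buffer locally during blackout, and flush the backlog on recovery. The class and link model below are assumptions, not the flight firmware.

```python
# Illustrative fallback logging: radio when up, local buffer during blackout,
# flush on recovery (link model and storage API are assumptions).
from collections import deque

class Telemetry:
    def __init__(self, radio_ok):
        self.radio_ok = radio_ok      # callable: is the LoRa/Wi-Fi link up?
        self.backlog = deque()        # stands in for the SD-card fallback log

    def send(self, packet):
        if self.radio_ok():
            while self.backlog:       # recover blackout data first
                self._transmit(self.backlog.popleft())
            self._transmit(packet)
        else:
            self.backlog.append(packet)  # blackout: log locally

    def _transmit(self, packet):
        print("TX", packet)

link_state = iter([True, False, False, True])  # up, blackout, blackout, up
t = Telemetry(radio_ok=lambda: next(link_state))
for i in range(4):
    t.send({"seq": i, "accel_g": 9.0})
```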

19 pages, 4023 KB  
Article
RL-Based Resource Allocation in SDN-Enabled 6G Networks
by Ivan Radosavljević, Petar D. Bojović and Živko Bojović
Future Internet 2025, 17(11), 497; https://doi.org/10.3390/fi17110497 - 29 Oct 2025
Cited by 4 | Viewed by 1188
Abstract
Dynamic and efficient resource allocation is critical for Software-Defined Networking (SDN) enabled sixth-generation (6G) networks to ensure adaptability and optimized utilization of network resources. This paper proposes a reinforcement learning (RL)-based framework that integrates an actor–critic model with a modular SDN interface for fine-grained, queue-level bandwidth scheduling. The framework further incorporates a stochastic traffic generator for training and a virtualized multi-slice platform testbed for a realistic beyond-5G/6G evaluation. Experimental results show that the proposed RL model significantly outperforms a baseline forecasting model: it converges faster, showing notable improvements after 240 training epochs, achieves higher cumulative rewards, and reduces packet drops under dynamic traffic conditions. Moreover, the RL-based scheduling mechanism exhibits improved adaptability to traffic fluctuations, although both approaches face challenges under node outage conditions. These findings confirm that queue-level reinforcement learning enhances responsiveness and reliability in 6G networks, while also highlighting open challenges in fault-tolerant scheduling.
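
A toy actor-critic-flavored scheduler (REINFORCE with a running-average baseline) conveys the queue-level idea: a softmax policy decides which queue receives bonus bandwidth, and packet drops supply the negative reward. All traffic dynamics and parameters are invented for the example.

```python
# Toy policy-gradient bandwidth scheduler: reward = -(packet drops).
# Demands, allocations, and learning rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
prefs = np.zeros(3)          # actor: preference per queue
baseline = 0.0               # critic: running-average reward
LR_A, LR_C, BONUS = 0.05, 0.05, 3.0
BASE_ALLOC = np.array([3.0, 3.0, 3.0])

for _ in range(3000):
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    a = rng.choice(3, p=probs)                     # grant bonus to one queue
    alloc = BASE_ALLOC.copy()
    alloc[a] += BONUS
    demand = rng.poisson([6.0, 3.0, 1.0])          # stochastic traffic
    reward = -np.maximum(demand - alloc, 0).sum()  # penalize drops
    advantage = reward - baseline
    baseline += LR_C * advantage
    grad = -probs
    grad[a] += 1.0                                 # grad of log softmax at a
    prefs += LR_A * advantage * grad

probs = np.exp(prefs - prefs.max())
print(np.round(probs / probs.sum(), 2))  # mass shifts to the loaded queue
```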

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Cited by 1 | Viewed by 1563
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment.

This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions.
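
An LVRT-style transformation can be sketched as below: impose the 9:1 class imbalance, add interference-like noise, and blank out a fraction of entries. The ratios mirror the abstract, but the exact recipe is an assumption.

```python
# Sketch of "realistic transformation" preprocessing: 9:1 imbalance,
# interference-like noise, and systematic missing data (recipe assumed).
import numpy as np

def lvrt_transform(X, y, majority=0, ratio=9, noise=0.05, missing=0.02, seed=0):
    rng = np.random.default_rng(seed)
    maj = np.flatnonzero(y == majority)
    mino = np.flatnonzero(y != majority)
    keep_min = mino[: max(1, len(maj) // ratio)]  # enforce 9:1 imbalance
    idx = np.concatenate([maj, keep_min])
    Xs, ys = X[idx].copy(), y[idx].copy()
    Xs += rng.normal(0, noise, Xs.shape)          # interference-like noise
    mask = rng.random(Xs.shape) < missing
    Xs[mask] = np.nan                             # systematic missing data
    return Xs, ys

X = np.random.default_rng(1).random((1000, 8))
y = np.random.default_rng(2).integers(0, 5, 1000)
Xs, ys = lvrt_transform(X, y)
print(Xs.shape, round(float(np.isnan(Xs).mean()), 3))
```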

27 pages, 7440 KB  
Article
Buffer with Dropping Function and Correlated Packet Lengths
by Andrzej Chydzinski and Blazej Adamczyk
Appl. Syst. Innov. 2025, 8(5), 135; https://doi.org/10.3390/asi8050135 - 19 Sep 2025
Cited by 1 | Viewed by 859
Abstract
We analyze a model of the packet buffer in which a new packet can be discarded with a probability connected to the buffer occupancy through an arbitrary dropping function. Crucially, it is assumed that packet lengths can be correlated in any way and that the interarrival time has a general distribution. From an engineering perspective, such a model constitutes a generalization of many active buffer management algorithms proposed for Internet routers. From a theoretical perspective, it generalizes a class of finite-buffer models with the tail-drop discarding policy. The contributions include formulae for the distribution of buffer occupancy and the average buffer occupancy, at arbitrary times and also in steady state. The formulae are illustrated with numerical calculations performed for various dropping functions. The formulae are also validated via discrete-event simulations.
(This article belongs to the Section Applied Mathematics)
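
A slot-based toy simulation makes the dropping-function mechanism concrete: an arriving packet is discarded with probability d(n), depending on the current occupancy n, even when space remains. The linear d(n) and the arrival/service probabilities below are arbitrary illustrative choices; the paper allows an arbitrary dropping function and correlated packet lengths, which this toy does not model.

```python
# Toy buffer with a dropping function d(n): arrivals are discarded with
# occupancy-dependent probability (linear d(n) is one illustrative choice).
import random

random.seed(0)
BUF_MAX = 50

def drop_prob(n):
    return min(1.0, n / BUF_MAX)  # d(n): ramps from 0 (empty) to 1 (full)

occupancy, dropped, served = 0, 0, 0
for _ in range(100_000):
    if random.random() < 0.6:     # arrival this slot
        if random.random() < drop_prob(occupancy):
            dropped += 1
        else:
            occupancy += 1
    if occupancy and random.random() < 0.5:  # service completion this slot
        occupancy -= 1
        served += 1
print(f"dropped={dropped}, served={served}, final occupancy={occupancy}")
```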

18 pages, 456 KB  
Article
Machine Learning-Powered IDS for Gray Hole Attack Detection in VANETs
by Juan Antonio Arízaga-Silva, Alejandro Medina Santiago, Mario Espinosa-Tlaxcaltecatl and Carlos Muñiz-Montero
World Electr. Veh. J. 2025, 16(9), 526; https://doi.org/10.3390/wevj16090526 - 18 Sep 2025
Cited by 2 | Viewed by 1047
Abstract
Vehicular Ad Hoc Networks (VANETs) enable critical communication for Intelligent Transportation Systems (ITS) but are vulnerable to cybersecurity threats, such as Gray Hole attacks, where malicious nodes selectively drop packets, compromising network integrity. Traditional detection methods struggle with the intermittent nature of these attacks, necessitating advanced solutions. This study proposes a machine learning-based Intrusion Detection System (IDS) to detect Gray Hole attacks in VANETs. Features were extracted from network traffic simulations in NS-3, a discrete-event network simulator widely used in communication protocol research, and categorized into time-, packet-, and protocol-based attributes. Multiple classifiers, including Random Forest, Support Vector Machine (SVM), Logistic Regression, and Naive Bayes, were evaluated using precision, recall, and F1-score metrics. The Random Forest classifier outperformed the others, achieving an F1-score of 0.9927 with 15 estimators and a depth of 15. In contrast, SVM variants exhibited limitations due to overfitting, with precision and recall below 0.76. Feature analysis highlighted transmission rate and packet/byte counts as the most influential features for detection. The Random Forest-based IDS effectively identifies Gray Hole attacks, offering high accuracy and robustness. This approach addresses a critical gap in VANET security, enhancing resilience against sophisticated threats. Future work could explore hybrid models or real-world deployment to further validate the system’s efficacy.
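
The winning configuration from the abstract (Random Forest with 15 estimators and depth 15) is easy to reproduce on placeholder features. The feature names and synthetic labels below are assumptions, since the NS-3 traces are not available here.

```python
# Sketch: Random Forest IDS with the abstract's hyperparameters, trained on
# placeholder features (columns and labels are synthetic stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# columns: tx_rate, rx_packets, tx_bytes, fwd_delay, drop_ratio
X = rng.random((2000, 5))
y = (X[:, 4] > 0.7).astype(int)   # synthetic "gray hole" label via drop ratio

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=15, max_depth=15, random_state=0)
rf.fit(Xtr, ytr)
print(f"F1 on synthetic data: {f1_score(yte, rf.predict(Xte)):.4f}")
```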

23 pages, 3843 KB  
Article
Leveraging Reconfigurable Massive MIMO Antenna Arrays for Enhanced Wireless Connectivity in Biomedical IoT Applications
by Sunday Enahoro, Sunday Cookey Ekpo, Yasir Al-Yasir and Mfonobong Uko
Sensors 2025, 25(18), 5709; https://doi.org/10.3390/s25185709 - 12 Sep 2025
Viewed by 1343
Abstract
The increasing demand for real-time, energy-efficient, and interference-resilient communication in smart healthcare environments has intensified interest in Biomedical Internet of Things (Bio-IoT) systems. However, ensuring reliable wireless connectivity for wearable and implantable biomedical sensors remains a challenge due to mobility, latency sensitivity, power constraints, and multi-user interference. This paper addresses these issues by proposing a reconfigurable massive multiple-input multiple-output (MIMO) antenna architecture, incorporating hybrid analog–digital beamforming and adaptive signal processing. The methodology combines conventional algorithms—such as Least Mean Square (LMS), Zero-Forcing (ZF), and Minimum Variance Distortionless Response (MVDR)—with a novel mobility-aware beamforming scheme. System-level simulations under realistic channel models (Rayleigh, Rician, 3GPP UMa) evaluate signal-to-interference-plus-noise ratio (SINR), bit error rate (BER), energy efficiency, outage probability, and fairness index across varying user loads and mobility scenarios. Results show that the proposed hybrid beamforming system consistently outperforms benchmarks, achieving up to 35% higher throughput, a 65% reduction in packet drop rate, and sub-10 ms latency even under high-mobility conditions. Beam pattern analysis confirms robust nulling of interference and dynamic lobe steering. This architecture is well-suited for next-generation Bio-IoT deployments in smart hospitals, enabling secure, adaptive, and power-aware connectivity for critical healthcare monitoring applications.
(This article belongs to the Special Issue Challenges and Future Trends in Antenna Technology)
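
Among the conventional algorithms mentioned, MVDR has a particularly compact form, w = R^{-1}a / (a^H R^{-1} a), which passes the look direction with unit gain while minimizing output power. The array geometry, angles, and noise levels below are arbitrary illustrative choices.

```python
# Sketch: MVDR beamformer for a uniform linear array (geometry and angles
# are illustrative assumptions, not the paper's configuration).
import numpy as np

def steering(n_ant, theta_deg, d=0.5):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_ant))

rng = np.random.default_rng(0)
N, snapshots = 8, 500
a_sig = steering(N, 10)                     # desired user at 10 degrees
a_int = steering(N, -40)                    # interferer at -40 degrees
s = rng.normal(size=snapshots) * a_sig[:, None]
i = 3 * rng.normal(size=snapshots) * a_int[:, None]
n = 0.1 * (rng.normal(size=(N, snapshots)) + 1j * rng.normal(size=(N, snapshots)))
X = s + i + n

R = X @ X.conj().T / snapshots              # sample covariance matrix
Rinv = np.linalg.inv(R)
w = Rinv @ a_sig / (a_sig.conj() @ Rinv @ a_sig)  # MVDR weights
print(f"gain toward user: {abs(w.conj() @ a_sig):.2f}, "
      f"toward interferer: {abs(w.conj() @ a_int):.3f}")
```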

19 pages, 2392 KB  
Article
Intelligent Resource Allocation for Immersive VoD Multimedia in NG-EPON and B5G Converged Access Networks
by Razat Kharga, AliAkbar Nikoukar and I-Shyan Hwang
Photonics 2025, 12(6), 528; https://doi.org/10.3390/photonics12060528 - 22 May 2025
Viewed by 979
Abstract
Immersive content streaming services are becoming increasingly popular on video on demand (VoD) platforms due to the growing interest in extended reality (XR) and spatial experiences. Unlike traditional VoD, immersive VoD (IVoD) offers more engaging and interactive content beyond conventional 2D video. IVoD requires substantial bandwidth and minimal latency to deliver its interactive XR experiences. This research examines intelligent resource allocation for IVoD services across NG-EPON and B5G X-haul converged networks. A proposed software-defined networking (SDN) framework employs artificial neural networks (ANN) with a backpropagation technique to predict bandwidth demand based on traffic patterns and network conditions. New immersive video storage, field-programmable gate array (FPGA), Queue Manager, and logical layer components are added to the existing OLT and ONU hardware architecture to implement the SDN framework. The SDN framework manages the entire network, predicts bandwidth requirements, and operates the immersive media dynamic bandwidth allocation (IMS-DBA) algorithm to efficiently allocate bandwidth to IVoD network traffic, ensuring that QoS metrics are met for IM services. Simulation results demonstrate that the proposed framework significantly reduces mean packet delay by up to 3% and packet drop probability by up to 4% as the traffic load varies from light to high across different scenarios, leading to enhanced overall QoS performance.
(This article belongs to the Section Optical Communication and Network)
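
The ANN-with-backpropagation prediction step can be sketched as a small MLP regressor over a sliding window of past load, whose forecast then drives the grant decision. The window size, network size, and headroom rule are assumptions, not the paper's IMS-DBA algorithm.

```python
# Sketch: MLP-based bandwidth prediction feeding a DBA grant decision
# (window size, model size, and headroom rule are assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
load = np.sin(np.linspace(0, 20, 600)) + 0.1 * rng.normal(size=600) + 2.0

WIN = 8
X = np.array([load[i:i + WIN] for i in range(len(load) - WIN)])
y = load[WIN:]                       # target: next-cycle demand

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:500], y[:500])
pred = mlp.predict(X[500:])
grant = np.minimum(pred * 1.1, 4.0)  # grant demand + 10% headroom, capped
print(f"mean abs prediction error: {np.abs(pred - y[500:]).mean():.3f}, "
      f"first grants: {np.round(grant[:3], 2)}")
```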

32 pages, 7616 KB  
Article
ANCHOR-Grid: Authenticating Smart Grid Digital Twins Using Real-World Anchors
by Mohsen Hatami, Qian Qu, Yu Chen, Javad Mohammadi, Erik Blasch and Erika Ardiles-Cruz
Sensors 2025, 25(10), 2969; https://doi.org/10.3390/s25102969 - 8 May 2025
Cited by 5 | Viewed by 2056
Abstract
Integrating digital twins (DTs) into smart grid systems within the Internet of Smart Grid Things (IoSGT) ecosystem brings novel opportunities but also security challenges. Specifically, advanced machine learning (ML)-based Deepfake technologies enable adversaries to create highly realistic yet fraudulent DTs, threatening critical infrastructures’ reliability, safety, and integrity. In this paper, we introduce Authenticating Networked Computerized Handling of Representations for Smart Grid security (ANCHOR-Grid), an innovative authentication framework that leverages Electric Network Frequency (ENF) signals as real-world anchors to secure smart grid DTs at the frontier against Deepfake attacks. By capturing distinctive ENF variations from physical grid components and embedding these environmental fingerprints into their digital counterparts, ANCHOR-Grid provides a robust mechanism to ensure the authenticity and trustworthiness of virtual representations. We conducted comprehensive simulations and experiments within a virtual smart grid environment to evaluate ANCHOR-Grid. We crafted both authentic and Deepfake DTs of grid components, with the latter attempting to mimic legitimate behavior but lacking correct ENF signatures. Our results show that ANCHOR-Grid effectively differentiates between authentic and fraudulent DTs, demonstrating its potential as a reliable security layer for smart grid systems operating in the IoSGT ecosystem. In our virtual smart grid simulations, ANCHOR-Grid achieved a detection rate of 99.8% with only 0.2% false positives for Deepfake DTs at a sparse attack rate (1 forged packet per 500 legitimate packets). At a higher attack frequency (1 forged packet per 50 legitimate packets), it maintained a robust 97.5% detection rate with 1.5% false positives. Against replay attacks, it detected 94% of 5 s-old signatures and 98.5% of 120 s-old signatures. Even with 5% injected noise, detection remained at 96.5% (dropping to 88% at 20% noise), and under network latencies from <5 ms to 200 ms, accuracy ranged from 99.9% down to 95%. These results demonstrate ANCHOR-Grid’s high reliability and practical viability for securing smart grid DTs. These findings highlight the importance of integrating real-world environmental data into authentication processes for critical infrastructure and lay the foundation for future research on leveraging physical world cues to secure digital ecosystems.
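
The anchoring idea can be illustrated by correlating a digital twin's embedded frequency trace against the reference grid trace. The signal model and the 0.9 threshold below are illustrative assumptions, not the paper's pipeline.

```python
# Toy ENF-anchored check: a DT passes if its embedded frequency trace
# correlates with the reference grid trace (signal model is assumed).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600)
enf_ref = 60 + 0.01 * np.cumsum(rng.normal(size=t.size))  # wandering ENF (Hz)

def authentic(trace, ref, threshold=0.9):
    a = (trace - trace.mean()) / (trace.std() + 1e-12)
    b = (ref - ref.mean()) / (ref.std() + 1e-12)
    return float(a @ b) / len(a) >= threshold  # normalized correlation

genuine = enf_ref + 0.001 * rng.normal(size=t.size)  # DT with the real anchor
deepfake = 60 + 0.001 * rng.normal(size=t.size)      # flat forged signature
print("genuine passes: ", authentic(genuine, enf_ref))
print("deepfake passes:", authentic(deepfake, enf_ref))
```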

18 pages, 811 KB  
Article
RL-BMAC: An RL-Based MAC Protocol for Performance Optimization in Wireless Sensor Networks
by Owais Khan, Sana Ullah, Muzammil Khan and Han-Chieh Chao
Information 2025, 16(5), 369; https://doi.org/10.3390/info16050369 - 30 Apr 2025
Cited by 4 | Viewed by 1469
Abstract
Applications of wireless sensor networks have significantly increased in the modern era. These networks operate on a limited power supply in the form of batteries, which are normally difficult to replace on a frequent basis. In wireless sensor networks, sensor nodes alternate between sleep and active states to conserve energy through different methods. Duty cycling is among the most commonly used methods. However, it suffers from problems such as unnecessary idle listening, extra energy consumption, and elevated packet drop rates. A Deep Reinforcement Learning-based B-MAC protocol, called RL-BMAC, is proposed to address these issues. The proposed protocol deploys a deep reinforcement learning agent with fixed hyperparameters to optimize the duty cycling of the nodes. The reinforcement learning agent monitors essential parameters such as energy level, packet drop rate, neighboring nodes’ status, and preamble sampling. The agent stores the information as a representative state and adjusts the duty cycling of all nodes. The performance of RL-BMAC is compared to that of conventional B-MAC through extensive simulations. The results obtained from the simulations indicate that RL-BMAC outperforms B-MAC in terms of throughput by 58.5%, packet drop rate by 44.8%, energy efficiency by 35%, and latency by 26.93%.
(This article belongs to the Special Issue Sensing and Wireless Communications)
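
A tabular Q-learning loop that adapts a node's preamble-sampling interval from the observed drop rate gives the flavor of the approach; the environment model, state discretization, and reward weights below are stand-ins, and the paper uses a deep RL agent over a richer state.

```python
# Toy: Q-learning picks a duty-cycle (check) interval trading energy vs drops.
# Environment, states, and reward weights are illustrative assumptions.
import random

random.seed(0)
ACTIONS = [50, 100, 200]          # candidate check intervals (ms)
Q = {}
EPS, ALPHA, GAMMA = 0.1, 0.2, 0.9

def env_step(interval):
    drop_rate = min(1.0, interval / 400 + random.uniform(0, 0.1))
    energy_cost = 50 / interval   # frequent wake-ups burn more energy
    reward = -(drop_rate + 0.5 * energy_cost)
    return (round(drop_rate, 1),), reward

state = (0.0,)
for _ in range(5000):
    if random.random() < EPS:                     # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
    next_state, reward = env_step(action)
    best_next = max(Q.get((next_state, x), 0.0) for x in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    state = next_state

print("greedy interval:", max(ACTIONS, key=lambda x: Q.get((state, x), 0.0)), "ms")
```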

33 pages, 9824 KB  
Article
An Efficient Framework for Peer Selection in Dynamic P2P Network Using Q Learning with Fuzzy Linear Programming
by Mahalingam Anandaraj, Tahani Albalawi and Mohammad Alkhatib
J. Sens. Actuator Netw. 2025, 14(2), 38; https://doi.org/10.3390/jsan14020038 - 2 Apr 2025
Cited by 3 | Viewed by 2160
Abstract
This paper proposes a new approach that integrates Q-learning into the fuzzy linear programming (FLP) paradigm to improve peer selection in P2P networks. Using Q-learning, the proposed method employs real-time feedback to adjust and update peer selection policies. The FLP framework enriches this process by handling imprecise information through fuzzy logic. It is used to achieve multiple objectives, such as enhancing the throughput rate, reducing delay, and guaranteeing reliable connections. This integration effectively addresses network uncertainty, making the network configuration more stable and flexible. Throughout its operation in the network, the Q-learning agent observes and records various state metrics, including available bandwidth, latency, packet drop rates, and node connectivity. It then selects actions by choosing optimal peers for each node and updating a Q table that maps states and actions based on these performance indices. This reward system guides the agent’s learning, refining its peer selection policy over time. The FLP framework supports the Q-learning agent by providing optimized solutions that balance conflicting objectives under uncertain conditions. Fuzzy parameters capture variability in network metrics, and the FLP model solves a fuzzy linear programming problem, offering guidelines for the Q-learning agent’s decisions. The proposed method is evaluated under different experimental settings to reveal its effectiveness. Simulations using the Erdős–Rényi model show that throughput increased by 21% and latency decreased by 40%. Computational efficiency was also notably improved, with computation times diminishing by up to five orders of magnitude compared to traditional methods.
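
The fuzzy side can be illustrated with triangular memberships over the same metrics the agent observes (bandwidth, latency, drop rate). The membership breakpoints and weights below are invented, and the full FLP solve plus the Q-learning policy are not reproduced.

```python
# Sketch: fuzzy aggregation of peer metrics into a single selection score
# (memberships and weights are illustrative assumptions).
def tri(x, a, b, c):
    """Triangular membership: rises over a->b, falls over b->c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def peer_score(bandwidth_mbps, latency_ms, drop_rate):
    good_bw = tri(bandwidth_mbps, 10, 80, 150)
    low_lat = tri(latency_ms, -1, 0, 120)       # best near 0 ms
    reliable = tri(drop_rate, -0.01, 0.0, 0.2)  # best near zero drops
    return 0.4 * good_bw + 0.3 * low_lat + 0.3 * reliable

peers = {"p1": (60, 30, 0.01), "p2": (120, 90, 0.05), "p3": (20, 15, 0.0)}
best = max(peers, key=lambda p: peer_score(*peers[p]))
print(best, {p: round(peer_score(*v), 3) for p, v in peers.items()})
```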
