Search Results (9,401)

Search Parameters:
Keywords = wireless networks

16 pages, 2956 KB  
Article
Fiber-Tethered UAV-Enabled Adaptive Aerial Optical Access Networks and Ground-to-Air-to-Ground Optical Bridging
by Ji-Yung Lee, Jae Seong Hwang, Gyeongcheol Shin, Byungju Lee, Kyungkoo Jun, Hyunbum Kim, Sujan Rajbhandari and Hyunchae Chun
Drones 2026, 10(4), 236; https://doi.org/10.3390/drones10040236 - 25 Mar 2026
Abstract
This work proposes a fiber-tethered UAV-enabled adaptive aerial passive optical network (AA-PON) framework for rapid extension of optical access and backhaul in hard-to-reach or temporarily disrupted environments. The proposed architecture supports two distinct operating modes: (i) an aerial base station (ABS) mode for wide-area service extension and (ii) a ground-to-air-to-ground (G2A2G) mode for targeted high-speed optical bridging to ground terminal units. Unlike conventional UAV relay approaches, the proposed framework is developed as a network-level optical access/backhaul architecture based on tether-assisted aerial nodes and reconfigurable optical topology formation. In the ABS mode, representative Bus, Ring, and Star topologies are analyzed to evaluate serviceability, outage, deployment latency, and scalability as the number of UAV nodes increases. In the G2A2G mode, a stochastic-geometry-based analysis is used to characterize blockage-limited optical serviceability and infrastructure-density trade-offs. To complement the analytical study, a 2 Gb/s proof-of-concept FSO link between two fiber-tethered UAVs is demonstrated as an initial feasibility validation of the end-to-end optical link. The results show that the proposed AA-PON provides a flexible aerial optical networking framework that combines reconfigurable topology support with localized high-capacity optical access extension. Full article
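The G2A2G analysis above is stochastic-geometric; the abstract does not give the exact model, but the flavor of a blockage-limited line-of-sight (LoS) serviceability calculation can be sketched with a 2-D Poisson Boolean model. All parameter values below are illustrative assumptions, not the authors' figures:

```python
import math, random

def los_probability(lam, d, r):
    # Closed-form LoS probability under a 2-D Poisson Boolean model: a blocker
    # of radius r blocks the link iff its centre lies within distance r of the
    # ground-projected segment of length d (an area of 2*r*d + pi*r^2).
    return math.exp(-lam * (2 * r * d + math.pi * r * r))

def _poisson(rng, mean):
    # Knuth's method; fine for the small means used here
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def _seg_dist(px, py, ax, ay, bx, by):
    # Distance from point P to segment AB
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def los_probability_mc(lam, d, r, trials=2000, seed=1):
    # Monte Carlo check: scatter Poisson blockers in a window whose margin is
    # wide enough that no blocker outside it could reach the segment.
    rng = random.Random(seed)
    m = 5 * r
    w, h = d + 2 * m, 2 * m
    ax, ay, bx, by = m, m, m + d, m
    clear = 0
    for _ in range(trials):
        n = _poisson(rng, lam * w * h)
        if all(_seg_dist(rng.uniform(0, w), rng.uniform(0, h), ax, ay, bx, by) > r
               for _ in range(n)):
            clear += 1
    return clear / trials
```

The closed form and the simulation agree to within Monte Carlo noise, which is the usual sanity check before sweeping infrastructure density.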
29 pages, 707 KB  
Article
Symmetrical User Fairness in Asymmetric Indoor Channels: A Max–Min Framework for Joint Discrete RIS Partitioning and Power Allocation in NOMA Systems
by Periyakarupan Gurusamy Sivabalan Velmurugan, Vinoth Babu Kumaravelu, Arthi Murugadass, Agbotiname Lucky Imoize, Samarendra Nath Sur and Francisco R. Castillo Soria
Symmetry 2026, 18(4), 563; https://doi.org/10.3390/sym18040563 - 25 Mar 2026
Abstract
Reconfigurable intelligent surface (RIS)-assisted non-orthogonal multiple access (NOMA) has emerged as a promising technique to enhance spectral efficiency and coverage in fifth- and sixth-generation wireless networks. However, asymmetric indoor propagation conditions characterized by heterogeneous line-of-sight (LoS) and non-line-of-sight (NLoS) links often degrade user fairness. This paper investigates a downlink RIS-assisted NOMA system under the standardized 3GPP indoor office (InH) channel model to address fairness-oriented design under realistic link-budget constraints. We formulate an optimization problem for max–min fairness that jointly considers discrete RIS element partitioning and NOMA power allocation to achieve a symmetrical allocation of quality of service (QoS). To enable efficient computation, the non-convex problem is transformed into an epigraph form and solved using a low-complexity, bisection-based quasi-convex optimization framework combined with enumeration over RIS partitions. Numerical results demonstrate significant fairness gains; for instance, doubling the RIS array size yields a substantial improvement in the ergodic max–min rate, corresponding to approximately a 66% gain at moderate transmit power levels. Furthermore, by accounting for practical impairments such as imperfect successive interference cancellation (iSIC), imperfect channel state information (iCSI), and RIS implementation losses, the results reveal that fairness-optimal operation consistently prioritizes the far user to overcome severe indoor NLoS attenuation. The proposed framework is also compared with alternating optimization (AO)-based RIS-NOMA, conventional RIS beamforming without partition and RIS-assisted orthogonal multiple access (OMA) schemes. Simulation results confirm that the proposed framework achieves low computational complexity, making it suitable for practical indoor wireless environments. Full article
(This article belongs to the Special Issue Wireless Communications and Symmetries)
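The bisection-based quasi-convex solver described above can be illustrated for the simplest case: one two-user downlink NOMA pair with perfect SIC, bisecting on the common rate target and testing feasibility of a power split at each step. The rate expressions are textbook forms, not the paper's exact formulation; channel gains and noise power below are assumed values:

```python
import math

def rates(p_tot, a_near, g_near, g_far, noise=1.0):
    # Achievable rates (bit/s/Hz) for a 2-user downlink NOMA pair: the far
    # user treats the near user's signal as interference; the near user
    # cancels the far signal first (perfect SIC assumed).
    a_far = 1.0 - a_near
    r_far = math.log2(1 + p_tot * a_far * g_far / (p_tot * a_near * g_far + noise))
    r_near = math.log2(1 + p_tot * a_near * g_near / noise)
    return r_near, r_far

def max_min_rate(p_tot, g_near, g_far, noise=1.0, iters=50):
    # Bisection on the common target t: for each t, check whether some power
    # split a_near satisfies both rate constraints (a quasi-convex test).
    def feasible(t):
        gam = 2.0 ** t - 1.0
        lo = gam * noise / (p_tot * g_near)                       # near-user bound
        hi = (p_tot * g_far - gam * noise) / (p_tot * g_far * (1.0 + gam))
        return lo <= hi and 0.0 <= lo and hi <= 1.0
    t_lo, t_hi = 0.0, math.log2(1 + p_tot * max(g_near, g_far) / noise)
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if feasible(t_mid):
            t_lo = t_mid
        else:
            t_hi = t_mid
    return t_lo
```

At the returned target, setting the near-user split to its lower bound makes the near-user rate exactly meet the target while the far user still clears it, which is the max–min balance the paper optimizes (jointly with discrete RIS partitioning).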
32 pages, 8447 KB  
Review
Advances and Opportunities in NIR-II Endoscopy: From Diagnosis to Therapeutic Applications
by Jing Luo, Xiaofan Du, Sijia Wang, Cuiping Yao and Jing Wang
Diagnostics 2026, 16(7), 986; https://doi.org/10.3390/diagnostics16070986 - 25 Mar 2026
Abstract
Endoscopy refers to the minimally invasive optical visualization and examination of internal structures within the body. Its significance lies in diagnosing intraluminal tissue abnormalities and assisting therapeutics, especially in the gastrointestinal tract. However, conventional optical endoscopes are limited by their insufficient penetration depth. Although endoscopic ultrasound achieves deeper penetration of up to 10 cm, it suffers from compromised spatial resolution. Recent advances have expanded the role of endoscopy from basic mucosal visualization to precision diagnostics, therapeutic assistance, and even intelligent, remote-assisted procedures. An emerging modality, second near-infrared window (NIR-II, 1000–1700 nm) endoscopy, offers deep tissue penetration, reduced scattering, and a high signal-to-noise ratio. This review discusses the clinical requirements of endoscopy across screening, diagnostics and therapeutics. It provides a comparative assessment of current methodologies, and a particular focus is placed on discussing the promising developments in NIR-II endoscopy. Furthermore, we investigate the transformative potential of integrating artificial intelligence and fifth-generation wireless networks into endoscopic practice. The continued evolution and clinical translation of these technologies, particularly NIR-II endoscopy, hold the promise to fundamentally enhance precision medicine in gastroenterology. Full article
(This article belongs to the Special Issue Advances in Gastrointestinal Endoscopy: From Diagnosis to Therapy)
19 pages, 3061 KB  
Article
Enhanced Absorption Dominated Electromagnetic Interference Shielding Enabled by Carbon Nanotube and Graphene Reinforced Electrospun PVDF Nanocomposite
by Hisham Bamufleh, Usman Saeed, Abdulrahim Alzahrani, Aqeel Ahmad Taimoor, Sami-ullah Rather, Hesham Alhumade, Walid M. Alalayah and Hamad AlTuraif
Polymers 2026, 18(7), 789; https://doi.org/10.3390/polym18070789 - 25 Mar 2026
Abstract
The increasing density of wireless and wearable electronic devices necessitates the development of lightweight, flexible, and absorption-dominated electromagnetic interference (EMI) shielding materials. In this study, electrospun poly(vinylidene fluoride) (PVDF) composite mats reinforced with carbon nanotubes (CNTs) and graphene nanosheets at low filler loadings (1–3 wt.%) were fabricated and systematically investigated for X-band (8.0–12.5 GHz) EMI shielding performance. Raman, FTIR, and thermal analyses confirm enhanced electroactive β-phase formation and improved thermal stability upon nanofiller incorporation. The formation of interconnected conductive networks within the electrospun fibrous architecture leads to a significant increase in electrical conductivity from 10−7 S·cm−1 for pure PVDF to 10−2 S·cm−1 and 10−1 S·cm−1 for CNT/PVDF and Graphene/PVDF composites, respectively, at 3 wt.% loading. Consequently, the total EMI shielding effectiveness (SET) increases from 2.5 dB for pure PVDF to 40 dB for CNT/PVDF and 42 dB for graphene/PVDF composites at 3 wt.%. The shielding effectiveness arising from absorption (SEA) dominates the overall EMI shielding performance, contributing more than 85% of the total shielding effectiveness (SET), which clearly indicates an absorption-controlled shielding mechanism. The combination of high absorption-dominated EMI shielding, low filler content, and mechanical flexibility highlights these electrospun CNT/PVDF and graphene/PVDF composites as promising candidates for next-generation flexible, wearable, and biomedical EMI shielding applications. Full article
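For readers wanting to reproduce the SEA/SET split reported above, the standard decomposition of total shielding effectiveness into reflection and absorption parts from measured S-parameter magnitudes can be sketched as follows (the sample values in the check are illustrative, not the paper's measurements):

```python
import math

def shielding_effectiveness(s11_mag, s21_mag):
    # Standard VNA-based decomposition of EMI shielding effectiveness:
    #   SE_T = -10 log10 |S21|^2      (total)
    #   SE_R = -10 log10 (1 - |S11|^2) (reflection)
    #   SE_A = SE_T - SE_R             (absorption)
    t = s21_mag ** 2              # transmitted power fraction
    r = s11_mag ** 2              # reflected power fraction
    se_t = -10 * math.log10(t)
    se_r = -10 * math.log10(1 - r)
    se_a = se_t - se_r
    return se_t, se_r, se_a
```

An absorption-dominated shield, like the 3 wt.% composites above, shows SE_A contributing the large majority of SE_T.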
16 pages, 2916 KB  
Article
Deep Learning-Based Relay Selection in a Decode-and-Forward Cooperative System with Energy Harvesting and Signal Space Diversity
by Ahmed Oun, Divyessh Maheshwari and Ahmed Ammar
Electronics 2026, 15(7), 1363; https://doi.org/10.3390/electronics15071363 - 25 Mar 2026
Abstract
Deep learning techniques have been widely applied in wireless communication systems to enhance resilience and reduce computational complexity. This paper investigates both traditional and deep learning-based approaches for real-time relay selection in a cooperative communication system with multiple energy-harvesting relays and signal space diversity. The assumed relay decoding scheme is decode-and-forward (DF), with selection criteria based on successful decoding from the source, sufficient energy availability, and the best channel to the destination. The system performance is evaluated in terms of outage probability. Monte Carlo simulations are used to determine the exact outage probability of the system and to generate datasets for training machine learning models. The traditional machine learning models implemented include Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Support Vector Machines (SVMs). The deep learning-based method used is the deep neural network (DNN). Two datasets—one with six features and another with nine features—were used for training and testing. The 6-feature datasets are comparatively less random and complex than the 9-feature datasets. The results indicate that among traditional models KNN achieves the highest accuracy and is thus used as a benchmark to compare against DNN performance. For the 9-feature datasets, both KNN and DNN struggle to accurately approximate the exact outage probability, suggesting that the 9-feature datasets are too complex and noisy for effective modeling. However, on the 6-feature datasets, KNN achieves 77% accuracy, while DNN achieves a significantly higher accuracy of 99%. Due to its high accuracy, the DNN model closely approximates the exact outage probability while offering greater computational efficiency compared to the KNN model. 
These results underscore the potential of deep learning in optimizing real-time relay selection for energy-harvesting cooperative communication systems. Full article
(This article belongs to the Special Issue Advances in Networked Systems and Communication Protocols)
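The Monte Carlo outage evaluation described above can be sketched with a deliberately simplified eligibility model: each relay independently decodes the source and has sufficient harvested energy with assumed probabilities, and the best eligible relay-destination SNR (exponential, i.e. Rayleigh fading) must exceed a threshold. The paper's channel and harvesting models are richer; all probabilities below are illustrative:

```python
import math, random

def outage_probability(n_relays, snr_th=1.0, p_decode=0.9, p_energy=0.8,
                       mean_snr=5.0, trials=20000, seed=7):
    # DF relay selection sketch: a relay is eligible if it decoded the source
    # AND harvested enough energy; outage occurs when no eligible relay's
    # relay-destination SNR reaches snr_th.
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        best = 0.0
        for _ in range(n_relays):
            if rng.random() < p_decode and rng.random() < p_energy:
                best = max(best, rng.expovariate(1.0 / mean_snr))
        if best < snr_th:
            outages += 1
    return outages / trials
```

For this toy model the exact outage is (1 - q)^n with q = p_decode * p_energy * exp(-snr_th / mean_snr), which is the kind of ground truth the paper's datasets are generated against.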
26 pages, 791 KB  
Article
A Kyber-Based Lightweight Cloud-Assisted Authentication Scheme for Medical IoT
by He Yan, Zhenyu Wang, Liuming Lin, Jing Sun and Shuanggen Liu
Sensors 2026, 26(7), 2021; https://doi.org/10.3390/s26072021 - 24 Mar 2026
Abstract
The Medical Internet of Things (MIoT) has promoted smart healthcare through the deep integration of wearable devices, wireless communication, and cloud services. However, this framework faces security risks, as attackers may exploit public channels to impersonate legitimate devices or services and steal sensitive data. Therefore, establishing authentication between wearable devices and servers prior to data transmission is crucial. Existing schemes suffer from two critical drawbacks: vulnerability to quantum attacks and excessively high communication overhead, highlighting the need for improved solutions. The authors of this paper present a multi-factor identity authentication protocol to achieve post-quantum security and privacy protection. The scheme integrates lattice-based Kyber key encapsulation and a fuzzy commitment mechanism to secure biological templates and enable post-quantum key agreement. Additionally, hash functions and lightweight error correction codes are employed to reduce terminal communication overhead. The security of the scheme is rigorously proved in the Real-or-Random model, and the analysis confirms that the scheme satisfies common security requirements for wireless networks. The proposed scheme is also compared with existing schemes, and the results demonstrate that it achieves a balance between security and overhead. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in Internet of Things (IoT))
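The fuzzy-commitment building block mentioned above follows the classic Juels–Wattenberg construction: hide an error-correction codeword of the key by XOR with the biometric template, and store a hash of the key for verification. A minimal sketch with a toy repetition code stands in for the paper's lightweight error-correction codes (key, template, and code length are illustrative):

```python
import hashlib

def rep_encode(bits, n=3):
    # n-fold repetition code; corrects floor(n/2) bit errors per data bit
    return [b for b in bits for _ in range(n)]

def rep_decode(bits, n=3):
    # Majority vote over each group of n bits
    return [1 if sum(bits[i * n:(i + 1) * n]) * 2 > n else 0
            for i in range(len(bits) // n)]

def commit(biometric, key_bits, n=3):
    # Helper data = codeword XOR template; hash(key) lets a verifier confirm
    # recovery without storing the key or the template in the clear.
    code = rep_encode(key_bits, n)
    helper = [c ^ b for c, b in zip(code, biometric)]
    tag = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, tag

def reveal(noisy_biometric, helper, n=3):
    # A noisy re-reading still yields the key while per-group errors stay
    # within the code's correction capability.
    noisy_code = [h ^ b for h, b in zip(helper, noisy_biometric)]
    return rep_decode(noisy_code, n)
```

In the full protocol the recovered key would then seed the Kyber-based key agreement; here the hash tag simply confirms exact recovery.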

23 pages, 2120 KB  
Review
The Impact of Generative AI on 6G Network Architecture and Service
by Yedil Nurakhov, Serik Aibagarov, Nurislam Kassymbek, Aksultan Mukhanbet, Bolatzhan Kumalakov and Timur Imankulov
Electronics 2026, 15(7), 1345; https://doi.org/10.3390/electronics15071345 - 24 Mar 2026
Abstract
The transition from 5G to 6G wireless systems marks a paradigm shift from “connected things” to “connected intelligence,” driven by the necessity to manage hyper-heterogeneous networks and overcome the Shannon capacity limit. This Systematic Literature Review (SLR) analyzes 118 primary studies to evaluate the transformative impact of Generative AI (GenAI) and Large Language Models (LLMs) on 6G architecture. We categorize the integration of GenAI into five semantic clusters: Architecture, Management, Security, Semantics, and Edge AI. The synthesis reveals that 6G is evolving toward an “AI-Native” ecosystem where LLMs show strong promise for augmenting network orchestration through Intent-Based Networking (IBN) and generative models demonstrate significant potential to augment or transcend traditional physical layer algorithms. Furthermore, the review identifies a fundamental transition from bit-oriented to semantic-oriented communication, utilizing GenAI to reconstruct meaning from minimal data. However, critical challenges remain, particularly the “energy–intelligence paradox” and the risks of model hallucinations in critical infrastructure. We conclude that while GenAI provides the necessary cognitive flexibility for 6G, its successful deployment depends on solving the “inference gap” through split learning and extreme model quantization at the edge. Full article

25 pages, 6266 KB  
Article
A Solution for Heritage Monitoring Based on Wireless Low-Cost Sensors and BIM: Application to the Monserrate Palace
by Rita Machete, Fábio M. Dias, Diogo M. Caetano, Ana Paula Falcão, Maria da Glória Gomes and Rita Bento
Sensors 2026, 26(7), 2015; https://doi.org/10.3390/s26072015 - 24 Mar 2026
Abstract
Conservation and management of built cultural heritage require multidisciplinary approaches and reliable information to support decision-making. In this context, digital transformation strategies that combine Building Information Modeling (BIM) with monitoring technologies offer significant potential to improve heritage management. This paper presents a monitoring solution based on a wireless network of low-cost Internet of Things (IoT) sensors integrated within a Heritage Building Information Model (HBIM), applied to Monserrate Palace in Sintra, Portugal. The proposed approach covers all implementation stages, including HBIM development from as-built data collection, deployment of a wireless monitoring network for acceleration and environmental parameters, and integration of monitoring data into a BIM-based platform. The system aims to create a Digital Shadow of the building as a step towards a Digital Twin framework, enabling centralized visualization and management of structural and environmental information through the HBIM model and dedicated dashboards. Given the lower accuracy of low-cost sensors, in situ calibration with reference equipment was conducted to validate the recorded data. Implementing monitoring systems in heritage contexts presents challenges, such as limited historical documentation and the need for minimally invasive interventions. Despite these constraints, the proposed solution demonstrates the advantages of integrating monitoring data within HBIM, enabling centralized data management and improved understanding of building performance and conservation needs. Full article

19 pages, 1015 KB  
Article
Smart Energy Management in Agricultural Wireless Sensor Nodes Using TinyML-Based Adaptive Sampling
by Adrian Hinostroza, Jimmy Tarrillo and Moises Nuñez
Sensors 2026, 26(7), 2014; https://doi.org/10.3390/s26072014 - 24 Mar 2026
Abstract
Smart sensors are increasingly used in agriculture to monitor environmental conditions and support data-driven decision-making. However, traditional sensor implementations face critical challenges related to power consumption, especially in remote farms—such as pitaya plantations—where access to electricity and ongoing maintenance is limited. This paper presents a smart energy management system for agricultural sensor nodes integrating a machine learning model for adaptive sampling and a batching strategy to optimize energy usage. A lightweight Stochastic Gradient Descent (SGD) regressor trained on temperature dynamics runs on-device to predict the sampling interval (Ts). In parallel, the node adjusts the number of buffered samples as the battery state of charge (SOC) decreases, reducing Long Range (LoRa) transmissions. Field experiments show that the proposed approach reduces energy consumption by 77.8% compared with fixed-interval sampling, while maintaining good temperature fidelity with Mean Absolute Error (MAE) of 0.537 °C for temperature reconstruction. Full article
(This article belongs to the Special Issue Sensing and Machine Learning in Autonomous Agriculture)
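A rough sketch of the two mechanisms described above: a minimal on-device SGD linear regressor mapping recent temperature slope to the sampling interval Ts, plus SOC-driven batching of LoRa uplinks. The slope-to-interval mapping and the SOC thresholds are assumptions for illustration, not the paper's trained model:

```python
class TinySGDRegressor:
    """Minimal on-device SGD linear regressor: maps the recent temperature
    slope (degC/min) to a sampling interval Ts (seconds)."""
    def __init__(self, lr=0.01):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def partial_fit(self, x, y):
        # One stochastic gradient step on squared error
        err = self.w * x + self.b - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

    def predict(self, x):
        return self.w * x + self.b

def batch_size(soc):
    # Buffer more samples per LoRa uplink as the battery state of charge drops
    if soc >= 80:
        return 1
    if soc >= 50:
        return 2
    if soc >= 20:
        return 4
    return 8

# Quick synthetic fit: a steeper temperature slope should mean a shorter
# interval; the target rule Ts = 600 - 100*|slope| is purely illustrative.
model = TinySGDRegressor()
for _ in range(3000):
    for slope in (0.0, 1.0, 2.0, 3.0):
        model.partial_fit(slope, 600.0 - 100.0 * slope)
```

The node would call `model.predict(slope)` after each reading and `batch_size(soc)` before each uplink; fewer radio transmissions is where most of the reported 77.8% energy saving comes from.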

38 pages, 4089 KB  
Article
A Mobility-Aware Zone-Based Key Management Scheme with Dynamic Key Refinement for Large-Scale Mobile Wireless Sensor Networks
by Abdelbassette Chenna, Djallel Eddine Boubiche, Abderrezak Benyahia, Homero Toral-Cruz, Rafael Martínez-Peláez and Pablo Velarde-Alvarado
Future Internet 2026, 18(3), 175; https://doi.org/10.3390/fi18030175 - 23 Mar 2026
Abstract
Mobile Wireless Sensor Networks (MWSNs) enhance traditional wireless sensor networks by allowing sensor nodes to move, resulting in continuously changing network topologies. Although this mobility enables advanced applications such as disaster response, intelligent transportation systems, and mission-critical monitoring, it poses major challenges for secure and scalable key management in large-scale deployments. Most existing key management and key pre-distribution schemes are tailored to static or lightly mobile networks and therefore suffer from limited scalability, excessive memory consumption, inefficient key utilization, and increased vulnerability to node capture when applied to highly mobile environments. This paper proposes a mobility-aware, zone-based key management scheme that integrates an enhanced composite key distribution mechanism with dynamic key refinement. The network is partitioned into logical zones, each maintaining an independent key pool to confine security breaches and improve scalability. To adapt to mobility-induced topology changes, sensor nodes continuously refine their key rings by preserving only the cryptographic keys associated with persistent neighbor relationships. This selective retention strategy significantly reduces storage overhead while strengthening resilience against key compromise and unauthorized access. Comprehensive analytical modeling and performance evaluations demonstrate that the proposed scheme achieves higher secure connectivity, stronger resistance to node capture attacks, and improved scalability compared to existing approaches, particularly in dense and highly mobile MWSN scenarios. Full article
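The selective key-retention step described above can be sketched in a few lines: a key survives refinement only if it backed a neighbor relationship in enough recent rounds to count as persistent. The observation window and threshold are illustrative assumptions:

```python
def refine_key_ring(key_ring, rounds_usable, min_rounds=2):
    # rounds_usable: one set per recent round, holding the keys that were
    # actually usable with some neighbor in that round. Keys that were only
    # transiently useful (fewer than min_rounds hits) are dropped, shrinking
    # storage and limiting exposure if the node is later captured.
    counts = {k: sum(k in r for r in rounds_usable) for k in key_ring}
    return {k for k, c in counts.items() if c >= min_rounds}
```

Pruned keys reduce both memory overhead and the value of a captured node's key ring to an attacker, which is the resilience argument the paper makes.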

19 pages, 2944 KB  
Article
LSTM-Based Early Jamming Threat Detection Scheme for Drone Ad-Hoc Networks
by Chungman Oh and Seokjoong Kang
Appl. Sci. 2026, 16(6), 3046; https://doi.org/10.3390/app16063046 - 21 Mar 2026
Abstract
Drone ad-hoc networks are inherently vulnerable to performance-degradation attacks such as jamming, packet disruption, and routing interference due to dynamic topology changes and unstable wireless channels. In such environments, conventional threshold-based detection schemes often fail to identify threats in their early stages because individual performance metrics remain within normal ranges despite emerging abnormal temporal patterns. To address this limitation, this study proposes an LSTM-based early threat detection method that learns the temporal dynamics of network performance indicators, including packet delivery ratio (PDR), connection reliability (CR), and delay. By modeling inter-metric correlations and evolving degradation trends, the proposed approach enables probabilistic inference of abnormal state transitions prior to explicit threshold violations. The proposed method is validated through simulation experiments conducted in a drone ad-hoc network environment under jamming attack scenarios, and its performance is compared with that of conventional threshold-based schemes. The results show that while the threshold-based approach first detected the attack at t = 65 s when predefined metric boundaries were exceeded, the proposed LSTM-based detector identified the attack at t = 45 s with an estimated attack probability of 0.63, achieving approximately 20 s earlier detection. This improvement is attributed to the LSTM’s capability to capture subtle temporal dependencies, directional trends, and cross-metric interactions that precede abrupt metric degradation. Furthermore, the LSTM output probabilities exhibited monotonic growth during the attack period and gradual decay during recovery, indicating robust tracking of network state transitions rather than isolated event detection. 
These results demonstrate that the proposed method not only enhances early threat awareness but also contributes to resilience-oriented operation by enabling proactive mitigation in drone ad-hoc networks. This study provides quantitative evidence that sequence learning over performance metrics can overcome the structural limitations of threshold-based detection and enable effective early threat detection in drone ad-hoc network environments. Full article
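The core claim above is that a sequence model reacting to temporal patterns fires before a hard threshold does. A stand-in sketch (an EWMA-slope detector rather than an actual LSTM, with assumed PDR dynamics and tuning constants) reproduces that qualitative behavior:

```python
def threshold_detect(series, floor=0.70):
    # Conventional scheme: fire at the first index where the raw metric
    # crosses a hard floor.
    for i, v in enumerate(series):
        if v < floor:
            return i
    return None

def trend_detect(series, alpha=0.3, slope_limit=-0.004, patience=3):
    # Sequence-style scheme: fire when the smoothed metric declines
    # persistently (EWMA slope below slope_limit for `patience` consecutive
    # steps), i.e. react to the degradation *pattern*, not the level.
    ewma = series[0]
    prev = ewma
    run = 0
    for i, v in enumerate(series[1:], 1):
        ewma = alpha * v + (1 - alpha) * ewma
        if ewma - prev < slope_limit:
            run += 1
            if run >= patience:
                return i
        else:
            run = 0
        prev = ewma
    return None
```

On a PDR trace that degrades slowly from a healthy level, the trend detector fires while the metric is still well inside its "normal" range, mirroring the paper's 20 s earlier detection (an LSTM additionally learns cross-metric correlations this sketch ignores).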

23 pages, 1004 KB  
Article
A Lightweight IDS Based on Blockchain and Machine Learning for Detecting Physical Attacks in Wireless Sensor Networks
by Maytham S. Jabor, Aqeel S. Azez, José Carlos Campelo and Alberto Bonastre
Sensors 2026, 26(6), 1961; https://doi.org/10.3390/s26061961 - 20 Mar 2026
Abstract
Wireless sensor networks (WSNs) are vulnerable to physical attacks in which adversaries gain partial or full control of sensor nodes, compromising the integrity of the network. Conventional security mechanisms impose excessive computational overhead and are not well suited to resource-constrained WSN devices. This paper proposes a lightweight, two-layer intrusion detection system (IDS) that integrates blockchain (BC) technology with machine learning for physical attack detection in WSNs. The first layer employs a lightweight BC protocol among cluster heads (CHs) and the base station (BS) to detect data integrity violations through hash-based consensus. The second layer applies an artificial neural network (ANN) at the base station to detect attacks that bypass blockchain verification, without imposing any processing load on sensor nodes. Simulation experiments on a 100-node WSN demonstrate that the combined system achieves 97.42% accuracy and 98.35% recall, outperforming five established classifiers and both standalone components. The system sustains detection rates above 99.98% under 30 simultaneous attackers and maintains reliable operation under packet loss conditions up to 10%. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)
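The first-layer hash-based integrity check among cluster heads can be sketched as a minimal hash chain verified at the base station; the block format and payloads below are illustrative, not the paper's protocol:

```python
import hashlib, json

def block_hash(index, prev_hash, payload):
    # Deterministic serialization so every node computes the same digest
    blob = json.dumps([index, prev_hash, payload], sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def append_block(chain, payload):
    # Cluster head appends a block whose hash covers the previous block's hash
    prev = chain[-1]["hash"] if chain else "0" * 64
    idx = len(chain)
    chain.append({"index": idx, "prev": prev, "payload": payload,
                  "hash": block_hash(idx, prev, payload)})

def verify_chain(chain):
    # Base-station check: recompute every hash and link. Any in-place
    # tampering with a payload breaks the chain from that block onward.
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["index"], b["prev"], b["payload"]):
            return False
        prev = b["hash"]
    return True
```

Attacks that survive this consensus check (e.g. a captured node reporting plausible but false readings) are what the second-layer ANN at the base station is for.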

12 pages, 227 KB  
Review
The Dual Challenges for Radio Frequency Fingerprinting Trustworthiness: Feature Drift Modeling and the Privacy Imperative for Deployable Physical Layer Security
by Miranda Harizaj, Ali Kara and Iraklis Symeonidis
Electronics 2026, 15(6), 1309; https://doi.org/10.3390/electronics15061309 - 20 Mar 2026
Abstract
Radio Frequency Fingerprinting (RFF) is a promising Physical Layer Security (PLS) solution for the Internet of Things (IoT), which requires robust, low-overhead security techniques. However, practical implementation of RFF poses challenges, in particular performance instability and ethical-regulatory conflicts. Building on the authors' previous research, this paper elaborates these challenges for the deployment of a resilient and compliant RFF system. First, we analytically show how hardware-induced feature drift, driven primarily by device aging and temperature variations, degrades RFF performance. We then critically survey existing temperature-variation and aging models, one of which is under study by the authors' research team, first from a hardware-design perspective and then from the perspective of RFF compensation methods. This survey reveals a significant gap: current techniques are insufficient to maintain the long-term, high-accuracy RFF that real-world IoT security requires. Finally, we examine the inherent privacy risks of RFF: because fingerprints enable device tracking, the technique conflicts with General Data Protection Regulation (GDPR) mandates, raising significant regulatory challenges. Overall, this work highlights the key technical and legal challenges that must be addressed for RFF to evolve into a robust, privacy-compliant, and deployable security primitive for IoT and future wireless systems. Full article
23 pages, 630 KB  
Article
Depth-First Search-Based Malicious Node Detection with Honeypot Technology in Wireless Sensor Networks
by Sercan Demirci, Doğan Yıldız, Durmuş Özkan Şahin and Asmaa Alaadin
Mathematics 2026, 14(6), 1050; https://doi.org/10.3390/math14061050 - 20 Mar 2026
Abstract
Wireless sensor networks (WSNs) are highly susceptible to Denial-of-Service (DoS) attacks due to their resource-constrained and distributed nature. In this study, we propose a novel trust-based malicious node detection mechanism that leverages a Depth-First Search (DFS) strategy to trace and identify attack sources within clustered WSN architectures efficiently. The proposed approach dynamically evaluates trust scores between nodes to detect anomalous behaviors and employs a honeypot-based redirection system to isolate compromised nodes from the main communication flow. This combination enhances detection accuracy while minimizing false positives and energy overhead. The method is implemented and evaluated using a custom simulation environment. Comparative experimental results against state-of-the-art techniques such as the Evolved Trust Updating Mechanism (EVO) and Multi-agent Trust-based Intrusion Detection System (MULTI) demonstrate that our Trust-Based Honeypot (TBHP) achieves superior performance in terms of detection rate, false-alarm rate, and network lifetime extension. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
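The DFS-based tracing idea can be sketched as a walk that follows only low-trust neighbors outward from the node that observed the anomaly; the dead ends of that low-trust subgraph are the suspected attack sources to redirect toward the honeypot. The trust scores and threshold below are illustrative:

```python
def trace_malicious(graph, trust, start, threshold=0.4):
    # Depth-first trace: from `start`, descend only into neighbors whose
    # trust score is below `threshold`. Low-trust nodes with no further
    # low-trust neighbors are reported as suspected sources.
    visited, suspects = set(), []

    def dfs(u):
        visited.add(u)
        nxt = [v for v in graph[u] if v not in visited and trust[v] < threshold]
        if not nxt and trust[u] < threshold:
            suspects.append(u)
        for v in nxt:
            dfs(v)

    dfs(start)
    return suspects
```

In the full scheme, nodes returned here would be isolated by redirecting their traffic to the honeypot rather than dropped outright, keeping false positives recoverable.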

33 pages, 8800 KB  
Article
Energy-Efficient Wireless Sensor Networks Through Coverage Hole Detection and Mitigation Using a Hybrid Raccoon–Hermit Crab Optimization Algorithm
by Sean Laurel Rex Bashyam and Renuga Devi Subramanian
Future Internet 2026, 18(3), 163; https://doi.org/10.3390/fi18030163 - 19 Mar 2026
Abstract
Wireless sensor networks encounter issues like irregular deployment, node failures, and uneven energy consumption that create coverage holes, leading to a reduction in network lifetime in critical or disaster-based applications. Most existing approaches focus on coverage enhancement during the initial deployment and perform mitigation only at the beginning of the network operation. However, the coverage holes may also occur later due to node failures and energy depletion. To address this issue, a Hybrid Raccoon–Hermit crab optimization algorithm that advocates both initial coverage enhancement and adaptive mitigation due to future coverage holes is proposed. The proposed algorithm uses the global exploration ability of the raccoon optimization algorithm to find optimal cluster heads and the exploitation ability of the Hermit crab optimization to determine the optimal position and to relocate the static nodes logically to mitigate coverage holes. The proposed algorithm is evaluated under different node densities (50, 100, 200, 500, and 1000), with the sink at (100,100). It results in an enhanced network lifetime of 65.20%, an improved coverage ratio (16.94%) from (77.05%) to (93.94%), increased throughput by delivering (3,139,293) bits, and a reduced delay of 2.27292 s for 1000 nodes compared with other existing methods. Full article
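The coverage ratio reported above can be computed by grid sampling: the fraction of sample points within sensing range of at least one node, with the uncovered remainder locating coverage holes. A minimal sketch (node positions, sensing radius, and grid step are illustrative assumptions):

```python
import math

def coverage_ratio(nodes, sensing_r, width, height, step=1.0):
    # Grid-scan coverage: fraction of sample points within sensing_r of at
    # least one node. 1 - ratio measures the coverage-hole area that a
    # mitigation step (e.g. node relocation) would target.
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any(math.hypot(x - nx, y - ny) <= sensing_r for nx, ny in nodes):
                covered += 1
            x += step
        y += step
    return covered / total
```

An optimizer like the hybrid algorithm above would re-evaluate this metric after each candidate relocation, trading a finer grid step for evaluation cost.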
