Search Results (1,974)

Search Parameters:
Keywords = network of wireless devices

36 pages, 2656 KB  
Article
Energy Footprint and Reliability of IoT Communication Protocols for Remote Sensor Networks
by Jerzy Krawiec, Martyna Wybraniak-Kujawa, Ilona Jacyna-Gołda, Piotr Kotylak, Aleksandra Panek, Robert Wojtachnik and Teresa Siedlecka-Wójcikowska
Sensors 2025, 25(19), 6042; https://doi.org/10.3390/s25196042 - 1 Oct 2025
Abstract
Excessive energy consumption of communication protocols in IoT/IIoT systems constitutes one of the key constraints for the operational longevity of remote sensor nodes, where radio transmission often incurs higher energy costs than data acquisition or local computation. Previous studies have remained fragmented, typically focusing on selected technologies or specific layers of the communication stack, which has hindered the development of comparable quantitative metrics across protocols. The aim of this study is to design and validate a unified evaluation framework enabling consistent assessment of both wired and wireless protocols in terms of energy efficiency, reliability, and maintenance costs. The proposed approach employs three complementary research methods: laboratory measurements on physical hardware, profiling of SBC devices, and simulations conducted in the COOJA/Powertrace environment. A Unified Comparative Method was developed, incorporating bilinear interpolation and weighted normalization, with its robustness confirmed by a Spearman rank correlation coefficient exceeding 0.9. The analysis demonstrates that MQTT-SN and CoAP (non-confirmable mode) exhibit the highest energy efficiency, whereas HTTP/3 and AMQP incur the greatest energy overhead. Results are consolidated in the ICoPEP matrix, which links protocol characteristics to four representative RS-IoT scenarios: unmanned aerial vehicles (UAVs), ocean buoys, meteorological stations, and urban sensor networks. The framework provides well-grounded engineering guidelines that may extend node lifetime by up to 35% through the adoption of lightweight protocol stacks and optimized sampling intervals. The principal contribution of this work is the development of a reproducible, technology-agnostic tool for comparative assessment of IoT/IIoT communication protocols. 
The proposed framework addresses a significant research gap in the literature and establishes a foundation for further research into the design of highly energy-efficient and reliable IoT/IIoT infrastructures, supporting scalable and long-term deployments in diverse application environments. Full article
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)
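The Unified Comparative Method described above combines weighted normalization with a Spearman rank-correlation robustness check. The sketch below illustrates that idea with entirely hypothetical metric values and weights (not the paper's data): protocols are scored by min-max-normalized, weighted metrics, and the ranking is checked for stability under a weight perturbation.

```python
# Illustrative sketch of weighted normalization + rank robustness.
# All metric values and weights below are hypothetical assumptions.
protocols = ["MQTT-SN", "CoAP-NON", "HTTP/3", "AMQP"]
# columns: energy per message (mJ), loss rate, relative maintenance cost
# (lower is better for all three)
metrics = [
    [1.2, 0.03, 0.2],
    [1.4, 0.05, 0.1],
    [6.8, 0.01, 0.6],
    [5.9, 0.02, 0.8],
]

def scores(rows, weights):
    # Min-max normalize each metric column, then combine with the weights.
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    out = []
    for r in rows:
        norm = [(v - l) / (h - l) for v, l, h in zip(r, lo, hi)]
        out.append(sum(w * n for w, n in zip(weights, norm)))
    return out

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    # Spearman rho for tie-free lists: 1 - 6*sum(d^2) / (n*(n^2-1))
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

base = scores(metrics, [0.5, 0.3, 0.2])
alt = scores(metrics, [0.4, 0.4, 0.2])
rho = spearman(base, alt)  # ranking should survive the weight perturbation
print(rho)
```

A rho above 0.9 under such perturbations is the kind of robustness evidence the abstract reports for the actual framework.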
31 pages, 12366 KB  
Article
Gateway-Free LoRa Mesh on ESP32: Design, Self-Healing Mechanisms, and Empirical Performance
by Danilo Arregui Almeida, Juan Chafla Altamirano, Milton Román Cañizares, Pablo Palacios Játiva, Javier Guaña-Moya and Iván Sánchez
Sensors 2025, 25(19), 6036; https://doi.org/10.3390/s25196036 - 1 Oct 2025
Abstract
LoRa is a long-range, low-power wireless communication technology widely used in Internet of Things (IoT) applications. However, its conventional implementation through Long Range Wide Area Network (LoRaWAN) presents operational constraints due to its centralized topology and reliance on gateways. To overcome these limitations, this work designs and validates a gateway-free mesh communication system that operates directly on commercially available commodity microcontrollers, implementing lightweight self-healing mechanisms suitable for resource-constrained devices. The system, based on ESP32 microcontrollers and LoRa modulation, adopts a mesh topology with custom mechanisms including neighbor-based routing, hop-by-hop acknowledgments (ACKs), and controlled retransmissions. Reliability is achieved through hop-by-hop acknowledgments, listen-before-talk (LBT) channel access, and duplicate suppression using alternate link triggering (ALT). A modular prototype was developed and tested under three scenarios: ideal conditions, intermediate node failure, and extended urban deployment. Results showed robust performance, achieving a Packet Delivery Ratio (PDR), the percentage of successfully delivered DATA packets over those sent, of up to 95% in controlled environments and 75% under urban conditions. In the failure scenario, an average Packet Recovery Ratio (PRR), the proportion of lost packets successfully recovered through retransmissions, of 88.33% was achieved, validating the system’s self-healing capabilities. Each scenario was executed in five independent runs, with values calculated for both traffic directions and averaged. These findings confirm that a compact and fault-tolerant LoRa mesh network, operating without gateways, can be effectively implemented on commodity ESP32-S3 + SX1262 hardware. Full article
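The hop-by-hop ACK and retransmission scheme lends itself to a quick back-of-envelope simulation of the two metrics the abstract defines. The toy sketch below is not the authors' implementation; the per-attempt loss probability, retry limit, and packet count are illustrative assumptions.

```python
import random

# Toy model of hop-by-hop ACK + bounded retransmission on one lossy link.
MAX_RETRIES = 3    # assumed retry budget per DATA packet
LOSS_P = 0.25      # assumed per-attempt loss probability

def send_with_acks(n_packets, rng):
    delivered = first_try_lost = recovered = 0
    for _ in range(n_packets):
        attempt = 0
        while attempt <= MAX_RETRIES:
            if rng.random() > LOSS_P:  # attempt got through, ACK returns
                delivered += 1
                if attempt > 0:
                    recovered += 1     # a retransmission saved this packet
                break
            if attempt == 0:
                first_try_lost += 1
            attempt += 1
    pdr = delivered / n_packets                              # Packet Delivery Ratio
    prr = recovered / first_try_lost if first_try_lost else 1.0  # Packet Recovery Ratio
    return pdr, prr

pdr, prr = send_with_acks(10_000, random.Random(42))
print(f"PDR={pdr:.2%}  PRR={prr:.2%}")
```

Even with a 25% per-attempt loss rate, three retries push PDR close to 1, which is the intuition behind the high PRR the paper measures in its failure scenario.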
22 pages, 1669 KB  
Article
Adaptive Multi-Objective Optimization for UAV-Assisted Wireless Powered IoT Networks
by Xu Zhu, Junyu He and Ming Zhao
Information 2025, 16(10), 849; https://doi.org/10.3390/info16100849 - 1 Oct 2025
Abstract
This paper studies joint data collection and wireless power transfer in a UAV-assisted IoT network. A rotary-wing UAV follows a fly–hover–communicate cycle. At each hover, it simultaneously receives uplink data in full-duplex mode while delivering radio-frequency energy to nearby devices. Using a realistic propulsion-power model and a nonlinear energy-harvesting model, we formulate trajectory and hover control as a multi-objective optimization problem that maximizes the aggregate data rate and total harvested energy while minimizing the UAV’s energy consumption over the mission. To enable flexible trade-offs among these objectives under time-varying conditions, we propose a dynamic, state-adaptive weighting mechanism that generates environment-conditioned weights online, which is integrated into an enhanced deep deterministic policy gradient (DDPG) framework. The resulting dynamic-weight MODDPG (DW-MODDPG) policy adaptively adjusts the UAV’s trajectory and hover strategy in response to real-time variations in data demand and energy status. Simulation results demonstrate that DW-MODDPG achieves superior overall performance and a more favorable balance among the three objectives. Compared with the fixed-weight baseline, our algorithm increases total harvested energy by up to 13.8% and the sum data rate by up to 5.4% while maintaining comparable or even lower UAV energy consumption. Full article
(This article belongs to the Section Internet of Things (IoT))
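The core idea of the dynamic weighting mechanism is that the weights over the three objectives (data rate, harvested energy, UAV energy consumption) are generated online from the current state rather than fixed. The sketch below is a hand-written heuristic stand-in for the paper's learned, environment-conditioned mapping; the state features and scaling are assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax so the weights sum to one.
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def objective_weights(backlog_frac, battery_frac, uav_energy_frac):
    # Heuristic: more queued data -> favor rate; low device batteries ->
    # favor charging; low UAV reserve -> favor saving propulsion energy.
    logits = [
        2.0 * backlog_frac,          # weight logit for sum data rate
        2.0 * (1 - battery_frac),    # weight logit for harvested energy
        2.0 * (1 - uav_energy_frac), # weight logit for UAV energy saving
    ]
    return softmax(logits)

# High data backlog, healthy batteries, full UAV reserve -> rate-heavy weights.
w = objective_weights(backlog_frac=0.9, battery_frac=0.8, uav_energy_frac=0.9)
print(w)
```

In the actual DW-MODDPG framework this mapping is produced by the learned policy and the weighted sum scalarizes the multi-objective reward at each step.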
19 pages, 2098 KB  
Article
Radio Frequency Fingerprint-Identification Learning Method Based on LMMSE Channel Estimation for Internet of Vehicles
by Lina Sheng, Yao Xu, Yan Li, Yang Yang and Nan Fu
Mathematics 2025, 13(19), 3124; https://doi.org/10.3390/math13193124 - 30 Sep 2025
Abstract
As a typical representative of complex networks, the Internet of Vehicles (IoV) is more vulnerable to malicious attacks due to the mobility and complex environment of devices, which requires a secure and efficient authentication mechanism. Radio frequency fingerprinting (RFF) presents a novel research perspective for identity authentication within the IoV. However, as device fingerprint features are directly extracted from wireless signals, their stability is significantly affected by variations in the communication channel. Furthermore, the interplay between wireless channels and receiver noise can result in the distortion of the received signal, complicating the direct separation of the genuine features of the transmitted signals. To address these issues, this paper proposes a method for RFF extraction based on the physical sidelink broadcast channel (PSBCH). First, necessary preprocessing is performed on the signal. Subsequently, the wireless channel, which lacks genuine features, is estimated using linear minimum mean square error (LMMSE) techniques. Meanwhile, the previous statistical models of the channel and noise are incorporated into the analysis process to accurately capture the channel distortion caused by multipath effects and noise. Ultimately, the impact of the channel is mitigated through a channel-equalization operation to extract fingerprint features, and identification is carried out using a structurally optimized ShuffleNet V2 network. Based on a lightweight design, this network integrates an attention mechanism that enables the model to adaptively concentrate on the most distinguishable weak features in low signal-to-noise ratio (SNR) conditions, thereby enhancing the robustness of feature extraction. The experimental results show that in fixed and mobile scenarios with low SNR, the classification accuracy of the proposed method reaches 96.76% and 91.05%, respectively. Full article
(This article belongs to the Special Issue Machine Learning in Computational Complex Systems)
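The LMMSE step the abstract describes shrinks a noisy least-squares channel estimate toward the channel's prior statistics. A simplified scalar (per-subcarrier) version of that estimator is sketched below; the tap values, channel variance, and noise variance are illustrative assumptions, and the paper's full estimator additionally uses the channel correlation structure.

```python
# Scalar Wiener/LMMSE shrinkage of a least-squares channel estimate.
# Model: h_ls = h + n, with h ~ N(0, channel_var) and n ~ N(0, noise_var).
def lmmse_estimate(h_ls, channel_var, noise_var):
    # E[h | h_ls] scales each observation by channel_var/(channel_var+noise_var),
    # pulling low-SNR estimates toward the prior mean of zero.
    g = channel_var / (channel_var + noise_var)
    return [g * h for h in h_ls]

h_ls = [1.2, -0.4, 0.9, 0.1]  # noisy per-tap LS estimates (illustrative)
h_hat = lmmse_estimate(h_ls, channel_var=1.0, noise_var=0.25)
print(h_hat)  # each tap shrunk by a factor 1.0/(1.0+0.25) = 0.8
```

Equalizing with the LMMSE estimate rather than the raw LS estimate is what lets the channel's distortion be removed without also removing the transmitter's fingerprint features.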
17 pages, 4563 KB  
Article
Improving Solar Energy-Harvesting Wireless Sensor Network (SEH-WSN) with Hybrid Li-Fi/Wi-Fi, Integrating Markov Model, Sleep Scheduling, and Smart Switching Algorithms
by Heba Allah Helmy, Ali M. El-Rifaie, Ahmed A. F. Youssef, Ayman Haggag, Hisham Hamad and Mostafa Eltokhy
Technologies 2025, 13(10), 437; https://doi.org/10.3390/technologies13100437 - 29 Sep 2025
Abstract
Wireless sensor networks (WSNs) are an advanced solution for data collection in Internet of Things (IoT) applications and remote and harsh environments. These networks rely on a collection of distributed sensors equipped with wireless communication capabilities to collect low-cost and small-scale data. WSNs face numerous challenges, including network congestion, slow speeds, high energy consumption, and a short network lifetime due to their need for a constant and stable power supply. Therefore, improving the energy efficiency of sensor nodes through solar energy harvesting (SEH) would be the best option for charging batteries to avoid excessive energy consumption and battery replacement. In this context, modern wireless communication technologies, such as Wi-Fi and Li-Fi, emerge as promising solutions. Wi-Fi provides internet connectivity via radio frequencies (RF), making it suitable for use in open environments. Li-Fi, on the other hand, relies on data transmission via light, offering higher speeds and better energy efficiency, making it ideal for indoor applications requiring fast and reliable data transmission. This paper aims to integrate Wi-Fi and Li-Fi technologies into the SEH-WSN architecture to improve performance and efficiency when used in all applications. To achieve reliable, efficient, and high-speed bidirectional communication for multiple devices, the paper utilizes a Markov model, sleep scheduling, and smart switching algorithms to reduce power consumption, increase signal-to-noise ratio (SNR) and throughput, and reduce bit error rate (BER) and latency by controlling the technology and power supply used appropriately for the mode, sleep, and active states of nodes. Full article
(This article belongs to the Section Information and Communication Technologies)
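Two of the building blocks named in the abstract, the Markov model for sleep/active duty cycling and the smart Li-Fi/Wi-Fi switch, can be sketched compactly. The transition probabilities and SNR threshold below are illustrative assumptions, not the paper's parameters.

```python
# Two-state sleep/active Markov chain plus an SNR-threshold link selector.
P = {"sleep": {"sleep": 0.8, "active": 0.2},
     "active": {"sleep": 0.4, "active": 0.6}}

def steady_state(p):
    # For a 2-state chain the stationary distribution has a closed form:
    # pi_sleep = P(active->sleep) / (P(sleep->active) + P(active->sleep)).
    a = p["sleep"]["active"]
    b = p["active"]["sleep"]
    return {"sleep": b / (a + b), "active": a / (a + b)}

def pick_link(lifi_snr_db, wifi_snr_db, lifi_min_db=15.0):
    # Prefer the faster, more efficient Li-Fi link whenever its optical
    # channel is good enough; otherwise fall back to Wi-Fi.
    return "lifi" if lifi_snr_db >= lifi_min_db else "wifi"

pi = steady_state(P)
print(pi)                      # long-run fraction of time asleep vs. active
print(pick_link(20.0, 12.0))   # good optical link -> Li-Fi
```

The stationary sleep fraction is what determines the average node power draw, which is why tuning these transition probabilities directly trades latency against lifetime.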
20 pages, 2506 KB  
Article
Design of an RRAM-Based Joint Model for Embedded Cellular Smartphone Self-Charging Device
by Abhinav Vishwakarma, Anubhav Vishwakarma, Matej Komelj, Santosh Kumar Vishvakarma and Michael Hübner
Micromachines 2025, 16(10), 1101; https://doi.org/10.3390/mi16101101 - 28 Sep 2025
Abstract
With the development of embedded electronic devices, energy consumption has become a significant design issue in modern systems-on-a-chip. Conventional SRAMs cannot maintain data after power is turned off, limiting their use in applications such as battery-powered smartphone devices that require non-volatility and no leakage current. RRAM devices have recently been used extensively in applications such as self-charging wireless sensor networks and storage elements, owing to their intrinsic non-volatility and multi-bit capabilities, making them a potential candidate for mitigating the von Neumann bottleneck. We propose a new RRAM-based hybrid memristor model incorporated with a permanent magnet. The proposed design (1T2R) was simulated in Cadence Virtuoso with a 1.5 V power supply, and the finite-element approach was adopted to simulate magnetization. This model can retain the data after the power is off and provides fast power on/off transitions. It is possible to charge a smartphone battery without an external power source by utilizing a portable charger that uses magnetic induction to convert mechanical energy into electrical energy. In an embedded smartphone self-charging device, this addresses eco-friendly concerns and lowers environmental effects. It would lead to the development of magnetic field-assisted embedded portable electronic devices and open the door to new types of energy harvesting for RRAM devices. Our proposed design and simulation results reveal that, under usual conditions, the magnet-based device provides a high voltage to charge a smartphone battery. Full article
(This article belongs to the Special Issue Self-Tuning and Self-Powered Energy Harvesting Devices)
22 pages, 4882 KB  
Article
82.5 GHz Photonic W-Band IM/DD PS-PAM4 Wireless Transmission over 300 m Based on Balanced and Lightweight DNN Equalizer Cascaded with Clustering Algorithm
by Jingtao Ge, Jie Zhang, Sicong Xu, Qihang Wang, Jingwen Lin, Sheng Hu, Xin Lu, Zhihang Ou, Siqi Wang, Tong Wang, Yichen Li, Yuan Ma, Jiali Chen, Tensheng Zhang and Wen Zhou
Sensors 2025, 25(19), 5986; https://doi.org/10.3390/s25195986 - 27 Sep 2025
Abstract
With the rise of 6G, the exponential growth of data traffic, the proliferation of emerging applications, and the ubiquity of smart devices, the demand for spectral resources is unprecedented. Terahertz communication (100 GHz–3 THz) plays a key role in alleviating spectrum scarcity through ultra-broadband transmission. In this study, terahertz optical carrier-based systems are employed, where fiber-optic components are used to generate the optical signals, and the signal is transmitted via direct detection in the receiver side, without relying on fiber-optic transmission. In these systems, deep learning-based equalization effectively compensates for nonlinear distortions, while probability shaping (PS) enhances system capacity under modulation constraints. However, the probability distribution of signals processed by PS varies with amplitude, making it challenging to extract useful information from the minority class, which in turn limits the effectiveness of nonlinear equalization. Furthermore, in IM-DD systems, optical multipath interference (MPI) noise introduces signal-dependent amplitude jitter after direct detection, degrading system performance. To address these challenges, we propose a lightweight neural network equalizer assisted by the Synthetic Minority Oversampling Technique (SMOTE) and a clustering method. Applying SMOTE prior to the equalizer mitigates training difficulties arising from class imbalance, while the low-complexity clustering algorithm after the equalizer identifies edge jitter levels for decision-making. This joint approach compensates for both nonlinear distortion and jitter-related decision errors. Based on this algorithm, we conducted a 3.75 Gbaud W-band PAM4 wireless transmission experiment over 300 m at Fudan University’s Handan campus, achieving a bit error rate of 1.32 × 10⁻³, which corresponds to a 70.7% improvement over conventional schemes. 
Compared to traditional equalizers, the proposed new equalizer reduces algorithm complexity by 70.6% and training sequence length by 33%, while achieving the same performance. These advantages highlight its significant potential for future optical carrier-based wireless communication systems. Full article
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)
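The post-equalizer clustering step can be illustrated with a minimal 1D k-means that locates the four PAM4 amplitude levels and derives decision thresholds as midpoints between adjacent levels. This is a generic stand-in for the paper's low-complexity clustering algorithm; the synthetic signal levels and noise below are assumptions, not the experimental data.

```python
import random

def kmeans_1d(samples, centers, iters=20):
    # Plain Lloyd iterations on scalar samples: assign each sample to the
    # nearest center, then move each center to its cluster mean.
    for _ in range(iters):
        buckets = [[] for _ in centers]
        for s in samples:
            i = min(range(len(centers)), key=lambda j: abs(s - centers[j]))
            buckets[i].append(s)
        centers = [sum(b) / len(b) if b else c for b, c in zip(buckets, centers)]
    return sorted(centers)

# Synthetic PAM4 samples: four nominal levels plus Gaussian amplitude jitter.
rng = random.Random(0)
levels = [-3, -1, 1, 3]
samples = [rng.choice(levels) + rng.gauss(0, 0.3) for _ in range(4000)]

centers = kmeans_1d(samples, centers=[-2.0, -0.5, 0.5, 2.0])
thresholds = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
print(centers, thresholds)
```

Deriving thresholds from the clustered levels, rather than fixing them a priori, is what lets the decision stage track signal-dependent amplitude jitter.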
34 pages, 11521 KB  
Article
Explainable AI-Driven 1D-CNN with Efficient Wireless Communication System Integration for Multimodal Diabetes Prediction
by Radwa Ahmed Osman
AI 2025, 6(10), 243; https://doi.org/10.3390/ai6100243 - 25 Sep 2025
Abstract
The early detection of diabetes risk and effective management of patient data are critical for avoiding serious consequences and improving treatment success. This research describes a two-part architecture that combines an energy-efficient wireless communication technology with an interpretable deep learning model for diabetes categorization. In Phase 1, a unique wireless communication model is created to assure the accurate transfer of real-time patient data from wearable devices to medical centers. Using Lagrange optimization, the model identifies the best transmission distance and power needs, lowering energy usage while preserving communication dependability. This contribution is especially essential since effective data transport is a necessary condition for continuous monitoring in large-scale healthcare systems. In Phase 2, the transmitted multimodal clinical, genetic, and lifestyle data are evaluated using a one-dimensional Convolutional Neural Network (1D-CNN) with Bayesian hyperparameter tuning. The model beat traditional deep learning architectures like LSTM and GRU. To improve interpretability and clinical acceptance, SHAP and LIME were used to find global and patient-specific predictors. This approach tackles technological and medicinal difficulties by integrating energy-efficient wireless communication with interpretable predictive modeling. The system ensures dependable data transfer, strong predictive performance, and transparent decision support, boosting trust in AI-assisted healthcare and enabling individualized diabetes control. Full article
14 pages, 1486 KB  
Article
Optically Controlled Bias-Free Frequency Reconfigurable Antenna
by Karam Mudhafar Younus, Khalil Sayidmarie, Kamel Sultan and Amin Abbosh
Sensors 2025, 25(19), 5951; https://doi.org/10.3390/s25195951 - 24 Sep 2025
Abstract
A bias-free antenna tuning technique that eliminates conventional DC biasing networks is presented. The tuning mechanism is based on a Light-Dependent Resistor (LDR) embedded within the antenna structure. Optical illumination is used to modulate the LDR’s resistance, thereby altering the antenna’s effective electrical length and enabling tuning of its resonant frequency and operating bands. By removing the need for bias lines, RF chokes, blocking capacitors, and control circuitry, the proposed approach minimizes parasitic effects, losses, biasing energy, and routing complexity. This makes it particularly suitable for compact and energy-constrained platforms, such as Internet of Things (IoT) devices. As proof of concept, an LDR is integrated into a ring monopole antenna, achieving tri-band operation in both high and low resistance states. In the high-resistance (OFF) state, the fabricated prototype operates across 2.1–3.1 GHz, 3.5–4 GHz, and 5–7 GHz. In the low-resistance (ON) state, the LDR bridges the two arcs of the monopole, extending the current path and shifting the lowest band to 1.36–2.35 GHz, with only minor changes to the mid and upper bands. The antenna maintains linear polarization across all bands and switching states, with measured gains reaching up to 5.3 dBi. Owing to its compact, bias-free, and low-cost architecture, the proposed design is well-suited for integration into portable wireless devices, low-power IoT nodes, and rapidly deployable communications systems where electrical biasing is impractical. Full article
(This article belongs to the Special Issue Microwave Components in Sensing Design and Signal Processing)
23 pages, 2022 KB  
Article
Implementation and Performance Evaluation of Quantum-Inspired Clustering Scheme for Energy-Efficient WSNs
by Chindiyababy Uthayakumar, Ramkumar Jayaraman, Hadi A. Raja and Kamran Daniel
Sensors 2025, 25(18), 5872; https://doi.org/10.3390/s25185872 - 19 Sep 2025
Abstract
Advancements in communication technologies and the proliferation of smart devices have significantly increased the demand for wireless sensor networks (WSNs). These networks play an important role in the IoT environment. The wireless sensor network has many sensor nodes that are used to monitor the surrounding environment. Energy consumption is the main issue in WSN due to the difficulty in recharging or replacing batteries in the sensor nodes. Cluster head selection is one of the most effective approaches to reduce overall network energy consumption. In recent years, quantum technology has become a growing research area. Various quantum-based algorithms have been developed by researchers for clustering. This article introduces a novel, energy-efficient clustering scheme called the quantum-inspired clustering scheme (QICS), which is based on the Quantum Grover algorithm. It is mainly used to improve the performance of cluster head selection in a wireless sensor network. The research conducted simulations that compared the proposed cluster selection method against established algorithms, LEACH, GSACP, and EDS-KHO. The simulation environment used 100 nodes connected via specific energy and communication settings. QICS stands out as the superior clustering method since it extends the lifetime of the network by 30.5%, decreases energy usage by 22.4%, and increases the packet delivery ratios by 19.8%. The quantum method achieved an increase in speed with its clustering procedure. This study proves how quantum-inspired techniques have become an emerging approach to handling WSN energy restrictions, thus indicating future potential for IoT systems with energy awareness and scalability. Full article
(This article belongs to the Section Sensor Networks)
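The speedup claimed for the Grover-inspired step rests on a standard counting argument: finding one marked candidate among N requires roughly (π/4)·√N amplitude-amplification rounds versus about N/2 classical probes. The sketch below computes that iteration count and the resulting success probability; the candidate-pool size is illustrative, and this is the textbook Grover analysis rather than the QICS algorithm itself.

```python
import math

def grover_iterations(n_items, n_marked=1):
    # Optimal number of Grover rounds: floor((pi/4) * sqrt(N/M)).
    return math.floor((math.pi / 4) * math.sqrt(n_items / n_marked))

def success_probability(n_items, iters, n_marked=1):
    # After k rounds the marked-state amplitude is sin((2k+1) * theta),
    # where sin(theta) = sqrt(M/N).
    theta = math.asin(math.sqrt(n_marked / n_items))
    return math.sin((2 * iters + 1) * theta) ** 2

N = 100  # assumed number of cluster-head candidates in the simulated WSN
k = grover_iterations(N)
print(k, success_probability(N, k))
```

For 100 candidates this gives 7 rounds with near-certain success, compared with an expected 50 classical probes, which is the quadratic advantage the abstract alludes to.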
18 pages, 1018 KB  
Article
An Event-Based Time-Incremented SNN Architecture Supporting Energy-Efficient Device Classification
by David L. Weathers, Michael A. Temple and Brett J. Borghetti
Electronics 2025, 14(18), 3712; https://doi.org/10.3390/electronics14183712 - 19 Sep 2025
Abstract
Recent advances in Radio Frequency (RF)-based device classification have shown promise in enabling secure and efficient wireless communications. However, the energy efficiency and low-latency processing capabilities of neuromorphic computing have yet to be fully leveraged in this domain. This paper is a first step toward enabling an end-to-end neuromorphic system for RF device classification, specifically supporting development of a neuromorphic classifier that enforces temporal causality without requiring non-neuromorphic classifier pre-training. This Spiking Neural Network (SNN) classifier streamlines the development of an end-to-end neuromorphic device classification system, further expanding the energy efficiency gains of neuromorphic processing to the realm of RF fingerprinting. Using experimentally collected WirelessHART transmissions, the TI-SNN achieves classification accuracy above 90% while reducing fingerprint density by nearly seven-fold and spike activity by over an order of magnitude compared to a baseline Rate-Encoded SNN (RE-SNN). These reductions translate to significant potential energy savings while maintaining competitive accuracy relative to Random Forest and CNN baselines. The results position the TI-SNN as a step toward a fully neuromorphic “RF Event Radio” capable of low-latency, energy-efficient device discrimination at the edge. Full article
(This article belongs to the Special Issue Advances in 5G and Beyond Mobile Communication)
21 pages, 912 KB  
Article
UAV-Enabled Maritime IoT D2D Task Offloading: A Potential Game-Accelerated Framework
by Baiyi Li, Jian Zhao and Tingting Yang
Sensors 2025, 25(18), 5820; https://doi.org/10.3390/s25185820 - 18 Sep 2025
Abstract
Maritime Internet of Things (IoT) with unmanned surface vessels (USVs) faces tight onboard computing and sparse wireless links. Compute-intensive vision and sensing workloads often exceed latency budgets, which undermines timely decisions. In this paper, we propose a novel distributed computation offloading framework for maritime IoT scenarios. By leveraging the limited computational resources of USVs within a device-to-device (D2D)-assisted edge network and the mobility advantages of UAV-assisted edge computing, we design a breadth-first search (BFS)-based distributed computation offloading game. Building upon this, we formulate a global latency minimization problem that jointly optimizes UAV hovering coordinates and arrival times. This problem is solved by decomposing it into subproblems addressed via a joint Alternating Direction Method of Multipliers (ADMM) and Successive Convex Approximation (SCA) approach, effectively optimizing the UAV arrival times and hovering coordinates. Extensive simulations verify the effectiveness of our framework, demonstrating up to a 49.6% latency reduction compared with traditional offloading schemes. Full article
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)
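The BFS component of the offloading game can be illustrated in a few lines: each task source searches the D2D connectivity graph breadth-first for the nearest (fewest-hop) node with spare compute. The topology and capacity figures below are hypothetical, and this sketches only the neighbor search, not the full game or the ADMM/SCA trajectory optimization.

```python
from collections import deque

def nearest_helper(graph, src, has_capacity):
    # Breadth-first search from src; the first node found with spare
    # capacity is the fewest-hop offloading target.
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, hops = q.popleft()
        if node != src and has_capacity(node):
            return node, hops
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                q.append((nb, hops + 1))
    return None, -1  # no reachable helper; fall back to the UAV edge server

# Hypothetical D2D links among five USVs and their spare compute slots.
graph = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"],
         "D": ["B", "E"], "E": ["D"]}
spare = {"A": 0, "B": 0, "C": 0, "D": 2, "E": 1}

helper, hops = nearest_helper(graph, "A", lambda n: spare[n] > 0)
print(helper, hops)
```

Because BFS explores hop counts in increasing order, the first capable node it returns is guaranteed to be a minimum-hop helper, which keeps the D2D relay latency low before the UAV is involved.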
23 pages, 10504 KB  
Article
Indoor Localization with Extended Trajectory Map Construction and Attention Mechanisms in 5G
by Kexin Yang, Chao Yu, Saibin Yao, Zhenwei Jiang and Kun Zhao
Sensors 2025, 25(18), 5784; https://doi.org/10.3390/s25185784 - 17 Sep 2025
Abstract
Integrated sensing and communication (ISAC) is considered a key enabler for the future Internet of Things (IoT), as it enables wireless networks to simultaneously support high-capacity data transmission and precise environmental sensing. Indoor localization, as a representative sensing service in ISAC, has attracted considerable research attention. Nevertheless, its performance is largely constrained by the quality and granularity of the collected data. In this work, we propose an attention-based framework for cost-efficient indoor fingerprint localization that exploits extended trajectory map construction through a novel trajectory-based data augmentation (TDA) method. In particular, fingerprints at unmeasured locations are synthesized using a conditional Wasserstein generative adversarial network (CWGAN). A path generation algorithm is employed to produce diverse trajectories and construct the extended trajectory map. Based on this map, a multi-head attention model with direction-constrained auxiliary loss is then applied for accurate mobile device localization. Experiments in a real 5G indoor environment demonstrate the system’s effectiveness, achieving an average localization error of 1.09 m and at least 34% higher accuracy than existing approaches. Full article
24 pages, 6726 KB  
Article
Wearable K Band Sensors for Telemonitoring and Telehealth and Telemedicine Systems
by Albert Sabban
Sensors 2025, 25(18), 5707; https://doi.org/10.3390/s25185707 - 12 Sep 2025
Abstract
Novel K band wearable sensors and antennas for Telemonitoring, Telehealth, and Telemedicine systems, Internet of Things (IoT) systems, and communication sensors are discussed in this paper. Only a limited number of papers present K band sensors. One of the major goals in the development of Telehealth, Telemedicine, and wireless communication devices is the design of efficient, compact, low-cost antennas and sensors. The development of wideband efficient antennas is crucial to the evolution of wideband and multiband efficient Telemonitoring, Telehealth, and Telemedicine wearable devices. The advantage of a printed wearable antenna is that the feed and matching network can be etched on the same substrate as the printed radiating element. K band slot antennas and arrays are presented in this paper; the sensors are compact, lightweight, efficient, and wideband. The antennas' design parameters, and a comparison between computed and measured electrical performance, are also presented. Efficient fractal antennas and sensors were evaluated to maximize the electrical characteristics of the communication and medical devices. This paper presents wideband printed antennas operating from 16 GHz to 26 GHz for Telemonitoring, Telehealth, and Telemedicine systems. The bandwidth of the K band fractal slot antennas and arrays ranges from 10% to 40%. The electrical characteristics of the new compact antennas in the vicinity of the patient's body were measured and simulated using electromagnetic simulation techniques. The gain of the new K band fractal antennas and slot arrays presented in this paper ranges from 3 dBi to 7.5 dBi with 90% efficiency. Full article
<imports></imports>
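The quoted 10–40% bandwidths are fractional bandwidths, i.e., the absolute bandwidth normalized by the band's center frequency. A small sketch makes the arithmetic concrete; the 21 GHz center frequency used below is simply the midpoint of the 16–26 GHz span discussed in the abstract, not a value taken from the paper.

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Fractional bandwidth: absolute bandwidth over center frequency."""
    f_center = (f_low_ghz + f_high_ghz) / 2
    return (f_high_ghz - f_low_ghz) / f_center

# The full 16-26 GHz span corresponds to a fractional bandwidth of:
print(round(100 * fractional_bandwidth(16, 26), 1))  # 47.6 (percent)

# Conversely, a 10% antenna centered at 21 GHz covers roughly:
bw = 0.10 * 21
print(21 - bw / 2, 21 + bw / 2)  # 19.95 22.05 (GHz)
```

So an individual antenna at the low end of the reported 10–40% range covers about a 2 GHz slice of the band, while a 40% design approaches the full span.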

16 pages, 319 KB  
Article
Lightweight Federated Learning Approach for Resource-Constrained Internet of Things
by M. Baqer
Sensors 2025, 25(18), 5633; https://doi.org/10.3390/s25185633 - 10 Sep 2025
Abstract
Federated learning is increasingly recognized as a viable solution for deploying distributed intelligence across resource-constrained platforms, including smartphones, wireless sensor networks, and smart home devices within the broader Internet of Things ecosystem. However, traditional federated learning approaches face serious challenges in resource-constrained settings due to high processing demands, substantial memory requirements, and high communication overhead, rendering them impractical for battery-powered IoT environments. These factors increase battery consumption and, consequently, decrease the operational longevity of the network. This study proposes a streamlined, single-shot federated learning approach that minimizes communication overhead, enhances energy efficiency, and thereby extends network lifetime. The proposed approach leverages the k-nearest neighbors (k-NN) algorithm for edge-level pattern recognition and uses majority voting at the server/base station to reach a global pattern recognition consensus, eliminating the data transmissions across multiple communication rounds that conventional federated learning needs to reach its classification accuracy. The results indicate that the proposed approach maintains competitive classification accuracy while significantly reducing the required number of communication rounds. Full article
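The single-shot scheme described above is easy to sketch: each node classifies a query locally with k-NN and transmits only its one predicted label, and the base station takes a majority vote. The toy data and function names below are illustrative assumptions, not the paper's datasets or API.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify one query with k-NN on a node's local labelled data.

    train: list of (feature_vector, label) pairs held by one IoT node.
    """
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def federated_single_shot(nodes, query):
    """Each node sends a single local prediction (one label, one round);
    the base station takes a majority vote for the global consensus."""
    local_votes = [knn_predict(node, query) for node in nodes]
    return Counter(local_votes).most_common(1)[0][0]

# Hypothetical toy data: three nodes, two classes in a 2-D feature space.
node_a = [((0.0, 0.0), "low"), ((0.2, 0.1), "low"), ((1.0, 1.0), "high")]
node_b = [((0.1, 0.2), "low"), ((0.9, 1.1), "high"), ((1.2, 0.8), "high")]
node_c = [((0.0, 0.3), "low"), ((0.3, 0.0), "low"), ((1.1, 1.0), "high")]

print(federated_single_shot([node_a, node_b, node_c], (0.1, 0.1)))  # low
```

The communication cost per query is one label per node, rather than the per-round model updates of gradient-based federated learning, which is what drives the energy savings claimed in the abstract.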
(This article belongs to the Section Internet of Things)
