Search Results (1,052)

Search Parameters:
Keywords = offloading

20 pages, 1244 KB  
Article
Learning-Based Cost-Minimization Task Offloading and Resource Allocation for Multi-Tier Vehicular Computing
by Shijun Weng, Yigang Xing, Yaoshan Zhang, Mengyao Li, Donghan Li and Haoting He
Mathematics 2026, 14(2), 291; https://doi.org/10.3390/math14020291 - 13 Jan 2026
Abstract
With the rapid development of 5G technology and the Internet of Vehicles (IoV), vehicles have become smart devices with communication, computing, and storage capabilities. However, limited on-board storage and computing resources often cause high task-processing latency and degrade both system QoS and user QoE. Meanwhile, in building environmentally harmonious transportation systems and green cities, the energy consumption of in-vehicle data processing has become a new concern. Moreover, due to the high mobility of vehicles in the IoV, traditional GSI-based methods face information uncertainty and are no longer applicable. To address these challenges, we propose a T2VC model. To handle the information uncertainty and dynamic offloading caused by vehicle mobility, we propose a multi-armed bandit (MAB)-based QEVA-UCB solution that minimizes the system cost, expressed as the weighted sum of latency and power consumption. QEVA-UCB accounts for factors such as task properties, the task arrival queue, the offloading decision, and vehicle mobility, and selects the optimal offloading location with latency-energy awareness and conflict awareness. Extensive simulations verify that, compared with benchmark methods, our approach learns and makes task offloading decisions faster and more accurately for both latency-sensitive and energy-sensitive vehicle users, and achieves superior performance in terms of system cost and learning regret.
(This article belongs to the Special Issue Computational Methods in Wireless Communications with Applications)
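
The UCB-style learning loop at the heart of such MAB-based offloading is compact enough to sketch. The cost model, candidate set, and constants below are illustrative assumptions, not the authors' QEVA-UCB algorithm; the sketch only shows how an upper-confidence rule trades off exploring offloading targets against exploiting the cheapest one.

import math, random

random.seed(0)

# Hypothetical offloading targets with unknown mean cost (weighted latency + energy).
true_mean_cost = [0.8, 0.5, 0.65]   # assumed values; lower is better
n_arms = len(true_mean_cost)
counts = [0] * n_arms
avg_cost = [0.0] * n_arms

def observe_cost(arm):
    # Stand-in for one real offloading round: a noisy latency/energy cost sample.
    return max(0.0, random.gauss(true_mean_cost[arm], 0.1))

for t in range(1, 2001):
    if t <= n_arms:                      # play each target once to initialize
        arm = t - 1
    else:                                # UCB1 adapted to costs: prefer low average
        arm = min(range(n_arms),         # cost minus an exploration bonus for
                  key=lambda a: avg_cost[a]        # rarely tried targets
                  - math.sqrt(2 * math.log(t) / counts[a]))
    c = observe_cost(arm)
    counts[arm] += 1
    avg_cost[arm] += (c - avg_cost[arm]) / counts[arm]

print("pulls per target:", counts)       # concentrates on the cheapest target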

25 pages, 705 KB  
Article
Privacy-Preserving Set Intersection Protocol Based on SM2 Oblivious Transfer
by Zhibo Guan, Hai Huang, Haibo Yao, Qiong Jia, Kai Cheng, Mengmeng Ge, Bin Yu and Chao Ma
Computers 2026, 15(1), 44; https://doi.org/10.3390/computers15010044 - 10 Jan 2026
Abstract
Private Set Intersection (PSI) is a fundamental cryptographic primitive in privacy-preserving computation and has been widely applied in federated learning, secure data sharing, and privacy-aware data analytics. However, most existing PSI protocols rely on RSA or standard elliptic curve cryptography, which limits their applicability in scenarios requiring domestic cryptographic standards and often leads to high computational and communication overhead when processing large-scale datasets. In this paper, we propose a novel PSI protocol based on the Chinese commercial cryptographic standard SM2, referred to as SM2-OT-PSI. The proposed scheme constructs an oblivious transfer-based Oblivious Pseudorandom Function (OPRF) using SM2 public-key cryptography and the SM3 hash function, enabling efficient multi-point OPRF evaluation under the semi-honest adversary model. A formal security analysis demonstrates that the protocol satisfies privacy and correctness guarantees assuming the hardness of the Elliptic Curve Discrete Logarithm Problem. To further improve practical performance, we design a software–hardware co-design architecture that offloads SM2 scalar multiplication and SM3 hashing operations to a domestic reconfigurable cryptographic accelerator (RSP S20G). Experimental results show that, for datasets with up to millions of elements, the presented protocol significantly outperforms several representative PSI schemes in execution time and communication efficiency, especially in medium- and high-bandwidth network environments. The proposed SM2-OT-PSI protocol provides a practical and efficient solution for large-scale privacy-preserving set intersection under national cryptographic standards, making it suitable for deployment in real-world secure computing systems.
(This article belongs to the Special Issue Mobile Fog and Edge Computing)
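
The overall OPRF-based PSI pattern can be illustrated in a few lines. The OPRF is abstracted here as a keyed hash that both parties can evaluate; in the actual protocol the receiver learns PRF values of its own elements through SM2-based oblivious transfer without ever seeing the key, and SM3 would replace the SHA-256 stand-in. All names and the HMAC substitution are assumptions of this sketch.

import hmac, hashlib

def oprf(key, x):
    # Abstract OPRF evaluation. In the real protocol the receiver obtains
    # oprf(key, x) for its own x obliviously, without learning key.
    return hmac.new(key, x.encode(), hashlib.sha256).digest()

sender_set = {"alice@a.com", "bob@b.com", "carol@c.com"}
receiver_set = {"bob@b.com", "dave@d.com", "carol@c.com"}

key = b"sender-secret-prf-key"          # held only by the sender

# Sender publishes PRF values of its elements (unlinkable to the inputs).
sender_tags = {oprf(key, x) for x in sender_set}

# Receiver obtains PRF values of its own elements obliviously (abstracted
# here) and intersects the tags locally.
intersection = {y for y in receiver_set if oprf(key, y) in sender_tags}
print(intersection)                     # {'bob@b.com', 'carol@c.com'}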

26 pages, 547 KB  
Article
A Two-Stage Multi-Objective Cooperative Optimization Strategy for Computation Offloading in Space–Air–Ground Integrated Networks
by He Ren and Yinghua Tong
Future Internet 2026, 18(1), 43; https://doi.org/10.3390/fi18010043 - 9 Jan 2026
Abstract
With the advancement of 6G networks, terrestrial centralized network architectures are evolving toward integrated space–air–ground frameworks, imposing higher requirements on the efficiency of computation offloading and multi-objective collaborative optimization. However, existing single-decision strategies in integrated space–air–ground networks struggle to coordinate delay and load balancing under energy tolerance constraints during task offloading. To address this challenge, this paper integrates communication transmission and computation models to design a two-stage computation offloading model and formulates a multi-objective optimization problem under energy tolerance constraints, with the primary objectives of minimizing overall system delay and improving network load balance. To solve this constrained problem efficiently, a two-stage computation offloading solution based on a Hierarchical Cooperative African Vulture Optimization Algorithm (HC-AVOA) is proposed. In the first stage, the task offloading ratio from ground devices to unmanned aerial vehicles (UAVs) is optimized; in the second stage, the task offloading ratio from UAVs to satellites is optimized. Through a hierarchical cooperative decision-making mechanism, dynamic and efficient task allocation is achieved. Simulation results show that the proposed method consistently keeps energy consumption within tolerance and outperforms PSO, WaOA, ABC, and ESOA, reducing average delay and load imbalance and demonstrating its superiority in multi-objective optimization.
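
The two-stage decision structure can be made concrete with a toy model. The delay, load-imbalance, and energy expressions below, and the random search standing in for HC-AVOA, are placeholder assumptions chosen only to show how the stage-1 ground-to-UAV ratios and the stage-2 UAV-to-satellite ratio couple under an energy budget.

import random
import statistics

random.seed(1)
tasks = [random.uniform(1.0, 4.0) for _ in range(8)]   # task sizes (illustrative)
CAP_GROUND, CAP_UAV, CAP_SAT = 1.0, 3.0, 6.0           # processing rates (assumed)
E_BUDGET = 12.0                                        # energy tolerance (assumed)
E_PER_UNIT = 0.7                                       # tx energy per offloaded unit

def evaluate(r1, r2):
    # Two transmission hops consume energy: ground->UAV, then UAV->satellite.
    ground = [w * (1 - a) for w, a in zip(tasks, r1)]
    uav_in = sum(w * a for w, a in zip(tasks, r1))
    uav, sat = uav_in * (1 - r2), uav_in * r2
    if E_PER_UNIT * (uav_in + sat) > E_BUDGET:         # energy-tolerance constraint
        return float("inf")
    loads = [g / CAP_GROUND for g in ground] + [uav / CAP_UAV, sat / CAP_SAT]
    delay, imbalance = max(loads), statistics.pstdev(loads)
    return delay + 0.5 * imbalance                     # weighted bi-objective

best = (evaluate([0.0] * len(tasks), 0.0), [0.0] * len(tasks), 0.0)
for _ in range(5000):                   # random search as optimizer placeholder
    r1 = [random.random() for _ in tasks]   # stage 1: ground -> UAV ratios
    r2 = random.random()                    # stage 2: UAV -> satellite ratio
    cost = evaluate(r1, r2)
    if cost < best[0]:
        best = (cost, r1, r2)
print("best weighted cost: %.3f, satellite ratio: %.2f" % (best[0], best[2]))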

24 pages, 4587 KB  
Article
A Comprehensive Physicochemical Analysis Focusing on the Characterization and Stability of Valsartan Silver Nano-Conjugates
by Abdul Qadir, Khwaja Suleman Hasan, Khair Bux, Khwaja Ali Hasan, Aamir Jalil, Asad Khan Tanoli, Khwaja Akbar Hasan, Shahida Naz, Muhammad Kashif, Nuzhat Fatima Zaidi, Ayesha Khan, Zeeshan Vohra, Herwig Ralf and Shama Qaiser
Int. J. Mol. Sci. 2026, 27(2), 582; https://doi.org/10.3390/ijms27020582 - 6 Jan 2026
Abstract
Valsartan (Val)—a lipophilic non-peptide angiotensin II type 1 receptor antagonist—is highly effective against hypertension but displays limited solubility in water (3.08 μg/mL), resulting in low oral bioavailability (23%). The limited water solubility of antihypertensive drugs can pose a challenge, particularly for rapid and precise administration. Herein, we synthesize and characterize valsartan-containing silver nanoparticles (Val-AgNPs) using Mangifera indica leaf extracts. The physicochemical, structural, thermal, and pharmacological properties of these nano-conjugates were established through various analytical and structural tools. The spectral shifts in both UV-visible and FTIR analyses indicate a successful interaction between the valsartan molecule and the silver nanoparticles. The resulting nano-conjugates are spherical and within the size range of 30–60 nm, as revealed by scanning electron microscopy-EDS and atomic force micrographs. The log-normal distribution of valsartan-loaded nanoparticles, with a size range of 30 to 60 nm and a mode of 54 nm, indicates a narrow, monodisperse, and highly uniform particle size distribution; this is a favorable characteristic for drug delivery systems, as it leads to enhanced bioavailability and consistent performance. Dynamic Light Scattering (DLS) analysis of the Val-AgNPs indicates a polydisperse sample with a tendency toward aggregation, resulting in larger effective sizes in suspension compared to individual nanoparticles. The accompanying decrease in zeta potential (to −19.5 mV) and conductivity further supports the conclusion that the surface chemistry and stability of the nanoparticles changed after conjugation. Differential scanning calorimetry (DSC) showed the melting onset of the valsartan component at 113.99 °C. The size-dependent densification of the silver nanoparticles at 286.24 °C corresponds to a size range of 40–60 nm, showing a significant melting point depression compared to bulk silver due to nanoscale effects. The shift in Rf from pure valsartan to Val-AgNPs suggests that the interaction with the AgNPs alters the compound's overall polarity and/or its interaction with the stationary phase, as complemented by HPTLC and HPLC analyses. The stability and offloading behavior of Val-AgNPs were observed at pH 6–10 and in 40% and 80% MeOH. In addition, Val-AgNPs did not induce hemolysis or significant alterations in blood cell indices, confirming the safety of the nano-conjugates for biological application. In conclusion, these findings provide a comprehensive characterization of Val-AgNPs, highlighting their potential for improved drug delivery applications.

22 pages, 1308 KB  
Article
From Edge Transformer to IoT Decisions: Offloaded Embeddings for Lightweight Intrusion Detection
by Frédéric Adjewa, Moez Esseghir and Leïla Merghem-Boulahia
Sensors 2026, 26(2), 356; https://doi.org/10.3390/s26020356 - 6 Jan 2026
Abstract
The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) is enabling a new class of intelligent applications. Specifically, Large Language Models (LLMs) are emerging as powerful tools not only for natural language understanding but also for enhancing IoT security. However, the integration of these computationally intensive models into resource-constrained IoT environments presents significant challenges. This paper provides an in-depth examination of how LLMs can be adapted to secure IoT ecosystems. We identify key application areas, discuss major challenges, and propose optimization strategies for resource-limited settings. Our primary contribution is a novel collaborative embeddings offloading mechanism for IoT intrusion detection named SEED (Semantic Embeddings for Efficient Detection). This system leverages a lightweight, fine-tuned BERT model, chosen for its proven contextual and semantic understanding of sequences, to generate rich network embeddings at the edge. A compact neural network deployed on the end device then queries these embeddings to assess network flow normality. This architecture alleviates the computational burden of running a full transformer on the device while capitalizing on its analytical performance. Our optimized BERT model is reduced by approximately 90% from its original size to roughly 41 MB, suitable for the edge; the resulting compact neural network is a mere 137 KB, appropriate for IoT devices. The system achieves 99.9% detection accuracy with an average inference time under 70 ms on a standard CPU. Finally, the paper discusses the ethical implications of LLM-IoT integration and evaluates the resilience of LLMs in dynamic and adversarial environments.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
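
The division of labor between the edge transformer and the on-device detector can be sketched with stand-in components. The random "embedding", the dimensions, and the tiny classifier below are placeholder assumptions for the fine-tuned BERT encoder and the ~137 KB model described in the abstract; only the split-inference shape is the point.

import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 128                      # assumed embedding width

def edge_embed(flow_tokens):
    # Placeholder for the fine-tuned BERT encoder running at the edge:
    # maps a tokenized network flow to a dense semantic embedding.
    h = rng.standard_normal((len(flow_tokens), EMB_DIM))
    return h.mean(axis=0)          # mean-pooled sequence embedding

# Tiny on-device classifier: one hidden layer, a few thousand parameters,
# consistent in spirit with a ~137 KB model (weights are random here).
W1, b1 = rng.standard_normal((EMB_DIM, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)

def device_score(embedding):
    h = np.maximum(0.0, embedding @ W1 + b1)          # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)[0]))    # P(flow is anomalous)

flow = ["TCP", "443", "SYN", "len=60"]  # toy tokenized flow
emb = edge_embed(flow)             # computed once at the edge, sent to device
print("anomaly score: %.3f" % device_score(emb))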

19 pages, 950 KB  
Article
Edge Microservice Deployment and Management Using SDN-Enabled Whitebox Switches
by Mohamad Rahhal, Lluis Gifre, Pablo Armingol Robles, Javier Mateos Najari, Aitor Zabala, Manuel Angel Jimenez, Rafael Leira Osuna, Raul Muñoz, Oscar González de Dios and Ricard Vilalta
Electronics 2026, 15(1), 246; https://doi.org/10.3390/electronics15010246 - 5 Jan 2026
Abstract
This work advances a 6G-ready, micro-granular SDN fabric that unifies high-performance edge data planes with intent-driven, multi-domain orchestration and cloud offloading. First, edge and cell-site whiteboxes are upgraded with Smart Network Interface Cards and embedded AI accelerators, enabling line-rate processing of data flows and on-box learning/inference directly in the data plane. This pushes functions such as traffic classification, telemetry, and anomaly mitigation to the point of ingress, reducing latency and backhaul load. Second, an SDN controller, ETSI TeraFlowSDN, is extended to deliver multi-domain SDN orchestration with native lifecycle management (LCM) for whitebox Network Operating Systems—covering onboarding, configuration-drift control, rolling upgrades/rollbacks, and policy-guarded compliance—so operators can reliably manage heterogeneous edge fleets at scale. Third, the SDN controller incorporates a new NFV-O client that seamlessly offloads network services—such as ML pipelines or NOS components—to telco clouds via an NFV orchestrator (e.g., ETSI Open Source MANO), enabling elastic placement and scale-out across the edge–cloud continuum. Together, these contributions deliver an open, programmable platform that couples in situ acceleration with closed-loop, intent-based orchestration and elastic cloud resources, targeting demonstrable gains in end-to-end latency, throughput, operational agility, and energy efficiency for emerging 6G services.
(This article belongs to the Special Issue Optical Networking and Computing)

19 pages, 684 KB  
Article
Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems
by Salman Khan, Ibrar Ali Shah, Woong-Kee Loh, Javed Ali Khan, Alexios Mylonas and Nikolaos Pitropakis
Sensors 2026, 26(1), 348; https://doi.org/10.3390/s26010348 - 5 Jan 2026
Abstract
Fog computing has revolutionized service delivery by providing resources close to user premises, reducing communication latency for many real-time applications. This latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response times. Applications such as smart transportation, smart healthcare, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive and require quick response times; in certain critical healthcare applications, response delays can cause serious harm to patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to its close proximity to end users, fog computing is considered the most suitable computing platform for real-time applications. However, fog devices are resource-constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. The proposed technique was evaluated through extensive simulations in the iFogSim toolkit in terms of system response time, execution cost, and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.
(This article belongs to the Section Sensor Networks)
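
The PSO core that such a scheduler modifies is small enough to show in full. The sketch below is plain (unmodified) PSO minimizing makespan for a toy task-to-fog-node assignment; the task lengths, node speeds, and coefficients are illustrative assumptions, not the paper's MPSO variant or its iFogSim setup.

import random

random.seed(2)
TASKS = [4, 7, 2, 9, 5, 3]          # task lengths (illustrative MI)
NODES = [1.0, 2.0, 1.5]             # fog node speeds (illustrative MIPS)

def makespan(pos):
    # Decode continuous positions to assignments: task i -> node round(pos[i]).
    finish = [0.0] * len(NODES)
    for length, p in zip(TASKS, pos):
        n = min(len(NODES) - 1, max(0, int(round(p))))
        finish[n] += length / NODES[n]
    return max(finish)

W, C1, C2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients
swarm = [[random.uniform(0, len(NODES) - 1) for _ in TASKS] for _ in range(20)]
vel = [[0.0] * len(TASKS) for _ in range(20)]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=makespan)[:]

for _ in range(200):
    for i, p in enumerate(swarm):
        for d in range(len(TASKS)):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - p[d])
                         + C2 * r2 * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if makespan(p) < makespan(pbest[i]):
            pbest[i] = p[:]
        if makespan(p) < makespan(gbest):
            gbest = p[:]

print("best makespan: %.2f" % makespan(gbest))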

42 pages, 5531 KB  
Article
DRL-TinyEdge: Energy- and Latency-Aware Deep Reinforcement Learning for Adaptive TinyML at the 6G Edge
by Saad Alaklabi and Saleh Alharbi
Future Internet 2026, 18(1), 31; https://doi.org/10.3390/fi18010031 - 4 Jan 2026
Abstract
TinyML models face a constantly challenging environment when running on emerging sixth-generation (6G) edge networks, with volatile wireless conditions, limited computing power, and highly constrained energy budgets. This paper introduces DRL-TinyEdge, a latency- and energy-aware deep reinforcement learning (DRL) platform optimised for adaptive TinyML at the 6G edge. The proposed on-device DRL controller autonomously decides the execution venue (local, partial, or cloud) and model configuration (depth, quantization, and frequency) in real time to trade off accuracy, latency, and power savings. To ensure safety while adapting to changing conditions, the multi-objective reward combines p95 latency, per-inference energy, accuracy preservation, and policy stability. The system is tested under two representative workloads, image classification (CIFAR-10) and sensor analytics in an industrial IoT setting, on low-power platforms (ESP32, Jetson Nano) connected to a simulated 6G mmWave testbed. Findings indicate uniform improvements: up to a 28 per cent decrease in p95 latency, a 43 per cent decrease in energy per inference, and accuracy differences of less than 1 per cent compared to baseline models. DRL-TinyEdge offers better adaptability, stability, and scalability than Static-Offload, Heuristic-QoS, or TinyNAS/QAT, with CPU overhead below 5 per cent and decision latency under 10 ms. Code, hyperparameter settings, and measurement programmes will be published upon acceptance to enable reproducibility and open benchmarking.
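
A reward of this shape is straightforward to write down. The weights and normalizers below are illustrative assumptions, not the paper's tuned values; the sketch only shows how the four reward terms named in the abstract can be combined into one scalar signal.

def reward(p95_latency_ms, energy_mj, accuracy, prev_accuracy, policy_changed):
    # Hedged sketch of a multi-objective DRL reward combining p95 latency,
    # per-inference energy, accuracy preservation, and policy stability.
    # All weights and normalizers are assumptions for illustration.
    r_latency = -p95_latency_ms / 100.0          # penalize tail latency
    r_energy = -energy_mj / 10.0                 # penalize per-inference energy
    r_accuracy = -10.0 * max(0.0, prev_accuracy - accuracy)  # guard accuracy drops
    r_stability = -0.2 if policy_changed else 0.0            # discourage thrashing
    return 1.0 * r_latency + 0.8 * r_energy + r_accuracy + r_stability

# Example: a config switch that saves energy at a small accuracy cost.
print(reward(p95_latency_ms=42.0, energy_mj=3.1,
             accuracy=0.912, prev_accuracy=0.918, policy_changed=True))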

22 pages, 3874 KB  
Article
Cloud-Edge Collaboration-Based Data Processing Method for Distribution Terminal Unit Edge Clusters
by Ruijiang Zeng, Zhiyong Li, Sifeng Li, Jiahao Zhang and Xiaomei Chen
Energies 2026, 19(1), 269; https://doi.org/10.3390/en19010269 - 4 Jan 2026
Abstract
Distribution terminal units (DTUs) play critical roles in the smart grid, supporting data acquisition, remote monitoring, and fault management. A single DTU generates continuous data streams, imposing new challenges on data processing. To tackle these issues, a cloud-edge collaboration-based data processing method is introduced for DTU edge clusters. First, considering the load imbalance degree of DTU data queues, a cloud-edge integrated data processing architecture is designed. It optimizes edge server selection, the offloading splitting ratio, and edge-cloud computing resource allocation through a collaboration mechanism. Second, an optimization problem is formulated to maximize the weighted difference between the total data processing volume and the load imbalance degree. Next, a cloud-edge collaboration-based data processing method is proposed. In the first stage, cloud-edge collaborative data offloading is performed based on the load imbalance degree, and a data volume-aware deep Q-network (DQN) is developed. A penalty function based on load fluctuations and the data volume deficit is incorporated; it drives the DQN to suppress fluctuations of the load imbalance degree while ensuring differentiated long-term data volume constraints. In the second stage, cloud-edge computing resource allocation based on adaptive differential evolution is designed. An adaptive mutation scaling factor is introduced to overcome the gene overlapping issues of traditional heuristic approaches, enabling deeper exploration of the solution space and accelerating global optimum identification. Finally, simulation results demonstrate that the proposed method effectively improves the data processing efficiency of DTUs while reducing the load imbalance degree.
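
The adaptive differential-evolution step in the second stage can be sketched generically. The linear decay schedule for the scaling factor F and the quadratic toy objective are assumptions; the paper's exact adaptation rule may differ, but the mutate-crossover-select loop is standard DE.

import random

random.seed(3)

def de_step(pop, fitness, gen, max_gen, f_max=0.9, f_min=0.4, cr=0.8):
    # One generation of differential evolution with an adaptive mutation
    # scaling factor F that decays from f_max to f_min over the run
    # (illustrative schedule standing in for the paper's adaptive factor).
    F = f_max - (f_max - f_min) * gen / max_gen
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        trial = [mutant[d] if random.random() < cr else x[d] for d in range(dim)]
        new_pop.append(trial if fitness(trial) < fitness(x) else x)
    return new_pop

# Toy use: allocate compute shares to 4 servers, minimizing a quadratic proxy.
fit = lambda x: sum((xi - 0.25) ** 2 for xi in x)
pop = [[random.random() for _ in range(4)] for _ in range(15)]
for g in range(100):
    pop = de_step(pop, fit, g, 100)
print("best fitness: %.6f" % min(fit(x) for x in pop))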

20 pages, 1602 KB  
Article
Low-Latency Oriented Joint Data Compression and Resource Allocation in NOMA-MEC Networks: A Deep Reinforcement Learning Approach
by Fangqing Tan, Yu Zeng, Chao Lan and Zou Zhou
Sensors 2026, 26(1), 285; https://doi.org/10.3390/s26010285 - 2 Jan 2026
Abstract
To alleviate communication pressure and terminal resource constraints in mobile edge computing (MEC) networks, this paper proposes a resource allocation optimization method for MEC systems that integrates data compression and non-orthogonal multiple access. The method considers practical constraints such as terminal device battery capacity and computational resource limitations. By jointly optimizing computational resource allocation, task offloading strategies, and data compression ratios, it constructs an optimization model aimed at minimizing the total task processing latency. To address the non-convex nature of the problem and the dynamic variations in channel conditions and task requirements, this paper proposes a softmax deep double deterministic policy gradient algorithm, in which the softmax operator mitigates both the overestimation and underestimation biases inherent in traditional reinforcement learning frameworks, enhancing convergence performance. Within a deep reinforcement learning framework, the algorithm achieves joint decision-making over computational resources, task offloading, and compression ratios, minimizing the total task processing latency while satisfying transmit power and computational resource constraints. Simulation results demonstrate that the proposed scheme exhibits significant advantages over benchmark algorithms in convergence speed and task processing latency.
(This article belongs to the Section Communications)
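
The softmax value operator that replaces a hard max in such backups can be shown in isolation. The temperature beta and the Q-values are illustrative; the operator itself is the standard Boltzmann-weighted average that interpolates between the mean (beta near 0) and the max (beta large), which is what tempers the overestimation of a hard max.

import math

def softmax_value(q_values, beta=5.0):
    # Boltzmann-weighted average of Q-values: lies between mean and max,
    # reducing the overestimation bias of a hard max in value backups.
    m = max(q_values)                                  # for numerical stability
    w = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(w)
    return sum(wi / z * q for wi, q in zip(w, q_values))

q = [1.0, 1.2, 0.7]
print(max(q), softmax_value(q, beta=5.0), sum(q) / len(q))
# hard max >= softmax value >= mean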

22 pages, 7712 KB  
Article
Adaptive Edge Intelligent Joint Optimization of UAV Computation Offloading and Trajectory Under Time-Varying Channels
by Jinwei Xie and Dimin Xie
Drones 2026, 10(1), 21; https://doi.org/10.3390/drones10010021 - 31 Dec 2025
Abstract
With the rapid development of mobile edge computing (MEC) and unmanned aerial vehicle (UAV) communication networks, UAV-assisted edge computing has emerged as a promising paradigm for low-latency and energy-efficient computation. However, the time-varying nature of air-to-ground channels and the coupling between UAV trajectories and computation offloading decisions significantly increase system complexity. To address these challenges, this paper proposes an Adaptive UAV Edge Intelligence Framework (AUEIF) for joint UAV computation offloading and trajectory optimization under dynamic channels. Specifically, a dynamic graph-based system model is constructed to characterize the spatio-temporal correlation between UAV motion and channel variations. A hierarchical reinforcement learning-based optimization framework is developed, in which a high-level actor–critic module generates coarse-grained UAV flight trajectories, while a low-level deep Q-network performs fine-grained optimization of task offloading ratios and computational resource allocation in real time. In addition, an adaptive channel prediction module leveraging long short-term memory (LSTM) networks is integrated to model temporal channel state transitions and to assist policy learning and updates. Extensive simulation results demonstrate that the proposed AUEIF achieves significant improvements in end-to-end latency, energy efficiency, and overall system stability compared with conventional deep reinforcement learning approaches and heuristic-based schemes, while exhibiting strong robustness against dynamic and fluctuating wireless channel conditions.
(This article belongs to the Special Issue Advances in AI Large Models for Unmanned Aerial Vehicles)
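
An LSTM channel predictor of the kind named here can be sketched in a few lines of PyTorch. The one-step-ahead framing, window length, hidden size, and the sinusoidal toy channel are all assumptions of this sketch, not the paper's model or data.

import torch
import torch.nn as nn

class ChannelPredictor(nn.Module):
    # Minimal LSTM mapping a window of past channel gains to the next gain.
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next channel gain

torch.manual_seed(0)
t = torch.linspace(0, 20, 400)
gains = (0.6 + 0.3 * torch.sin(t)).unsqueeze(-1)       # toy time-varying channel
window = 16
X = torch.stack([gains[i:i + window] for i in range(len(t) - window)])
y = gains[window:]

model = ChannelPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("train MSE: %.5f" % loss.item())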

29 pages, 1050 KB  
Article
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC
by Wukjae Cha, Hyang Jin Lee, Sangjin Kook, Keunok Kim and Dongho Won
Sensors 2026, 26(1), 217; https://doi.org/10.3390/s26010217 - 29 Dec 2025
Abstract
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev–Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash operations, achieving an average authentication latency below 15 ms, sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contributions include a hybrid PUF–cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
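
The challenge-response core of PUF-based device authentication can be sketched with a simulated PUF. The HMAC simulation, the deterministic response, and the key derivation below are illustrative stand-ins: a real PUF response is noisy hardware output that must first pass through a fuzzy extractor (BCH-based in this paper), and this sketch is not the PXRA protocol itself.

import hmac, hashlib, os, secrets

DEVICE_SECRET = os.urandom(32)   # stands in for intrinsic silicon variation

def puf(challenge):
    # Simulated PUF: deterministic here; a real response is noisy and is
    # corrected by a fuzzy extractor before use.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: the server stores challenge-response pairs in a secure database.
crp_db = {}
for _ in range(4):
    c = secrets.token_bytes(16)
    crp_db[c] = puf(c)

# Authentication: the server issues a stored challenge plus a fresh nonce;
# the device proves possession of the PUF without revealing the raw response.
challenge, expected = next(iter(crp_db.items()))
nonce = secrets.token_bytes(16)
device_proof = hmac.new(puf(challenge), nonce, hashlib.sha256).digest()
server_check = hmac.new(expected, nonce, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(device_proof, server_check))

# Both sides can then derive a shared session key from response and nonce.
session_key = hashlib.sha256(expected + nonce).digest()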

32 pages, 907 KB  
Article
Performance Analysis of Uplink Opportunistic Scheduling for Multi-UAV-Assisted Internet of Things
by Long Suo, Zhichu Zhang, Lei Yang and Yunfei Liu
Drones 2026, 10(1), 18; https://doi.org/10.3390/drones10010018 - 28 Dec 2025
Abstract
Due to their high mobility, flexibility, and low cost, unmanned aerial vehicles (UAVs) provide an efficient way to provision data communication and computation offloading services for massive Internet of Things (IoT) devices, especially in remote areas with limited infrastructure. However, current transmission schemes for UAV-assisted IoT (UAV-IoT) predominantly employ polling scheduling, and thus do not fully exploit the potential multiuser diversity gains offered by a vast number of IoT nodes. Furthermore, conventional opportunistic scheduling (OS) or opportunistic beamforming techniques are predominantly designed for downlink transmission scenarios; applied directly to uplink IoT data transmission, these methods can incur excessive uplink training overhead. To address these issues, this paper first proposes a low-overhead multi-UAV uplink OS framework based on channel reciprocity. To avoid explicit massive uplink channel estimation, two scheduling criteria are designed: minimum downlink interference (MDI) and maximum downlink signal-to-interference-plus-noise ratio (MD-SINR). Second, for a dual-UAV deployment over Rayleigh block fading channels, we derive closed-form expressions for both the average sum rate and the asymptotic sum rate under the MDI criterion. A degrees-of-freedom (DoF) analysis demonstrates that when the number of sensors K scales as ρ^α, the system can achieve a total of 2α DoF, where α ∈ (0,1] is the user-scaling factor and ρ is the transmit signal-to-noise ratio (SNR). Third, for a three-UAV deployment, the Gamma distribution is employed to approximate the uplink interference, yielding a tractable expression for the average sum rate. Simulations confirm the accuracy of the performance analysis for both dual- and three-UAV deployments; the normalized error between theoretical and simulation results falls below 1% for K > 30. Furthermore, the impact of fading severity on the system's sum rate and DoF performance is systematically evaluated via simulations under Nakagami-m fading channels. The results indicate that more severe fading (a smaller m) yields greater multiuser diversity gain. Both the theoretical and simulation results consistently show that, in the medium-to-high SNR regime, the dual-UAV deployment outperforms both single-UAV and three-UAV schemes in both Rayleigh and Nakagami-m channels. This study provides a theoretical foundation for the adaptive deployment and scheduling design of UAV-assisted IoT uplink systems under various fading environments.
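
The MDI idea can be stated in a few lines: exploiting channel reciprocity, each UAV schedules the node in its cell whose channel to the other (interfering) UAV is weakest, so the scheduled uplink leaks the least interference. The dual-UAV Rayleigh setup, gains, and SNR below are illustrative assumptions for a toy round of this criterion.

import numpy as np

rng = np.random.default_rng(0)
K = 50                               # IoT nodes per UAV cell (illustrative)

# g[u, v, k]: reciprocal Rayleigh channel power between UAV u and node k of
# cell v (exponential squared-magnitude under Rayleigh fading).
g = rng.exponential(scale=1.0, size=(2, 2, K))

# MDI: each UAV v schedules its node with the weakest channel to the other UAV.
sched = [int(np.argmin(g[1 - v, v])) for v in (0, 1)]

rho = 10.0                           # transmit SNR (illustrative)
# Uplink SINR at UAV 0: desired signal from its scheduled node, interference
# from the node scheduled in the neighboring cell.
sinr0 = rho * g[0, 0, sched[0]] / (1.0 + rho * g[0, 1, sched[1]])
print("scheduled nodes:", sched, "UAV-0 SINR: %.2f" % sinr0)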

22 pages, 2232 KB  
Article
A Dynamic Offloading Strategy Based on Optimal Stopping Theory in Vehicle-to-Vehicle Communication Scenarios
by An Li, Jiaxuan Ling, Yeqiang Zheng, Mingliang Chen and Gaocai Wang
Future Internet 2026, 18(1), 18; https://doi.org/10.3390/fi18010018 - 28 Dec 2025
Abstract
With massive numbers of devices seeking access, high-speed mobile vehicles may travel far beyond the communication range of the current edge node, leading to a significant increase in communication latency and energy consumption. To ensure effective task execution for mobile vehicles under high-speed conditions, this paper treats intelligent vehicles as edge nodes and establishes a dynamic offloading model for Vehicle-to-Vehicle (V2V) scenarios. A dynamic task offloading strategy based on optimal stopping theory (OST) is proposed to minimize the overall latency generated during the offloading process while ensuring effective task execution. By analyzing the potential migration paths of tasks in V2V scenarios, we construct a dynamic migration model and design a migration benefit function, transforming the problem into an asset-selling problem in OST. We also prove that an optimal stopping rule exists for this problem. Finally, the optimal migration threshold is determined by solving the optimal stopping rule through dynamic programming, guiding the task vehicle to choose the best target service vehicle. Comparisons between the proposed TMS-OST strategy and three other peer offloading strategies show that TMS-OST significantly reduces total offloading latency, selects closer service vehicles using fewer detection attempts, guarantees service quality while lowering detection costs, and achieves high average offloading efficiency and average offloading distance efficiency.
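
The asset-selling structure can be made concrete with a textbook backward induction. The uniform "offer" distribution (here, a candidate service vehicle's benefit score) and the per-probe cost are illustrative assumptions standing in for the paper's migration benefit function; the threshold rule itself is the classical one.

def optimal_thresholds(n_candidates=10, probe_cost=0.02):
    # Backward induction for asset selling with offers ~ Uniform(0,1) and a
    # cost per probe. V[n] is the expected value of continuing with n
    # candidates left; accept a candidate with benefit x when x >= V[n-1].
    # For X ~ U(0,1): E[max(X, v)] = (1 + v**2) / 2.
    V = [0.0] * (n_candidates + 1)
    V[1] = 0.5 - probe_cost                     # must take the last candidate
    for n in range(2, n_candidates + 1):
        V[n] = (1.0 + V[n - 1] ** 2) / 2.0 - probe_cost
    return V

V = optimal_thresholds()
# Thresholds shrink as fewer candidate vehicles remain: be picky early,
# settle later.
print(["%.3f" % v for v in V[1:]])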

24 pages, 3856 KB  
Article
MA-PF-AD3PG: A Multi-Agent DRL Algorithm for Latency Minimization and Fairness Optimization in 6G IoV-Oriented UAV-Assisted MEC Systems
by Yitian Wang, Hui Wang and Haibin Yu
Drones 2026, 10(1), 9; https://doi.org/10.3390/drones10010009 - 25 Dec 2025
Abstract
The rapid proliferation of connected and autonomous vehicles in the 6G era demands ultra-reliable, low-latency computation with intelligent resource coordination. Unmanned Aerial Vehicle (UAV)-assisted Mobile Edge Computing (MEC) provides a flexible and scalable solution to extend coverage and enhance offloading efficiency in dynamic Internet of Vehicles (IoV) environments. However, jointly optimizing task latency, user fairness, and service priority under time-varying channel conditions remains a fundamental challenge. To address this issue, this paper proposes a novel Multi-Agent Priority-based Fairness Adaptive Delayed Deep Deterministic Policy Gradient (MA-PF-AD3PG) algorithm for UAV-assisted MEC systems. An occlusion-aware dynamic deadline model is first established to capture real-time link blockage and channel fading. Based on this model, a priority–fairness coupled optimization framework is formulated to jointly minimize overall latency and balance service fairness across heterogeneous vehicular tasks. To efficiently solve this NP-hard problem, the proposed MA-PF-AD3PG integrates fairness-aware service preprocessing and an adaptive delayed update mechanism within a multi-agent deep reinforcement learning structure, enabling decentralized yet coordinated UAV decision-making. Extensive simulations demonstrate that MA-PF-AD3PG achieves superior convergence stability, 13–57% higher total rewards, up to 46% lower delay, and nearly perfect fairness compared with state-of-the-art Deep Reinforcement Learning (DRL) and heuristic methods.
(This article belongs to the Section Drone Communications)
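
Fairness in such reward designs is commonly quantified with Jain's index, which is likely what "nearly perfect fairness" refers to. The sketch below shows the standard index plus a simple priority-weighted variant; the weighting is an illustrative assumption, not the paper's exact formulation.

def jain_index(allocations):
    # Jain's fairness index: 1.0 when all users receive equal service,
    # approaching 1/n under maximal unfairness.
    n = len(allocations)
    s = sum(allocations)
    return s * s / (n * sum(x * x for x in allocations)) if s else 0.0

def priority_weighted_fairness(service, priority):
    # Normalize each user's service by its priority before scoring, so
    # high-priority tasks are expected to receive more (assumed variant).
    return jain_index([s / p for s, p in zip(service, priority)])

service = [3.0, 1.0, 2.0, 2.1]       # per-vehicle served workload (toy)
priority = [3.0, 1.0, 2.0, 2.0]      # per-vehicle task priority (toy)
print("raw fairness: %.3f" % jain_index(service))
print("priority-weighted: %.3f" % priority_weighted_fairness(service, priority))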