Search Results (227)

Search Parameters:
Keywords = user equipment (UE)

19 pages, 1887 KiB  
Review
Comparative Analysis of Beamforming Techniques and Beam Management in 5G Communication Systems
by Cristina Maria Andras, Gordana Barb and Marius Otesteanu
Sensors 2025, 25(15), 4619; https://doi.org/10.3390/s25154619 - 25 Jul 2025
Abstract
The advance of 5G technology marks a significant evolution in wireless communications, characterized by ultra-high data rates, low latency, and massive connectivity across varied areas. A fundamental enabler of these capabilities is beamforming, an advanced signal processing technique that focuses radio energy toward a specific user equipment (UE), thereby enhancing signal quality, which is crucial for maximizing spectral efficiency. The work presents a classification of beamforming techniques, categorized according to their implementation within 5G New Radio (NR) architectures. Furthermore, the paper investigates beam management (BM) procedures, the essential Layer 1 and Layer 2 mechanisms responsible for the dynamic configuration, monitoring, and maintenance of optimal beam pair links between gNodeBs and UEs. The article also examines spectrograms of Synchronization Signal Blocks (SSBs) generated under various deployment scenarios, illustrating how parameters such as subcarrier spacing (SCS), frequency band, and the number of SSBs influence spectral occupancy and synchronization performance. These results provide insight into how different configurations affect the synchronization signals' temporal and spectral occupancy, which directly affects initial access, cell identification, and energy efficiency, and they offer a technical foundation for optimizing initial access and beam tracking in high-frequency 5G deployments, particularly within Frequency Range 2 (FR2). Full article
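The beamforming principle summarized in this abstract can be illustrated with a minimal sketch (not code from the paper): conjugate (matched-filter) beamforming on a uniform linear array, where the array size, element spacing, and UE angle are all assumptions chosen for illustration. The gain peaks when the beam is steered at the UE's direction.

```python
import numpy as np

# Minimal conjugate-beamforming sketch on a uniform linear array (ULA).
# All parameters (8 antennas, half-wavelength spacing, UE at 30 degrees)
# are illustrative assumptions, not values from the paper.
def steering_vector(n_antennas, theta, d_over_lambda=0.5):
    """ULA steering vector for a plane wave at angle theta (radians)."""
    k = np.arange(n_antennas)
    return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))

def beamforming_gain(weights, theta):
    """Linear array gain in direction theta for the given weights."""
    a = steering_vector(len(weights), theta)
    return np.abs(np.vdot(weights, a)) ** 2

n = 8
ue_angle = np.deg2rad(30)
w = steering_vector(n, ue_angle) / np.sqrt(n)  # point the beam at the UE

print(beamforming_gain(w, ue_angle))           # peak gain equals n
print(beamforming_gain(w, np.deg2rad(-45)))    # much lower off the beam
```

Matching the weights to the UE's steering vector concentrates the radiated energy on that UE, which is exactly the signal-quality benefit the abstract describes.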

18 pages, 1643 KiB  
Communication
A Localization Enhancement Method Based on Direct-Path Identification and Tracking for Future Networks
by Yuhong Huang and Youping Zhao
Sensors 2025, 25(15), 4538; https://doi.org/10.3390/s25154538 - 22 Jul 2025
Abstract
Localization is one of the essential problems in the Internet of Things (IoT). Dynamic changes in the radio environment may lead to poor localization accuracy or discontinuous localization in non-line-of-sight (NLOS) scenarios. To address this problem, this paper proposes a localization enhancement method based on direct-path identification and tracking. More specifically, the proposed method significantly reduces the ranging error and localization error by quickly identifying the line-of-sight (LOS) to NLOS transition and effectively tracking the direct path. Localization experiments based on ultra-wideband (UWB) signals were carried out in a large testing hall. Experimental results show that the proposed method achieves a root mean square localization error of less than 0.3 m along the user equipment (UE) movement trajectory under serious NLOS propagation conditions. Compared with conventional methods, the proposed method significantly improves localization accuracy while ensuring continuous localization. Full article

21 pages, 4537 KiB  
Article
Evaluation of 5G Positioning Based on Uplink SRS and Downlink PRS Under LOS and NLOS Environments
by Syed Shahid Shah, Chao Sun, Dongkai Yang, Muhammad Wisal, Yingzhe He, Bai Lu and Ying Xu
Appl. Sci. 2025, 15(14), 7909; https://doi.org/10.3390/app15147909 - 15 Jul 2025
Abstract
The evolution of 5G technology has led to significant advancements in high-accuracy positioning. However, the actual performance of 5G signals for user equipment (UE) positioning has not been thoroughly examined, especially under varying propagation conditions. This research presents a comprehensive evaluation of 5G positioning using both uplink sounding reference signals (UL-SRS) and downlink positioning reference signals (DL-PRS) under line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. In the uplink scenario, the UE transmits SRS signals to the gNBs, enabling precise localization. In the downlink scenario, the gNBs transmit PRS signals to the UE for accurate position estimation. Expanding beyond LOS environments, this study explores the challenges posed by NLOS conditions and analyzes their impact on positioning accuracy. Through a comparative analysis of UL-SRS and DL-PRS signals, this study enhances the current understanding of 5G positioning performance, offering empirical insights and quantitative benchmarks that serve as a guide for the development of more precise localization methods. The simulation results show that DL-PRS achieves high accuracy in LOS conditions, while UL-SRS performs well for UE positioning under NLOS conditions in urban environments. Full article
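As a hedged illustration of how range-based positioning like that evaluated above resolves a UE position (this is textbook multilateration, not the paper's evaluation pipeline), the sketch below computes a linearized least-squares fix from noise-free ranges to four hypothetical gNBs; all coordinates are made-up values in metres.

```python
import numpy as np

# Linearized least-squares position fix from ranges (generic multilateration).
# gNB layout and UE location are illustrative assumptions; ranges are
# noise-free so the fix is exact.
gnbs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
ue_true = np.array([30.0, 70.0])
ranges = np.linalg.norm(gnbs - ue_true, axis=1)

def toa_least_squares(anchors, d):
    """Subtract the first anchor's range equation from the rest to cancel
    the quadratic terms, leaving a linear system A x = b in the position x."""
    x0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - x0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

est = toa_least_squares(gnbs, ranges)
print(est)  # recovers the true position with noise-free ranges
```

Under NLOS conditions the measured ranges acquire positive biases, which is why the LOS/NLOS distinction studied in the paper matters so much for accuracy.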

27 pages, 3015 KiB  
Article
Intelligent Handover Decision-Making for Vehicle-to-Everything (V2X) 5G Networks
by Faiza Rashid Ammar Al Harthi, Abderezak Touzene, Nasser Alzidi and Faiza Al Salti
Telecom 2025, 6(3), 47; https://doi.org/10.3390/telecom6030047 - 2 Jul 2025
Abstract
Fifth-generation Vehicle-to-Everything (V2X) networks have ushered in a new set of challenges that negatively affect seamless connectivity, owing chiefly to high user equipment (UE) mobility and high density. As a UE accelerates, it transitions frequently from one cell to another, and the resulting handovers (HOs) degrade network performance metrics, including latency, energy consumption, and packet loss. Traditional HO mechanisms fail to handle such network conditions, motivating the development of Intelligent HO Decisions for V2X (IHD-V2X). By leveraging Q-learning, the intelligent mechanism adapts to real-time network congestion and varying UE speeds, resulting in efficient handover decisions. Based on the results, IHD-V2X significantly outperforms the other mechanisms in high-density and high-mobility networks: it reduces unnecessary handover operations by 73%, cuts effective energy consumption by 18%, improves the success rate of necessary handovers by 80%, lowers packet loss for high-mobility UEs by 73%, and lowers latency by 22% while meeting application-specific requirements. The proposed intelligent approach is particularly effective in high-mobility situations and ultra-dense networks, where excessive handovers can degrade user experience. Full article
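The Q-learning idea behind an intelligent handover decision can be sketched in a few lines. The toy model below (states, actions, rewards, and transitions are all illustrative assumptions, not the paper's model) learns to hand over only when the serving cell's signal is poor, avoiding ping-pong handovers from good or fair cells.

```python
import random

# Toy tabular Q-learning for a handover decision. States are coarse serving-
# cell signal levels; actions are {stay, handover}. Rewards and transitions
# are made up: handing over from a poor cell is rewarded, handing over
# unnecessarily (ping-pong) is penalized.
states = ["good", "fair", "poor"]
actions = ["stay", "handover"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.1, 0.9

def reward(s, a):
    if s == "poor":
        return 1.0 if a == "handover" else -1.0   # HO avoids a dropped link
    return 1.0 if a == "stay" else -0.5           # unnecessary (ping-pong) HO

def step(s, a):
    return "good" if a == "handover" else random.choice(states)

random.seed(0)
for _ in range(5000):
    s = random.choice(states)
    a = random.choice(actions)                    # pure exploration
    s_next = step(s, a)
    td_target = reward(s, a) + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)  # learned policy: hand over only from the poor state
```

The real scheme additionally conditions on congestion and UE speed, but the update rule is the same temporal-difference step shown in the loop.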

13 pages, 8706 KiB  
Article
Experimental Studies on Low-Latency RIS Beam Tracking: Edge-Integrated and Visually Steered
by Zekai Wang and Yuming Nie
Network 2025, 5(3), 22; https://doi.org/10.3390/network5030022 - 1 Jul 2025
Abstract
In this study, to address the problems of high feedback latency and redundant codebook traversal in traditional Reconfigurable Intelligent Surface (RIS) beam tracking systems, two novel experimental schemes are proposed: the Edge-Integrated RIS Control Mechanism (EIR-CM) and the Visually Steered RIS Control Mechanism (VSR-CM). The EIR-CM eliminates the feedback latency of the remote server and optimizes the local computation by integrating the RIS control system and the User Equipment (UE) into the same edge server to reduce the beam tuning time by 50%. The VSR-CM realizes beam tracking based on visual perception, and directly maps the UE position to the optimal RIS codebook with a response speed as low as milliseconds. Experimental results show that the EIR-CM reduces the RIS feedback latency to 1–2 s, and the VSR-CM can be further optimized to less than 0.5 s. The two mechanisms are applicable to 6G communications, smart transport, and drone networks, providing feasibility verification for low-latency and efficient RIS deployment. Full article
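The "directly maps the UE position to the optimal RIS codebook" step of the VSR-CM can be pictured as a simple lookup. The sketch below is a hypothetical stand-in, not the paper's implementation: it quantizes the RIS-to-UE azimuth onto the nearest beam of a precomputed codebook, with the codebook size and angular span invented for illustration.

```python
import math

# Hypothetical position-to-codeword lookup for visually steered RIS control.
# Codebook size and the -60..+60 degree span are illustrative assumptions.
N_CODEWORDS = 64

def position_to_codeword(ue_x, ue_y, ris_x=0.0, ris_y=0.0):
    """Quantize the RIS-to-UE azimuth onto the nearest codebook beam index."""
    angle = math.degrees(math.atan2(ue_x - ris_x, ue_y - ris_y))  # broadside = 0
    angle = max(-60.0, min(60.0, angle))                          # clamp to span
    return round((angle + 60.0) / 120.0 * (N_CODEWORDS - 1))

print(position_to_codeword(0.0, 5.0))   # UE at broadside: middle of codebook
print(position_to_codeword(5.0, 5.0))   # UE at +45 degrees: upper beams
```

Because the lookup involves no channel feedback, its latency is bounded by the vision pipeline and the RIS reconfiguration time, which is the source of the sub-second response the abstract reports.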
(This article belongs to the Special Issue Advances in Wireless Communications and Networks)

19 pages, 744 KiB  
Article
Three-Dimensional Trajectory Optimization for UAV-Based Post-Disaster Data Collection
by Renkai Zhao and Gia Khanh Tran
J. Sens. Actuator Netw. 2025, 14(3), 63; https://doi.org/10.3390/jsan14030063 - 16 Jun 2025
Abstract
In Japan, natural disasters occur frequently. Serious disasters may damage traffic networks and telecommunication infrastructure, leading to isolated disaster areas. In this article, unmanned aerial vehicles (UAVs) are used for data collection in place of the unavailable ground-based stations in isolated disaster areas. Detailed information about the damage situation is collected from the user equipment (UE) by a UAV through a fly–hover–fly procedure and then sent to the disaster response headquarters for disaster relief. However, minimizing the mission completion time becomes a crucial task, considering the requirement of rapid response and the battery constraints of UAVs. Therefore, the authors propose a three-dimensional UAV flight trajectory, determining the optimal flight altitude and placement of hovering points by transforming the original K-means clustering problem into a location set cover problem (LSCP) that can be solved via a genetic algorithm (GA) approach. The simulation results show the feasibility of the proposed method in reducing the mission completion time. Full article
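The location set cover step described above can be made concrete with a greedy sketch: choose the fewest candidate hovering points such that every UE lies within communication range R of some chosen point. The paper solves the LSCP with a genetic algorithm; greedy selection shown here is the classical simpler approximation, and all coordinates and the range R are made up for illustration.

```python
import math

# Greedy set-cover stand-in for placing UAV hovering points over UEs.
# UE positions, candidate points, and range R are illustrative assumptions.
ue_positions = [(1.0, 1.0), (2.0, 1.0), (8.0, 8.0), (9.0, 7.0), (5.0, 5.0)]
candidates = [(1.5, 1.0), (8.5, 7.5), (5.0, 5.0)]
R = 1.5

def covered(point, ue):
    return math.dist(point, ue) <= R

def greedy_lscp(ues, cands):
    uncovered = set(ues)
    chosen = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered UEs.
        best = max(cands, key=lambda c: sum(covered(c, u) for u in uncovered))
        gain = {u for u in uncovered if covered(best, u)}
        if not gain:
            raise ValueError("some UE is out of range of every candidate")
        chosen.append(best)
        uncovered -= gain
    return chosen

hover_points = greedy_lscp(ue_positions, candidates)
print(len(hover_points))  # all three candidates are needed in this layout
```

Fewer hovering points means fewer fly-hover-fly segments, which is how solving the set cover well translates into a shorter mission completion time.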

21 pages, 1560 KiB  
Article
Energy-Efficient Deployment Simulator of UAV-Mounted Base Stations Under Dynamic Weather Conditions
by Gyeonghyeon Min and Jaewoo So
Sensors 2025, 25(12), 3648; https://doi.org/10.3390/s25123648 - 11 Jun 2025
Abstract
In unmanned aerial vehicle (UAV)-mounted base station (MBS) networks, user equipment (UE) experiences dynamic channel variations because of the mobility of the UAV and changing weather conditions. To overcome the degradation in the quality of service (QoS) of the UE due to channel variations, it is important to appropriately determine the three-dimensional (3D) position and transmission power of the base station (BS) mounted on the UAV. It is also important to account for both geographical and meteorological factors when deploying UAV-MBSs, because they serve ground UE across various regions and atmospheric environments. In this paper, we propose an energy-efficient UAV-MBS deployment scheme for multi-UAV-MBS networks that uses a hybrid improved simulated annealing–particle swarm optimization (ISA-PSO) algorithm to find the 3D position and transmission power of each UAV-MBS. Moreover, we developed a simulator for deploying UAV-MBSs that takes dynamic weather conditions into consideration and integrates terrain data based on geolocation with real-time weather information to produce more practical results. The proposed deployment scheme demonstrated superior performance, achieving faster convergence and higher stability than conventional approaches, making it well suited for practical deployment. Full article
(This article belongs to the Special Issue Energy-Efficient Communication Networks and Systems: 2nd Edition)

20 pages, 669 KiB  
Article
Interference Management in UAV-Assisted Multi-Cell Networks
by Muchen Jiang, Honglin Ren, Yongxing Qi and Ting Wu
Information 2025, 16(6), 481; https://doi.org/10.3390/info16060481 - 10 Jun 2025
Abstract
This article considers a multi-cell wireless network comprising conventional user equipment (UE), sensor devices, and unmanned aerial vehicles (UAVs), or drones. UAVs are used to assist a base station, e.g., to improve coverage or collect data from sensor devices. The problem at hand is to optimize (i) the sub-carrier assigned to each cell or base station, (ii) the position of each UAV, and (iii) the transmit power of base stations and UAVs. We outline a two-stage approach to maximize the fairness-aware sum-rate of UE and UAVs. In the first stage, a genetic algorithm (GA)-based approach assigns a sub-band to each cell and determines the location of each UAV. In the second stage, a linear program determines the transmit power of UE and UAVs. The results demonstrate that our proposed two-stage approach achieves approximately 97.43% of the optimal fairness-aware sum-rate obtained via brute-force search. It also attains on average 98.78% of the performance of a computationally intensive benchmark that requires over 478% longer run-time, and it outperforms a conventional GA-based sub-band allocation heuristic by 221.39%. Full article

17 pages, 2256 KiB  
Article
Scalable Statistical Channel Estimation and Its Applications in User-Centric Cell-Free Massive MIMO Systems
by Ling Xing, Dongle Wang, Xiaohui Zhang, Honghai Wu and Kaikai Deng
Sensors 2025, 25(11), 3263; https://doi.org/10.3390/s25113263 - 22 May 2025
Abstract
Cell-free massive multiple-input multiple-output (mMIMO) technology utilizes collaborative signal processing to significantly improve system performance. In cell-free mMIMO systems, accurate channel state information (CSI) is a key element in improving overall system performance. Existing statistical CSI acquisition methods for large-scale fading (LSF) processing schemes assume that each access point (AP) serves all user equipment (UE) in the system. However, as the number of UEs or APs increases, the computational complexity of statistical CSI estimation grows without bound, which is not scalable in large-scale networks. To address this limitation, this paper proposes a scalable statistical CSI estimation method for user-centric cell-free mMIMO systems, which blindly estimates the partial statistical CSI required by LSF schemes using uplink (UL) data signals. Additionally, the estimated partial statistical CSI can also be used for downlink (DL) LSF precoding (LSFP) or for power control in fully distributed precoding. Simulation results show that under the LSFP scheme, the proposed method achieves spectral efficiency (SE) comparable to the traditional CSI acquisition scheme while ensuring scalability. When applied to power control in fully distributed precoding, it significantly reduces the fronthaul CSI overhead while maintaining nearly the same SE performance as existing solutions. Full article
(This article belongs to the Section Communications)

26 pages, 1513 KiB  
Article
Task Similarity-Aware Cooperative Computation Offloading and Resource Allocation for Reusable Tasks in Dense MEC Systems
by Hanchao Mu, Shie Wu, Pengfei He, Jiahui Chen and Wenqing Wu
Sensors 2025, 25(10), 3172; https://doi.org/10.3390/s25103172 - 17 May 2025
Abstract
As an emerging paradigm for supporting computation-intensive and latency-sensitive services, mobile edge computing (MEC) faces significant challenges in efficient resource utilization and intelligent task coordination among heterogeneous user equipment (UE), especially in dense MEC scenarios with severe interference. Task similarity and cooperation opportunities among UEs are usually ignored in existing studies when dealing with reusable tasks. In this paper, we investigate the problem of cooperative computation offloading and resource allocation for reusable tasks, with a focus on minimizing the energy consumption of UEs while ensuring delay limits. The problem is formulated as an intractable mixed-integer nonlinear programming (MINLP) problem, and we design a similarity-based cooperative offloading and resource allocation (SCORA) algorithm to solve it. Specifically, the proposed SCORA algorithm decomposes the original problem into three subproblems, i.e., task offloading, resource allocation, and power allocation, which are solved using a similarity-based matching offloading algorithm, a cooperation-based resource allocation algorithm, and a concave–convex procedure (CCCP)-based power allocation algorithm, respectively. Simulation results show that, compared to the benchmark schemes, the SCORA scheme can reduce energy consumption by up to 51.52% while maintaining low latency. Moreover, the energy of UEs with low remaining energy levels is largely saved. Full article
(This article belongs to the Section Sensor Networks)

17 pages, 421 KiB  
Article
CNN-Based End-to-End CPU-AP-UE Power Allocation for Spectral Efficiency Enhancement in Cell-Free Massive MIMO Networks
by Yoon-Ju Choi, Ji-Hee Yu, Seung-Hwan Seo, Seong-Gyun Choi, Hye-Yoon Jeong, Ja-Eun Kim, Myung-Sun Baek, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2025, 13(9), 1442; https://doi.org/10.3390/math13091442 - 28 Apr 2025
Abstract
Cell-free massive multiple-input multiple-output (MIMO) networks eliminate cell boundaries and enhance uniform quality of service by enabling cooperative transmission among access points (APs). In conventional cellular networks, user equipment located at the cell edge experiences severe interference and unbalanced resource allocation. However, in cell-free massive MIMO networks, multiple access points cooperatively serve user equipment (UEs), effectively mitigating these issues. Beamforming and cooperative transmission among APs are essential in massive MIMO environments, making efficient power allocation a critical factor in determining overall network performance. In particular, considering power allocation from the central processing unit (CPU) to the APs enables optimal power utilization across the entire network. Traditional power allocation methods such as equal power allocation and max–min power allocation fail to fully exploit the cooperative characteristics of APs, leading to suboptimal network performance. To address this limitation, in this study we propose a convolutional neural network (CNN)-based power allocation model that optimizes both CPU-to-AP power allocation and AP-to-UE power distribution. The proposed model learns the optimal power allocation strategy by utilizing the channel state information, AP-UE distance, interference levels, and signal-to-interference-plus-noise ratio as input features. Simulation results demonstrate that the proposed CNN-based power allocation method significantly improves spectral efficiency compared to conventional power allocation techniques while also enhancing energy efficiency. This confirms that deep learning-based power allocation can effectively enhance network performance in cell-free massive MIMO environments. Full article

21 pages, 821 KiB  
Article
Task Offloading and Data Compression Collaboration Optimization for UAV Swarm-Enabled Mobile Edge Computing
by Zhijuan Hu, Shuangyu Liu, Dongsheng Zhou, Chao Shen and Tingting Wang
Drones 2025, 9(4), 288; https://doi.org/10.3390/drones9040288 - 9 Apr 2025
Abstract
The combination of Unmanned Aerial Vehicles (UAVs) and Mobile Edge Computing (MEC) effectively meets the demands of user equipment (UE) for high-quality computing services, low energy consumption, and low latency. However, in complex environments such as disaster rescue scenarios, a single UAV is still constrained by limited transmission power and computing resources, making it difficult to complete computational tasks efficiently. To address this issue, we propose a UAV swarm-enabled MEC system that integrates data compression, in which a single swarm head UAV (USH) offloads the computing tasks compressed by the UEs and partially distributes them to swarm member UAVs (USMs) for collaborative processing. To minimize the total energy and time cost of the system, we model the problem as a Markov Decision Process (MDP) and construct a deep deterministic policy gradient offloading algorithm with a prioritized experience replay mechanism (PER-DDPG) to jointly optimize the compression ratio, task offloading rate, resource allocation, and swarm positioning. Simulation results show that, compared with deep Q-network (DQN) and deep deterministic policy gradient (DDPG) baseline algorithms, the proposed scheme performs excellently in terms of convergence and robustness, reducing system latency and energy consumption by about 32.7%. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicle Swarm-Enabled Edge Computing)

18 pages, 1834 KiB  
Article
Location-Based Handover with Particle Filter and Reinforcement Learning (LBH-PRL) for Mobility and Service Continuity in Non-Terrestrial Networks (NTN)
by Li-Sheng Chen, Shu-Han Liao and Hsin-Hung Cho
Electronics 2025, 14(8), 1494; https://doi.org/10.3390/electronics14081494 - 8 Apr 2025
Abstract
In high-mobility non-terrestrial networks (NTN), the reference signal received power (RSRP)-based handover (RBH) mechanism is often unsuitable due to its limitations in handling dynamic satellite movements. RSRP, a key metric in cellular networks, measures the received power of reference signals from a base station or satellite and is widely used for handover decision-making. However, in NTN environments, the high mobility of satellites causes frequent RSRP fluctuations, making RBH ineffective in managing handovers, often leading to excessive ping-pong handovers and a high handover failure rate. To address this challenge, we propose an innovative approach called location-based handover with particle filter and reinforcement learning (LBH-PRL). This approach integrates a particle filter to estimate the distance between user equipment (UE) and NTN satellites, combined with reinforcement learning (RL), to dynamically adjust hysteresis, time-to-trigger (TTT), and handover decisions to better adapt to the mobility characteristics of NTN. Unlike the location-based handover (LBH) approach, LBH-PRL introduces adaptive parameter tuning based on environmental dynamics, significantly improving handover decision-making robustness and adaptability, thereby reducing unnecessary handovers. Simulation results demonstrate that the proposed LBH-PRL approach significantly outperforms conventional LBH and RBH mechanisms in key performance metrics, including reducing the average number of handovers, lowering the ping-pong rate, and minimizing the handover failure rate. These improvements highlight the effectiveness of LBH-PRL in enhancing handover efficiency and service continuity in NTN environments, providing a robust solution for intelligent mobility management in high-mobility NTN scenarios. Full article
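The particle-filter component of LBH-PRL, estimating the UE-to-satellite distance from noisy measurements, can be sketched in one dimension. The code below is a generic bootstrap particle filter with invented motion and noise models (constant range drift, Gaussian noise), not the paper's configuration; it shows the predict, weight, and resample cycle.

```python
import math
import random

random.seed(1)

# Minimal 1-D bootstrap particle filter tracking a UE-to-satellite range.
# Drift, noise levels, and the prior are illustrative assumptions.
N, sigma_meas, sigma_proc, drift = 1000, 5.0, 1.0, -10.0
true_range = 1000.0
particles = [random.uniform(900.0, 1100.0) for _ in range(N)]

def likelihood(p, z):
    """Gaussian measurement likelihood (unnormalized)."""
    return math.exp(-0.5 * ((p - z) / sigma_meas) ** 2)

for _ in range(30):
    true_range += drift                                  # satellite approaches
    z = true_range + random.gauss(0.0, sigma_meas)       # noisy range measurement
    # Predict: propagate each particle through the motion model.
    particles = [p + drift + random.gauss(0.0, sigma_proc) for p in particles]
    # Weight: score each particle against the measurement.
    weights = [likelihood(p, z) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(round(estimate, 1), round(true_range, 1))  # estimate tracks the truth
```

Feeding such a distance estimate into the handover logic, instead of raw RSRP, is what lets the scheme sidestep the rapid RSRP fluctuations described above.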
(This article belongs to the Special Issue New Advances in Machine Learning and Its Applications)

42 pages, 2232 KiB  
Article
Federated Reinforcement Learning-Based Dynamic Resource Allocation and Task Scheduling in Edge for IoT Applications
by Saroj Mali, Feng Zeng, Deepak Adhikari, Inam Ullah, Mahmoud Ahmad Al-Khasawneh, Osama Alfarraj and Fahad Alblehai
Sensors 2025, 25(7), 2197; https://doi.org/10.3390/s25072197 - 30 Mar 2025
Cited by 1
Abstract
Using Google cluster traces, this research presents a task offloading algorithm and a hybrid forecasting model that unites Bidirectional Long Short-Term Memory (BiLSTM) and Gated Recurrent Unit (GRU) layers with an attention mechanism. The model predicts resource usage for flexible task scheduling in edge-based Internet of Things (IoT) applications. The suggested algorithm improves task distribution to boost performance and reduce energy consumption. The system's design includes collecting data, fusing and preparing it for use, training models, and performing simulations with EdgeSimPy. Experimental outcomes show that the suggested method outperforms the basic best-fit, first-fit, and worst-fit algorithms. It maintains stable power usage among edge servers while surpassing traditional heuristic techniques. Moreover, we also propose a Deep Deterministic Policy Gradient (D4PG) algorithm based on Federated Learning for adjusting the participation of dynamic user equipment (UE) according to resource availability and data distribution. This algorithm is compared with DQN, DDQN, Dueling DQN, and Dueling DDQN models on the Non-IID EMNIST and IID EMNIST datasets and on the Crop Prediction dataset. Results indicate that the proposed D4PG method achieves superior performance, with an accuracy of 92.86% on the Crop Prediction dataset, outperforming alternative models. On the Non-IID EMNIST dataset, the proposed approach achieves an F1-score of 0.9192, demonstrating better efficiency and fairness in model updates while preserving privacy. Similarly, on the IID EMNIST dataset, the proposed D4PG model attains an F1-score of 0.82 and an accuracy of 82%, surpassing other Reinforcement Learning-based approaches. Additionally, for edge server power consumption, the hybrid offloading algorithm reduces fluctuations compared to existing methods, ensuring more stable energy usage across edge nodes. This corroborates that the proposed method preserves privacy, handles fairness in model updates, and improves efficiency better than state-of-the-art alternatives. Full article
(This article belongs to the Special Issue Securing E-Health Data Across IoMT and Wearable Sensor Networks)

18 pages, 708 KiB  
Article
Improved Connected-Mode Discontinuous Reception (C-DRX) Power Saving and Delay Reduction Using Ensemble-Based Traffic Prediction
by Ji-Hee Yu, Yoon-Ju Choi, Seung-Hwan Seo, Seong-Gyun Choi, Hye-Yoon Jeong, Ja-Eun Kim, Myung-Sun Baek, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2025, 13(6), 974; https://doi.org/10.3390/math13060974 - 15 Mar 2025
Cited by 1
Abstract
This paper proposes a traffic prediction-based connected-mode discontinuous reception (C-DRX) approach to enhance energy efficiency and reduce data transmission delay in mobile communication systems. Traditional C-DRX determines user equipment (UE) activation based on a fixed timer cycle, which may not align with actual traffic occurrences, leading to unnecessary activation and increased energy consumption or delays in data reception. To address this issue, this paper presents an ensemble model combining random forest (RF) and a temporal convolutional network (TCN) to predict traffic occurrences and adjust C-DRX activation timing. RF extracts traffic features, while TCN captures temporal dependencies in traffic data. The predictions from both models are combined to determine C-DRX activation timing. Additionally, the extended activation approach is introduced to refine activation timing by extending the activation window around predicted traffic occurrences. The proposed method is evaluated using real-world Netflix traffic data, achieving a 20.9% decrease in unnecessary active time and a 70.7% reduction in mean delay compared to the conventional periodic C-DRX approach. Overall, the proposed method significantly enhances energy efficiency and quality of service (QoS) in LTE and 5G networks, making it a viable solution for future mobile communication systems. Full article
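The benefit of traffic-aware C-DRX activation can be made concrete with a toy comparison (the traffic model, timer period, and perfect-prediction assumption below are all illustrative, not the paper's setup): a fixed wake-up cycle wastes active slots when no packet arrives and misses packets between wake-ups, while an idealized predictor wakes exactly when traffic occurs.

```python
import random

random.seed(7)

# Toy comparison of fixed-cycle C-DRX vs. an oracle prediction-based wake-up.
# Bernoulli traffic and the 4-slot timer are illustrative assumptions.
slots = 1000
traffic = [random.random() < 0.1 for _ in range(slots)]  # packet in ~10% of slots

fixed_active = [t % 4 == 0 for t in range(slots)]  # wake every 4th slot
pred_active = traffic                              # perfect predictions (upper bound)

def stats(active):
    """Return (wasted active slots, missed traffic slots)."""
    wasted = sum(a and not t for a, t in zip(active, traffic))
    missed = sum(t and not a for a, t in zip(active, traffic))
    return wasted, missed

print(stats(fixed_active))  # many wasted wake-ups, some delayed packets
print(stats(pred_active))   # the oracle wastes and misses nothing
```

The paper's RF+TCN ensemble approximates the oracle schedule from real traffic history, trading a small prediction error for the large savings in unnecessary active time reported above.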
(This article belongs to the Special Issue Advances in Mobile Network and Intelligent Communication)
