
Search Results (362)

Search Parameters:
Keywords = handovers

16 pages, 1725 KB  
Article
A Reinforcement Learning-Based Link State Optimization for Handover and Link Duration Performance Enhancement in Low Earth Orbit Satellite Networks
by Sihwa Jin, Doyeon Park, Sieun Kim, Jinho Lee and Inwhee Joe
Electronics 2026, 15(2), 398; https://doi.org/10.3390/electronics15020398 - 16 Jan 2026
Abstract
This study proposes a reinforcement learning-based link selection method for Low Earth Orbit satellite networks, aiming to reduce handover frequency while extending link duration under highly dynamic orbital environments. The proposed approach relies solely on basic satellite positional information, namely latitude, longitude, and altitude, to construct compact state representations without requiring complex sensing or prediction mechanisms. Using relative satellite and terminal geometry, each state is represented as a vector consisting of azimuth, elevation, range, and direction difference. To validate the feasibility of policy learning under realistic conditions, a total of 871,105 orbit based data samples were generated through simulations of 300 LEO satellite orbits. The reinforcement learning environment was implemented using the OpenAI Gym framework, in which an agent selects an optimal communication target from a prefiltered set of candidate satellites at each time step. Three reinforcement learning algorithms, namely SARSA, Q-Learning, and Deep Q-Network, were evaluated under identical experimental conditions. Performance was assessed in terms of smoothed total reward per episode, average handover count, and average link duration. The results show that the Deep Q-Network-based approach achieves approximately 77.4% fewer handovers than SARSA and 49.9% fewer than Q-Learning, while providing the longest average link duration. These findings demonstrate that effective handover control can be achieved using lightweight state information and indicate the potential of deep reinforcement learning for future LEO satellite communication systems. Full article
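The abstract above describes a compact state of azimuth, elevation, range, and direction difference derived only from latitude, longitude, and altitude. A minimal sketch of how such a four-element state vector might be assembled, assuming a spherical Earth and an ENU (east-north-up) frame at the terminal; all function and variable names here are illustrative, not taken from the paper:

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km (spherical-Earth simplification)

def geodetic_to_ecef(lat_deg, lon_deg, alt_km):
    """Convert latitude/longitude/altitude to ECEF coordinates (km)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R_EARTH + alt_km
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def state_vector(term_lat, term_lon, sat_lat, sat_lon, sat_alt_km, sat_heading_deg):
    """Build an [azimuth, elevation, range, direction difference] state
    (angles in degrees, range in km) from terminal and satellite geometry."""
    tx, ty, tz = geodetic_to_ecef(term_lat, term_lon, 0.0)
    sx, sy, sz = geodetic_to_ecef(sat_lat, sat_lon, sat_alt_km)
    dx, dy, dz = sx - tx, sy - ty, sz - tz
    # Rotate the line-of-sight vector into the terminal's local ENU frame.
    lat, lon = math.radians(term_lat), math.radians(term_lon)
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    rng = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.asin(up / rng))
    # Direction difference: satellite ground-track heading vs. bearing to terminal.
    direction_diff = abs((sat_heading_deg - azimuth + 180.0) % 360.0 - 180.0)
    return [azimuth, elevation, rng, direction_diff]
```

For a satellite directly overhead at 550 km, this yields an elevation of 90 degrees and a range of 550 km, as expected; a Gym-style agent would then choose among the candidate satellites' state vectors at each step.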

24 pages, 1401 KB  
Article
A Comprehensive Analysis of Safety Failures in Autonomous Driving Using Hybrid Swiss Cheese and SHELL Approach
by Benedictus Rahardjo, Samuel Trinata Winnyarto, Firda Nur Rizkiani and Taufiq Maulana Firdaus
Future Transp. 2026, 6(1), 21; https://doi.org/10.3390/futuretransp6010021 - 15 Jan 2026
Abstract
The advancement of automated driving technologies offers potential safety and efficiency gains, yet safety remains the primary barrier to higher-level deployment. Failures in automated driving systems rarely result from a single technical malfunction. Instead, they emerge from coupled organizational, technical, human, and environmental factors, particularly in partial and conditional automation where human supervision and intervention remain critical. This study systematically identifies safety failures in automated driving systems and analyzes how they propagate across system layers and human–machine interactions. A qualitative case-based analytical approach is adopted by integrating the Swiss Cheese model and the SHELL model. The Swiss Cheese model is used to represent multilayer defensive structures, including governance and policy, perception, planning and decision-making, control and actuation, and human–machine interfaces. The SHELL model structures interaction failures between liveware and software, hardware, environment, and other liveware. The results reveal recurrent cross-layer failure pathways in which interface-level mismatches, such as low-salience alerts, sensor miscalibration, adverse environmental conditions, and inadequate handover communication, align with latent system weaknesses to produce unsafe outcomes. These findings demonstrate that autonomous driving safety failures are predominantly socio-technical in nature rather than purely technological. The proposed hybrid framework provides actionable insights for system designers, operators, and regulators by identifying critical intervention points for improving interface design, operational procedures, and policy-level safeguards in autonomous driving systems. Full article

17 pages, 3223 KB  
Article
Reinforcement Learning-Based Handover Algorithm for 5G/6G AI-RAN
by Ildar A. Safiullin, Ivan P. Ashaev, Alexey A. Korobkov, Artur K. Gaysin and Adel F. Nadeev
Inventions 2026, 11(1), 8; https://doi.org/10.3390/inventions11010008 - 10 Jan 2026
Abstract
The increasing number of Base Stations (BSs) and connected devices, coupled with their mobility, poses significant challenges and makes mobility management even more pressing. Therefore, advanced handover (HO) management technologies are required to address this issue. This paper focuses on the ping-pong HO problem. To address this issue, we propose an algorithm using Reinforcement Learning (RL) based on the Double Deep Q-Network (DDQN). The novelty of our approach is to assign specialized RL agents to users based on their mobility patterns. The use of specialized RL agents simplifies the learning process. The effectiveness of the proposed algorithm is demonstrated in tests on the ns-3 platform due to its ability to replicate real-world scenarios. To compare the results of the proposed approach, the baseline handover algorithm based on Events A2 and A4 is used. The results show that the proposed approach reduces the number of HO by more than four times on average, resulting in a more stable data rate and increasing it up to two times in the best case. Full article
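The Double DQN update that this abstract builds on decouples action selection (online network) from action evaluation (frozen target network) to curb Q-value overestimation. A minimal NumPy sketch of the batched target computation, purely illustrative and not the authors' ns-3 setup; in their scheme each mobility class would own its own such agent:

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    next_q_online / next_q_target: (batch, n_actions) Q-value arrays.
    The online network picks the greedy next action; the target network
    evaluates it -- the decoupling that reduces overestimation bias.
    """
    greedy_actions = np.argmax(next_q_online, axis=1)                    # selection
    evaluated = next_q_target[np.arange(len(rewards)), greedy_actions]   # evaluation
    return rewards + gamma * evaluated * (1.0 - dones)                   # bootstrap, masked at episode end
```

A per-mobility-pattern deployment could then be as simple as a dictionary mapping a user's mobility class to its specialized agent before calling this update.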

44 pages, 2513 KB  
Review
On the Security of Cell-Free Massive MIMO Networks
by Hanaa Mohammed, Roayat I. Abdelfatah, Nancy Alshaer, Mohamed E. Nasr and Asmaa M. Saafan
Sensors 2026, 26(2), 353; https://doi.org/10.3390/s26020353 - 6 Jan 2026
Abstract
The rapid growth of wireless devices, the expansion of the Internet of Things, and the aggregate demand for Ultra-Reliable Low-Latency communications (URLLC) are driving the improvement of next-generation wireless systems. One promising emerging technology in this area is cell-free massive Multiple Input Multiple Output (maMIMO) networks. The distributed nature of Access Points presents unique security challenges that must be addressed to unlock their full potential. This paper studies the key security concerns in Cell Free Massive MIMO (CFMM) networks, including eavesdropping, Denial-of-Service attacks, jamming, pilot contamination, and methods for enhancing Physical Layer Security (PLS). We also provide an overview of security solutions specifically designed for CFMM networks and introduce a case study of a Reconfigurable Intelligent Surface (RIS)-aided secure scheme that jointly optimizes the RIS phase shifts with the artificial noise (AN) covariance under power constraints. The non-convex optimization problem is solved via the block coordinate descent (BCD) alternating optimization scheme. The combined RIS, AN, and beamforming configuration achieves a balanced trade-off between security and energy performance, resulting in moderate improvements over the individual schemes. Full article
(This article belongs to the Section Sensor Networks)

18 pages, 3518 KB  
Article
A Scalable Solution for Node Mobility Problems in NDN-Based Massive LEO Constellations
by Miguel Rodríguez Pérez, Sergio Herrería Alonso, José Carlos López Ardao and Andrés Suárez González
Sensors 2026, 26(1), 309; https://doi.org/10.3390/s26010309 - 3 Jan 2026
Abstract
In recent years, there has been increasing investment in the deployment of massive commercial Low Earth Orbit (LEO) constellations to provide global Internet connectivity. These constellations, now equipped with inter-satellite links, can serve as low-latency Internet backbones, requiring LEO satellites to act not only as access nodes for ground stations, but also as in-orbit core routers. Due to their high velocity and the resulting frequent handovers of ground gateways, LEO networks highly stress mobility procedures at both the sender and receiver endpoints. On the other hand, a growing trend in networking is the use of technologies based on the Information Centric Networking (ICN) paradigm for servicing IoT networks and sensor networks in general, as its addressing, storage, and security mechanisms are usually a good match for IoT needs. Furthermore, ICN networks possess additional characteristics that are beneficial for the massive LEO scenario. For instance, the mobility of the receiver is helped by the inherent data-forwarding procedures in their architectures. However, the mobility of the senders remains an open problem. This paper proposes a comprehensive solution to the mobility problem for massive LEO constellations using the Named-Data Networking (NDN) architecture, as it is probably the most mature ICN proposal. Our solution includes a scalable method to relate content to ground gateways and a way to address traffic to the gateway that does not require cooperation from the network routing algorithm. Moreover, our solution works without requiring modifications to the actual NDN protocol itself, so it is easy to test and deploy. Our results indicate that, for long enough handover lengths, traffic losses are negligible even for ground stations with just one satellite in sight. Full article
(This article belongs to the Special Issue Future Wireless Communication Networks: 3rd Edition)

21 pages, 1330 KB  
Article
A Clustering and Reinforcement Learning-Based Handover Strategy for LEO Satellite Networks in Power IoT Scenarios
by Jin Shao, Weidong Gao, Kuixing Liu, Rantong Qiao, Haizhi Yu, Kaisa Zhang, Xu Zhao and Junbao Duan
Electronics 2026, 15(1), 174; https://doi.org/10.3390/electronics15010174 - 30 Dec 2025
Abstract
Communication infrastructure in remote areas struggles to deliver stable, high-quality services for power systems. Low Earth Orbit (LEO) satellite networks offer an effective solution through their low latency and extensive coverage. Nevertheless, the high orbital velocity of LEO satellites combined with massive user access frequently leads to signaling congestion and degradation of service quality. To address these challenges, this paper proposes a LEO satellite handover strategy based on Quality of Service (QoS)-constrained K-Means clustering and Deep Q-Network (DQN) learning. The proposed framework first partitions users into groups via the K-Means algorithm and then imposes an intra-group QoS fairness constraint to refine clustering and designate a cluster head for each group. These cluster heads act as proxies that execute unified DQN-driven handover decisions on behalf of all group members, thereby enabling coordinated multi-user handover. Simulation results demonstrate that, compared with conventional handover schemes, the proposed strategy achieves an optimal balance between performance and signaling overhead, significantly enhances system scalability while ensuring long-term QoS gains, and provides an efficient solution for mobility management in future large-scale LEO satellite networks. Full article
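The grouping step this abstract describes, K-Means partitioning followed by designating a cluster head that makes handover decisions for the whole group, can be sketched as below. This is an illustrative simplification that omits the paper's intra-group QoS fairness refinement; the naive first-k initialization and all names are assumptions, not the authors' implementation:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain K-Means over user positions (N x 2). Naive init: first k points."""
    points = np.asarray(points, dtype=float)
    centroids = points[:k].copy()
    for _ in range(iters):
        # Assign each user to the nearest centroid, then recompute means.
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

def cluster_heads(points, labels, centroids):
    """Pick each group's head as the user nearest its centroid; in the paper's
    scheme the head would then run the DQN handover policy for its group."""
    points = np.asarray(points, dtype=float)
    heads = {}
    for j in range(len(centroids)):
        members = np.where(labels == j)[0]
        if len(members):
            d = np.linalg.norm(points[members] - centroids[j], axis=1)
            heads[j] = int(members[np.argmin(d)])
    return heads
```

Because only the heads issue handover signaling on behalf of their members, the per-satellite signaling load scales with the number of clusters rather than the number of users.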

30 pages, 1992 KB  
Article
Biomimetic Approach to Designing Trust-Based Robot-to-Human Object Handover in a Collaborative Assembly Task
by S. M. Mizanoor Rahman
Biomimetics 2026, 11(1), 14; https://doi.org/10.3390/biomimetics11010014 - 27 Dec 2025
Abstract
We presented a biomimetic approach to designing robot-to-human handover of objects in a collaborative assembly task. We developed a human–robot hybrid cell where a human and a robot collaborated with each other to perform the assembly operations of a product in a flexible manufacturing setup. Firstly, we investigated human psychology and biomechanics (kinetics and kinematics) for human-to-robot handover of an object in the human–robot collaborative set-up in three separate experimental conditions: (i) human possessed high trust in the robot, (ii) human possessed moderate trust in the robot, and (iii) human possessed low trust in the robot. The results showed that human psychology was significantly impacted by human trust in the robot, which also impacted the biomechanics of human-to-robot handover, i.e., human hand movement slowed down, the angle between human hand and robot arm increased (formed a braced handover configuration), and human grip forces increased if human trust in the robot decreased, and vice versa. Secondly, being inspired by those empirical results related to human psychology and biomechanics, we proposed a novel robot-to-human object handover mechanism (strategy). According to the novel handover mechanism, the robot varied its handover configurations and motions through kinematic redundancy with the aim of reducing potential impulse forces on the human body through the object during the handover when robot trust in the human was low. We implemented the proposed robot-to-human handover mechanism in the human–robot collaborative assembly task in the hybrid cell. The experimental evaluation results showed significant improvements in human–robot interaction (HRI) in terms of transparency, naturalness, engagement, cooperation, cognitive workload, and human trust in the robot, and in overall performance in terms of handover safety, handover success rate, and assembly efficiency. 
The results can help design and develop human–robot handover mechanisms for human–robot collaborative tasks in various applications such as industrial manufacturing and manipulation, medical surgery, warehouse, transport, logistics, construction, machine shops, goods delivery, etc. Full article
(This article belongs to the Special Issue Human-Inspired Grasp Control in Robotics 2025)

28 pages, 5719 KB  
Article
A Predictive-Reactive Learning Framework for Cellular-Connected UAV Handover in Urban Heterogeneous Networks
by Muhammad Abrar Afzal and Luis Alonso
Electronics 2026, 15(1), 109; https://doi.org/10.3390/electronics15010109 - 25 Dec 2025
Abstract
Unmanned aerial vehicles (UAVs) operating in dense urban environments often face link disruptions due to high mobility and interference. Reliable connectivity in such conditions requires advanced handover strategies. This paper presents a predictive-reactive Q-learning framework (PRQF) that optimizes handover decisions while sustaining throughput in dynamic heterogeneous urban networks. The framework combines an Extreme Gradient Boosting (XGBoost) classifier with a Q-learning agent through a probabilistic gating mechanism. UAVs follow a sinusoidal mobility model to ensure consistent and representative movement across experiments. Simulations using 3GPP-compliant Urban Macro (UMa) channel models in a 10 km × 10 km area show that PRQF achieves an average reduction of 84% in handovers at 100 km/h and 83% at 120 km/h, compared to the standard 3GPP A3 event-based handover method. PRQF also maintains a consistently high average throughput across all methods and speed scenarios. The results show better link stability and communication quality, demonstrating that the proposed framework is adaptable and scalable for reliable UAV communications in urban environments. Full article
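The probabilistic gating between the XGBoost predictor and the Q-learning agent described above could take a form like the following, where the classifier's confidence sets the probability of trusting its prediction over the reactive greedy Q action. This is one plausible reading of "probabilistic gating", not the authors' exact rule, and every name here is hypothetical:

```python
import random

def gated_handover_decision(pred_action, pred_confidence, q_values, rng):
    """Gate between a predictive classifier and a reactive Q-learning agent.

    pred_action     -- cell index suggested by the (XGBoost-style) classifier
    pred_confidence -- classifier probability for that suggestion, in [0, 1]
    q_values        -- the Q-learning agent's values per candidate cell
    rng             -- a random.Random instance, injectable for reproducibility
    """
    # With probability equal to the classifier's confidence, act predictively;
    # otherwise fall back to the greedy reactive action.
    if rng.random() < pred_confidence:
        return pred_action
    return max(range(len(q_values)), key=q_values.__getitem__)
```

At full confidence the UAV always takes the predicted handover; at zero confidence the framework degenerates to a plain greedy Q-learning policy.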

23 pages, 1218 KB  
Article
Energy-Efficient End-to-End Optimization for UAV-Assisted IoT Data Collection and LEO Satellite Offloading in SAGIN
by Tie Liu, Chenhua Sun, Yasheng Zhang and Wenyu Sun
Electronics 2026, 15(1), 24; https://doi.org/10.3390/electronics15010024 - 21 Dec 2025
Abstract
The rapid advancement of low-Earth-orbit (LEO) satellite constellations and unmanned aerial vehicles (UAVs) has positioned space–air–ground integrated networks as a key enabler of large-scale IoT services. However, ensuring reliable end-to-end operation remains challenging due to heterogeneous IoT–UAV link conditions and rapidly varying satellite visibility. This work proposes a two-stage optimization framework that jointly minimizes UAV energy consumption during IoT data acquisition and ensures stable UAV–LEO offloading through a demand-aware satellite association strategy. The first stage combines gradient-based refinement with combinatorial path optimization, while the second stage triggers handover only when the remaining offloading demand cannot be met. Simulation results show that the framework reduces UAV energy consumption by over 20% and shortens flight distance by more than 30% in dense deployments. For satellite offloading, the demand-aware strategy requires only 2–3 handovers—versus 7–9 under greedy selection—and lowers packet loss from 0.47–0.60% to 0.13–0.20%. By improving both stages simultaneously, the framework achieves consistent end-to-end performance gains across varying IoT densities and constellation sizes, demonstrating its practicality for future SAGIN deployments. Full article
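The demand-aware trigger in the second stage, handing over only when the remaining offloading demand cannot be met, might reduce to a check like this. An interpretive sketch of the abstract's rule under assumed units; the function and parameter names are not from the paper:

```python
def needs_handover(remaining_demand_mb, link_rate_mbps, visibility_left_s):
    """Demand-aware trigger: hand over only when the data still to be offloaded
    cannot fit in the current satellite's remaining visibility window.

    remaining_demand_mb -- data left to offload, in megabytes
    link_rate_mbps      -- current UAV-to-satellite rate, in megabits/s
    visibility_left_s   -- seconds before the satellite sets below the horizon
    """
    capacity_mb = link_rate_mbps * visibility_left_s / 8.0  # Mb/s * s -> MB
    return remaining_demand_mb > capacity_mb
```

Compared with greedy best-satellite selection at every step, such a trigger fires only when strictly necessary, which is consistent with the 2-3 versus 7-9 handover counts reported in the abstract.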

27 pages, 519 KB  
Article
Dual-Algorithm Framework for Privacy-Preserving Task Scheduling Under Historical Inference Attacks
by Exiang Chen, Ayong Ye and Huina Deng
Computers 2025, 14(12), 558; https://doi.org/10.3390/computers14120558 - 16 Dec 2025
Abstract
Historical inference attacks pose a critical privacy threat in mobile edge computing (MEC), where adversaries exploit long-term task and location patterns to infer users’ sensitive information. To address this challenge, we propose a privacy-preserving task scheduling framework that adaptively balances privacy protection and system performance under dynamic vehicular environments. First, we introduce a dynamic privacy-aware adaptation mechanism that adjusts privacy levels in real time according to vehicle mobility and network dynamics. Second, we design a dual-algorithm framework composed of two complementary solutions: a Markov Approximation-Based Online Algorithm (MAOA) that achieves near-optimal scheduling with provable convergence, and a Privacy-Aware Deep Q-Network (PAT-DQN) algorithm that leverages deep reinforcement learning to enhance adaptability and long-term decision-making. Extensive simulations demonstrate that our proposed methods effectively mitigate privacy leakage while maintaining high task completion rates and low energy consumption. In particular, PAT-DQN achieves up to 14.2% lower privacy loss and 19% fewer handovers than MAOA in high-mobility scenarios, showing superior adaptability and convergence performance. Full article

26 pages, 4817 KB  
Article
ProcessGFM: A Domain-Specific Graph Pretraining Prototype for Predictive Process Monitoring
by Yikai Hu, Jian Lu, Xuhai Zhao, Yimeng Li, Zhen Tian and Zhiping Li
Mathematics 2025, 13(24), 3991; https://doi.org/10.3390/math13243991 - 15 Dec 2025
Abstract
Predictive process monitoring estimates the future behaviour of running process instances based on historical event logs, with typical tasks including next-activity prediction, remaining-time estimation, and risk assessment. Existing recurrent and Transformer-based models achieve strong accuracy on individual logs but transfer poorly across processes and underuse the rich graph structure of event data. This paper introduces ProcessGFM, a domain-specific graph pretraining prototype for predictive process monitoring on event graphs. ProcessGFM employs a hierarchical graph neural architecture that jointly encodes event-level, case-level, and resource-level structure and is pretrained in a self-supervised manner on multiple benchmark logs using masked activity reconstruction, temporal order consistency, and pseudo-labelled outcome prediction. A multi-task prediction head and an adversarial domain alignment module adapt the pretrained backbone to downstream tasks and stabilise cross-log generalisation. On the BPI 2012, 2017, and 2019 logs, ProcessGFM improves next-activity accuracy by 2.7 to 4.5 percentage points over the best graph baseline, reaching up to 89.6% accuracy and 87.1% macro-F1. For remaining-time prediction, it attains mean absolute errors between 0.84 and 2.11 days, reducing error by 11.7% to 18.2% relative to the strongest graph baseline. For case-level risk prediction, it achieves area-under-the-curve scores between 0.907 and 0.934 and raises precision at 10% recall by 6.7 to 8.1 percentage points. Cross-log transfer experiments show that ProcessGFM retains between about 90% and 96% of its in-domain next-activity accuracy when applied zero-shot to a different log. Attention-based analysis highlights critical subgraphs that can be projected back to Petri net fragments, providing interpretable links between structural patterns, resource handovers, and late cases. Full article
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)

39 pages, 1254 KB  
Review
Patient Participation During Nursing Bedside Handover: A State-of-the-Art Review
by Paulo Cruchinho, Gisela Teixeira, Pedro Lucas, Filomena Gaspar and María Dolores López-Franco
Nurs. Rep. 2025, 15(12), 438; https://doi.org/10.3390/nursrep15120438 - 10 Dec 2025
Abstract
Background: Patient participation during Nursing Bedside Handover (NBH) is a dyadic interaction between the patient and nurses that allows the patient to participate, either passively or actively, in communication activities and nursing care. Objective: This state-of-the-art (SotA) review aimed to synthesize current knowledge on patient participation during NBH and identify future directions for bedside handover research. Methods: The literature search was conducted through PubMed, CINAHL Complete, and Scopus, and was supplemented by citation searching. Search was limited to peer-reviewed scientific articles using any empirical study design that addressed patient participation during NBH published in English by August 2025. The quality of the included studies was assessed using the Mixed Methods Appraisal Tool. Results: A total of 50 primary research articles were included and examined using the method of constant comparisons. The synthesized data were categorized into three main themes: (a) Domain of distinctive nature and attributes of patient participation during NBH; (b) domain of nurses’ practices and influencing factors of patient participation during NBH; and (c) domain of strategies and impacts of increasing patient participation during NBH. Within each domain, research trends were identified concerning patient participation in NBH. Future research directions are presented within each domain. Conclusions: The findings of this review may provide new insights into developing complex interventions aimed at increasing patient participation in NBH by nurses, namely with the use of co-design strategies, as well as the adoption of transfer protocols that incorporate informational and interactional components and assessment tools to measure patient participation in NBH. Full article

29 pages, 4247 KB  
Article
Zone-AGF: An O-RAN-Based Local Breakout and Handover Mechanism for Non-5G Capable Devices in Private 5G Networks
by Antoine Hitayezu, Jui-Tang Wang and Saffana Zyan Dini
Electronics 2025, 14(24), 4794; https://doi.org/10.3390/electronics14244794 - 5 Dec 2025
Abstract
The growing demand for ultra-reliable and low-latency communication (URLLC) in private 5G environments, such as smart campuses and industrial networks, has highlighted the limitations of conventional Wireline Access Gateway Function (W-AGF) architectures that depend heavily on centralized 5G core (5GC) processing. This paper introduces a novel Centralized Unit (CU)-based Zone-Access Gateway Function (Z-AGF) architecture designed to enhance handover performance and enable Local Breakout (LBO) within Non-Public Networks (NPNs) for non-5G capable (N5GC) devices. The proposed design integrates W-AGF functionalities with the Open Radio Access Network (O-RAN) framework, leveraging the F1 Application Protocol (F1AP) as the primary interface between the Z-AGF and the CU. By performing LBO at the Z-AGF, latency-sensitive traffic is processed closer to the edge, reducing the backhaul load and improving end-to-end latency, throughput, and jitter performance. The experimental results demonstrate that the Z-AGF achieves up to a 45.6% latency reduction, a 69% packet-loss improvement, and an 85.6% reduction in round-trip time (RTT) for local communications under LBO, along with effective local offloading and quantified throughput gains compared to conventional W-AGF implementations. This study provides a scalable and interoperable approach for integrating wireline and wireless domains, supporting low-latency, highly reliable services within the O-RAN ecosystem and accelerating the adoption of localized next-generation 5G services. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)

24 pages, 1694 KB  
Systematic Review
Advanced Clustering for Mobile Network Optimization: A Systematic Literature Review
by Claude Mukatshung Nawej, Pius Adewale Owolawi and Tom Mmbasu Walingo
Sensors 2025, 25(23), 7370; https://doi.org/10.3390/s25237370 - 4 Dec 2025
Abstract
5G technology represents a transformative shift in mobile communications, delivering improved ultra-low latency, data throughput, and the capacity to support huge device connectivity, surpassing the capabilities of LTE systems. As global telecommunication operators shift toward widespread 5G implementation, ensuring optimal network performance and intelligent resource management has become increasingly obvious. To address these challenges, this study explored the role of advanced clustering methods in optimizing cellular networks under heterogeneous and dynamic conditions. A systematic literature review (SLR) was conducted by analyzing 40 peer-reviewed and non-peer-reviewed studies selected from an initial collection of 500 papers retrieved from the Semantic Scholar Open Research Corpus. This review examines a diversity of clustering approaches, including spectral clustering with Bayesian non-parametric models and K-means, density-based clustering such as DBSCAN, and deep representation-based methods like Differential Evolution Memetic Clustering (DEMC) and Domain Adaptive Neighborhood Clustering via Entropy Optimization (DANCE). Key performance outcomes reported across studies include anomaly detection accuracy of up to 98.8%, delivery rate improvements of up to 89.4%, and handover prediction accuracy improvements of approximately 43%, particularly when clustering techniques are combined with machine learning models. In addition to summarizing their effectiveness, this review highlights methodological trends in clustering parameters, mechanisms, experimental setups, and quality metrics. The findings suggest that advanced clustering models play a crucial role in intelligent spectrum sensing, adaptive mobility management, and efficient resource allocation, thereby contributing meaningfully to the development of intelligent 5G/6G mobile network infrastructures. Full article
(This article belongs to the Section Sensor Networks)

21 pages, 15262 KB  
Article
An Air-to-Ground Visual Target Persistent Tracking Framework for Swarm Drones
by Yong Xu, Shuai Guo, Hongtao Yan, An Wang, Yue Ma, Tian Yao and Hongchuan Song
Automation 2025, 6(4), 81; https://doi.org/10.3390/automation6040081 - 2 Dec 2025
Abstract
Air-to-ground visual target persistent tracking technology for swarm drones, as a crucial interdisciplinary research area integrating computer vision, autonomous systems, and swarm collaboration, has gained increasing prominence in anti-terrorism operations, disaster relief, and other emergency response applications. While recent advancements have predominantly concentrated on improving long-term visual tracking through image algorithmic optimizations, insufficient exploration has been conducted on developing system-level persistent tracking architectures, leading to a high target loss rate and limited tracking endurance in complex scenarios. This paper designs an asynchronous multi-task parallel architecture for drone-based long-term tracking in air-to-ground scenarios, and improves the persistent tracking capability from three levels. At the image algorithm level, a long-term tracking system is constructed by integrating existing object detection YOLOv10, multi-object tracking DeepSort, and single-object tracking ECO algorithms. By leveraging their complementary strengths, the system enhances the performance of the detection and multi-object tracking while mitigating model drift in single-object tracking. At the drone system level, ground target absolute localization and geolocation-based drone spiral tracking strategies are conducted to improve target reacquisition rates after tracking loss. At the swarm collaboration level, an autonomous task allocation algorithm and relay tracking handover protocol are proposed, further enhancing the long-term tracking capability of swarm drones while boosting their autonomy. Finally, a practical swarm drone system for persistent air-to-ground visual tracking is developed and validated through extensive flight experiments under diverse scenarios. Results demonstrate the feasibility and robustness of the proposed persistent tracking framework and its adaptability to wild real-world applications. Full article
