Article

An RNN-Enhanced Diverse Curriculum-Driven Learning Algorithm Based on Deep Reinforcement Learning for POMDPs with Limited Experience

by Ke Li 1, Kun Zhang 1,*, Ziqi Wei 2, Haiyin Piao 3, Binlin Yuan 4, Boxuan Wang 1 and Jiangbo Cheng 1

1 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
2 Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
3 Shenyang Aircraft Design and Research Institute, Shenyang 110035, China
4 Sichuan Tengden Sci-Tech Innovation Co., Ltd., Chengdu 610037, China
* Author to whom correspondence should be addressed.
Drones 2026, 10(2), 142; https://doi.org/10.3390/drones10020142
Submission received: 15 January 2026 / Revised: 6 February 2026 / Accepted: 9 February 2026 / Published: 17 February 2026
(This article belongs to the Special Issue Advances in AI Large Models for Unmanned Aerial Vehicles)

Highlights

What are the main findings?
  • Considering the partially observable nature of environments and the resulting limited information in LRGDEs, we construct Bi-LSTM-modified Policy Networks (BLPNs) to realize UAV-maneuvering decision-making, which is modeled using POMDPs.
  • To maximize the latent utility of insufficient transitions with sparse reward, we developed the Adaptive Multi-Feature Evaluation Experience Replay (AMFER) method, integrating expert experience and domain knowledge, to reshape the sampling process from historical data.
What are the implications of the main findings?
  • REDCRL, a novel deep reinforcement learning algorithm integrating BLPN and AMFER, is proposed to solve the UAV-maneuvering decision-making problem in LRGDEs and to overcome their partial observability.
  • REDCRL significantly accelerates policy convergence and enhances the performance of the converged policy when training in complex environments with limited experience and sparse rewards.

Abstract

Autonomous flight is a critical capability for unmanned aerial vehicles (UAVs), enabling applications in wildlife and plant protection, infrastructure inspection, search and rescue, and other complex missions. Although some learning-based methods have achieved considerable progress, traditional algorithms still struggle with real-world challenges, due to the partially observable nature of environments and limited experience regarding the properties of dynamic unknown environments where threats and targets are movable and unpredictable. To address these difficulties, it is necessary to achieve autonomous guidance for UAVs performing long-range missions in dynamic environments (LRGDEs), and to develop a novel end-to-end algorithm that can overcome partial observability under limited state transitions. In this paper, we propose an RNN-enhanced Diverse Curriculum-driven Learning Algorithm (REDCRL) based on deep reinforcement learning. We modify the structure of traditional actor–critic networks and introduce Bi-LSTM into policy networks (referred to as Bi-LSTM-modified Policy Networks (BLPNs)) to alleviate observation incompleteness. Furthermore, to fully exploit the potential value of data and mitigate the problem of insufficient samples, we develop an Adaptive Multi-Feature Evaluation Experience Replay (AMFER) method to reshape the process of experience replay buffer construction and sampling. In addition, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is adopted to optimize UAV-maneuver decision policies. Compared with traditional algorithms, the proposed algorithm can accelerate policy convergence and improve the performance of the trained policy.

1. Introduction

With the rapid development of UAV technology, UAVs have been widely adopted to assist humans in performing mundane, repetitive, and hazardous tasks, owing to their low cost, zero crew risk, high flexibility, and ease of upgrading [1]. For example, UAVs have been widely applied in aerial surveillance, aerial mapping, agricultural monitoring, environmental monitoring, logistics, delivery, search and rescue, communication relay, and other missions across diverse engineering fields [2,3,4]. Among these applications, a critical engineering challenge for UAVs, termed long-range UAV Guidance in Dynamic Environments (LRGDEs), demands urgent resolution as it plays a pivotal role in enabling efficient task execution. During LRGDE missions, a UAV is required to depart from a starting point, navigate to a target destination, and avoid obstacles or threats using airborne sensor data. These obstacles and threats include both mobile and stationary objects. Furthermore, the problem becomes significantly more complex when the target point is rigidly attached to a moving vehicle. Figure 1 illustrates the process of a UAV performing an LRGDE mission. To address this problem, researchers have achieved notable progress toward the challenge described above.
The traditional solution for LRGDEs is to employ path-planning algorithms to obtain the optimal path from the start position to the target position [5] and to design a controller using trajectory-tracking algorithms to guide the UAV and follow the predefined path. To date, various path-planning algorithms have been proposed to generate optimal paths, such as visibility graphs [6]; randomly sampling search algorithms including rapidly exploring random tree [7]; probabilistic roadmaps [8]; heuristic algorithms including a-star [9], sparse a-star [10], and d-star [11]; biologically inspired optimization algorithms including genetic algorithms [12] and sand cat swarm optimization [13]; and so on. Subsequently, various trajectory-tracking algorithms have been developed to design controllers that guide UAVs to track the optimal path [14]. Although such solutions can effectively address the aforementioned problem, they suffer from inherent drawbacks and practical limitations. First, obtaining detailed environmental obstacle information (e.g., mountains, no-fly zones, and other threats) is highly challenging, which inherently limits the applicability of path-planning algorithms. Second, the above scheme lacks sufficient flexibility to adjust paths in real time when unexpected threats arise. Third, traditional path-planning algorithms struggle to generate feasible trajectories in the presence of moving targets. Fourth, conventional path-planning algorithms typically require excessive computation time to derive optimal solutions, making them unsuitable for real-time applications. Therefore, to achieve high-quality LRGDEs, it is essential to endow UAVs with fully autonomous flight capability, in which the UAV’s control variables are computed in real time by the algorithm, and the decision-making foundation consists of the UAV’s flight state, observed environmental information from airborne sensors, and predefined mission objectives. To realize autonomous flight for UAVs, a critical engineering problem must be solved—namely, UAV-maneuvering decision-making in LRGDEs—where the UAV’s control variables must be determined periodically based on all available information and the scheduled mission objective. Due to the Markovian nature of UAV-maneuvering decision-making in LRGDEs, some researchers have modeled this problem using Markov Decision Processes (MDPs) and proposed various algorithms based on reinforcement learning (RL) [15,16,17,18]. When applied to the UAV-maneuvering decision-making problem in LRGDEs, RL algorithms can reduce reliance on prior environmental information due to their model-free nature. Meanwhile, since deep reinforcement learning (DRL) algorithms inherently possess end-to-end characteristics, policies can be generated rapidly based on real-time observations of the surrounding environment, enabling the algorithms to address challenges posed by unknown or unexpected threats and satisfy strict real-time decision-making requirements.
When applying RL to solve the UAV-maneuvering decision-making problem in LRGDEs, a key challenge remains that limits RL performance in LRGDE tasks. During policy training for LRGDEs, the decision-making basis at each step mainly relies on observations from airborne sensors, such as radar, electro-optical sensors, and other active or passive sensors. For passive sensors such as optical sensors, only high-precision relative angular information of the target can be obtained, which leads to severe information dimensionality reduction. Even if the relative distance of the target can be inferred through algorithms, data at such a precision level is hardly sufficient to support decision-making. For active sensors such as radar, the relative distance of the target can be acquired, but high-precision velocity information cannot be obtained simultaneously due to the inherent Doppler ambiguity of radar. Thus, owing to incomplete state information from airborne sensors in LRGDEs, to apply RL to LRGDEs, the UAV-maneuvering decision-making problem in LRGDEs should be modeled based on Partially Observable Markov Decision Processes (POMDPs) rather than MDPs. One of the most well-known algorithms is the Monte Carlo Tree Search (MCTS) algorithm, which serves as the core technique of AlphaZero developed by Google DeepMind [19]. Based on MCTS, several researchers have proposed various MCTS variants for POMDPs, such as POMCP [20], DESPOT [21], and other improved MCTS-based algorithms. Although these modified MCTS-based algorithms show effectiveness in solving POMDPs, their excessive computational resource consumption and high time intensity significantly hinder practical engineering applications, since the belief state distribution is estimated using massive samples via particle filters, and the optimal solution is computed through planning methods. In addition, another category of modified algorithms based on DRL leverages the advantages of Recurrent Neural Networks (RNNs) in processing time-series data and estimates the missing dimensions in observation information. Since the problem to be solved is not highly complex, observations such as images, signals, and other forms of information are stacked along the time dimension, and temporal sequences are used to support policy decision-making; this approach is adopted in the input processing of DQN [22], DDPG [23], TD3 [24], PPO [25], SAC [26], and other DRL algorithms. As the number of missing dimensions increases, problems must be modeled as POMDPs, and such simple tricks can no longer solve the problem effectively. Therefore, several researchers have proposed various modified DRL algorithms based on RNNs, such as G-IPOMDP-PPO [27], CBC-TP Net [28], Fast-RDPG [29], and FRSVG(0) [30], among others. All the aforementioned methods introduce RNNs to improve traditional DRL algorithms for solving POMDPs, where RNNs are used to eliminate the impact caused by insufficient state information dimensions. The modified DRL algorithms can overcome the drawbacks of MCTS-based algorithms and satisfy real-time decision-making requirements. Therefore, to better address the UAV-maneuvering decision-making problem in LRGDEs, it is crucial to model LRGDEs based on POMDPs and improve policy networks using RNNs.
Aside from the partial observability in LRGDEs, two additional challenges arise: limited experience and sparse reward. Owing to the complex dynamics of LRGDEs, each simulation episode is time-consuming. Compared with standard MDPs, fewer state transitions are generated, resulting in limited experience that impedes policy convergence. Meanwhile, obtaining effective transitions with clear rewards requires extensive interactions between the agent and the environment, leading to slow convergence and unsatisfactory performance. This well-known challenge in policy training is referred to as sparse reward. To address these issues, previous studies have achieved promising results using methods such as uniform experience replay (UER) [31] and prioritized experience replay (PER) [32]. When applying UER and PER, before policy training using historical data, it is necessary to determine the sampling probability of each transition and sample a batch of transitions accordingly. This procedure is known as transition selection, in which transitions are evaluated from multiple perspectives. Based on UER and PER, various improved experience replay (ER) methods have been proposed, such as HER [33], DCRL [34], ERO [35], CHER [36], ACER [37], among others. These methods differ from traditional ER in their transition evaluation strategies. Although such modified algorithms enhance the sampling efficiency and overall performance of agents during policy training, relying on only a single criterion to evaluate sample priority, such as TD-error, mission goal, cumulative reward, or sample diversity, is insufficient. To improve the sampling efficiency of RL, it is essential to evaluate transitions using multiple features. In addition to transition selection, adjusting the agent’s training process is critical for mitigating limited experience and sparse reward in MDPs. Operating at a higher level than transition selection, reshaping the transitions generated during policy training is beneficial; this is referred to as transition modification. Instead of directly modifying transitions, the timing of experience collection can be adjusted by designing intermediate task milestones, i.e., learning task objectives from simple to difficult. Accordingly, several researchers have structured the policy training process using task curricula and proposed corresponding algorithms, such as PCCL [38], NavACL [39], CURROT [40], CDRL [41], SCG [42], and other curriculum-based DRL algorithms [43]. These studies demonstrate that deliberate scheduling of the training process can significantly improve the sampling efficiency of RL and accelerate policy convergence. For both ER and the training process of RL algorithms, curriculum learning (CL), which has been widely discussed above, provides valuable guidance for our work. Inspired by human education and animal training, researchers in deep learning (DL) initially proposed CL to accelerate neural network training [44]. Subsequently, CL has been adopted to enhance traditional RL algorithms with the goal of accelerating policy convergence [45]. Although many algorithms have been developed to improve transition modification, manually adjusting task goals remains inadequate. Focusing on LRGDEs, this study aims to overcome the limitations of traditional methods caused by limited experience and sparse reward: (1) reliance on a single transition evaluation criterion, (2) inflexible and manually designed training processes. 
By integrating advances in both transition selection and transition modification, we seek to improve the ER mechanism while reshaping the agent’s training process based on curriculum learning.
To effectively execute LRGDE missions, this work addresses the aforementioned challenges and focuses on UAV-maneuvering decision-making for LRGDEs. The main contributions of this paper are summarized as follows:
(a)
To tackle the partial observability challenge in LRGDEs, we redesign the structure of traditional actor–critic networks and construct a Bi-LSTM-modified Policy Network (BLPN). During decision-making, historical observation sequences are vectorized and fed into the BLPN to mitigate the impact of incomplete observations in LRGDEs.
(b)
To fully exploit the latent value of limited transitions under sparse rewards, we propose an Adaptive Multi-Feature Evaluation Experience Replay (AMFER) method. This method integrates an adaptive dynamic termination (ADT) mechanism and a multi-feature transition evaluation (MFTE) model to reshape the policy training process, fully unlocking the potential value of data via a “from easy to difficult” learning paradigm.
(c)
We further propose the REDCRL algorithm, which integrates the proposed BLPN and AMFER. The effectiveness of REDCRL is verified through extensive simulation experiments and comparisons with conventional DRL algorithms. Experimental results and analyses demonstrate that REDCRL significantly accelerates policy convergence and improves the performance of the trained policies.

2. Related Work

In this section, we review the state of the art (SOTA) in DRL for UAVs, RL for POMDPs, and CL for RL.

2.1. DRL for UAVs

In 2015, DeepMind proposed the Deep Q-Network (DQN) algorithm in Nature, which achieved performance comparable to top human players in Atari games [22]. Owing to its outstanding performance and end-to-end characteristics, it has attracted researchers from various disciplines to explore its engineering applications. Meanwhile, the Deep Deterministic Policy Gradient (DDPG) algorithm was introduced to address the dimensional explosion caused by continuous action and state spaces [23]. Furthermore, to mitigate the overestimation bias inherent in DQN and DDPG training, the TD3 algorithm was developed. It integrates clipped double-Q learning, target policy smoothing, and delayed policy update to alleviate overestimation bias arising from the Bellman equation and function approximation [24].
Based on the aforementioned algorithmic breakthroughs, various DRL methods have been applied across diverse research fields, particularly to UAV-maneuvering decision-making problems. Zhang et al. proposed a UAV autonomous maneuver decision-making algorithm for route guidance based on double Q-learning with prioritized experience replay. They formulated the UAV-maneuver decision-making problem in the route guidance task and established a corresponding decision-making model based on MDPs [16]. Li et al. presented a DDPG-based UAV-maneuver decision-making algorithm with PER for autonomous airdrop missions. They defined two key problems, namely the turn-round problem and the guidance problem, in autonomous airdrop scenarios and constructed the corresponding decision-making model [15]. Zijian et al. introduced a novel double-screening sampling strategy and developed the Relevant Experience Learning DDPG (REL-DDPG) algorithm inspired by human learning, which further enhances the influence of the learning process on action selection in the current state [44]. Kaifang et al. investigated the motion control problem of UAVs navigating autonomously through uncertain environments. They proposed a DRL-based motion control method that integrates two difference-amplifying strategies with traditional DRL algorithms, employing an improved Lyapunov guidance vector field approach to track waypoints generated by the DRL framework [18]. Hu et al. developed an advanced DRL method for safe autonomous motion control in complex unknown environments. They introduced asynchronous curriculum experience replay to overcome the limitations of PER, which uses multithreading to update priorities asynchronously and assigns reasonable priorities to improve experience diversity [37].
Based on the above review, DRL can be employed to address UAV-maneuvering decision-making in LRGDEs and overcome the drawbacks of traditional methods such as path planning and trajectory tracking. However, conventional DRL algorithms still struggle to effectively handle the partial state observability challenge in LRGDEs. In this paper, we enhance traditional DRL algorithms to better utilize partially observable state information in LRGDEs.

2.2. RL for POMDPs

In 2018, Google DeepMind proposed AlphaZero, an intelligent agent capable of playing Chess, Go, and Shogi [19]. AlphaZero integrates deep neural networks (DNNs) with MCTS, enabling it to learn from scratch without relying on human expertise beyond basic rules and ultimately achieving superhuman performance. Silver et al. proposed Partially Observable Monte Carlo Planning (POMCP) [20], which features two key components: Monte Carlo sampling and a black-box simulator. POMCP is the first general-purpose planner that achieves high performance in large, unfactored POMDPs. Somani et al. introduced the Regularized Determinized Sparse Partially Observable Tree (R-DESPOT) [21], which employs randomized scenario sampling and regularization to balance policy value and complexity within a compact tree structure. This method alleviates the curse of dimensionality and enables efficient real-time decision-making in high-dimensional POMDPs. Zheng et al. proposed PPO for POMDP with Guidelines under Dense Reward (G-IPOMDP-PPO) [27], which incorporates image-state observations and a dense reward function. This method mitigates decision-making uncertainty and inefficiency in complex uncertain environments and solves multi-ship collision-avoidance problems. Zhang et al. developed vectorized actor–critic networks based on the Coronally Bidirectionally Coordinated with Target Prediction Network (CBC-TP Net) [28]. This approach integrates a vectorized extension of MADDPG for policy training, ensuring robust performance in both normal and anti-damage scenarios for multi-UAV pursuit–evasion games in obstacle-rich environments. Wang et al. proposed Fast-RDPG within an actor–critic framework. They constructed actor and critic networks using RNNs and adopted online DRL to map sensory observations to UAV control commands. This method outperforms state-of-the-art (SOTA) approaches in large-scale complex environments where navigation is modeled as POMDPs [29]. Xue et al. introduced a DRL algorithm named FRSVG(0) which embeds RNNs to address partial observability. This method achieves safe UAV navigation in large-scale, complex unknown environments and yields improved policy performance over the RSVG(0) algorithm [30].
As summarized above, both MCTS-based algorithms and RNN-augmented DRL methods can solve POMDPs. However, MCTS-based algorithms suffer from excessive computational resource consumption and high time complexity, which severely limit their practical engineering applications, since belief state distributions are estimated via particle filters using massive samples, and optimal solutions are derived through planning procedures. Compared with MCTS-based methods, RNN-augmented DRL algorithms exploit the strengths of RNNs in modeling time-series data and estimating missing dimensions in observation information. These approaches mitigate the adverse effects of incomplete state information, avoid the drawbacks of MCTS-based methods, and satisfy real-time decision-making requirements. Therefore, we formulate the mathematical model for the UAV-maneuvering decision-making problem in LRGDEs and propose BLPN to address partial state observability in LRGDEs.

2.3. CL for RL

During policy training with DRL algorithms, balancing exploration and exploitation is critical for effective policy learning [31]. Exploration enables algorithms to discover new samples and acquire knowledge about the target problem. In contrast, exploitation allows algorithms to refine policies using historical transitions stored in the experience memory. If the proportion of exploration exceeds that of exploitation during training, policy convergence will slow down. Conversely, insufficient exploration may lead to the loss of valuable samples and cause the policy to become trapped in a local optimum. Therefore, balancing exploration and exploitation remains essential for DRL-based policy training. When AlphaGo gained worldwide attention, the significance of experience replay was recognized, and the PER method was proposed [32]. In PER, each transition in the experience memory is assigned a TD-error. Transition priorities are determined according to their TD-errors, and the sampling probability of each transition is based on its priority. As defined by PER, the core of experience replay methods lies in the calculation of sample priorities and the design of transition sampling strategies.
In human education, a structured curriculum is typically designed to guide learners through a sequential knowledge acquisition process. It starts with the introduction of foundational concepts, such as basic arithmetic or language skills. As learners advance, more complex concepts and skills are gradually introduced, building on previously acquired knowledge. This hierarchical approach enables a more systematic and efficient learning experience. Similarly, animal training often follows a step-by-step procedure. Trainers begin with simple behaviors or tasks that the animal can easily understand and perform. For example, teaching a dog to sit or a bird to perch. As the animal masters these basic tasks, more complex behaviors are introduced, gradually shaping its ability to perform sophisticated actions. Accordingly, some deep learning researchers proposed an algorithm to accelerate neural network training, termed CL [45]. This gradual learning paradigm inspired by CL demonstrates that a “from easy to difficult” strategy is essential for policy training, helping algorithms to fully absorb knowledge embedded in samples. Some RL researchers have also attempted to employ CL to restructure the policy training process, aiming to improve policy convergence speed [46].
In general, existing approaches for improving RL with CL mainly follow these principles: (1) designing curriculum sequences based on task objectives; (2) developing dynamic goals tailored to task characteristics; (3) creating a time-varying initial state space; (4) implementing task-specific transition sampling strategies. Luo et al. proposed a precision-based continuous curriculum learning (PCCL) method, which employs a decay function to dynamically adjust precision requirements during training, thereby enhancing training efficiency and performance, especially in sparse reward scenarios [38]. Klink et al. introduced a curriculum reinforcement learning (CRL) method named CURROT, which formulates curriculum generation as a constrained optimal transport problem and interpolates between task distributions using the Wasserstein distance, thus improving performance across various tasks [40]. Ma et al. presented a curriculum-based deep reinforcement learning (CDRL) approach for quantum control. This method constructs curricula with tasks defined by fidelity thresholds, enabling agents to learn in an easy-to-difficult order and transfer knowledge across tasks [41]. Xiao et al. proposed an end-to-end deep reinforcement learning framework augmented with curriculum learning and a novel Sim2Real transfer method. This framework enables quadrotors to navigate through narrow gaps by dividing training into two phases with gradually shrinking gap sizes, alleviating reward sparsity and the simulation-to-reality gap [43]. Hu et al. developed an asynchronous curriculum experience replay (ACER) framework that uses multithreaded asynchronous priority updates, a temporary experience pool, and a first-in-useless-out (FIUO) mechanism to enhance UAV autonomous motion control in unknown dynamic environments, achieving faster convergence and better final performance compared with TD3 [37].
As summarized above, these approaches can be categorized into transition selection methods and transition modification methods. Although transition selection methods achieve diversified experience replay by computing the sampling probability of each transition, and transition modification methods reshape the RL policy training process by generating subtask goals from a “from easy to difficult” perspective, relying on only a single criterion to evaluate sample priority and merely manually adjusting task goals remain inadequate. Therefore, this work aims to improve the ER mechanism of RL algorithms while restructuring the agent’s training process based on CL, by comprehensively integrating advances in both transition selection and transition modification.

3. Problem Formulation

In this section, we present the definition of LRGDEs and derive the UAV-maneuvering decision-making problem from it. We then establish the UAV-maneuvering decision-making model for LRGDEs based on POMDPs.

3.1. Description of LRGDE

As mentioned above, we aim to address the UAV-maneuvering decision-making problem in LRGDE missions, where the UAV is required to fly from a starting point to a specific area and detect a moving target within that area, laying the foundation for subsequent targeting. In addition, the UAV must also be guided to avoid obstacles and threats within the area. Furthermore, we formulate the mathematical model for LRGDEs in detail.
As shown in Figure 2, the UAV departs from a random starting point, with the objective of flying around the moving target while avoiding real-time detected threats. In this paper, the flight state of the UAV is defined by its position X_UAV, azimuth ψ_UAV, and velocity V_UAV. Each surrounding i-th threat is characterized by its position X_thr^i, velocity V_thr^i, and influence radius R_thr^i. Threats typically include mountains, no-fly zones, mobile warning radars, patrol squads, and other movable or stationary entities. The target to be detected is defined by its position X_tgt and velocity V_tgt, and R_det denotes the UAV’s detection radius. The relative distance vector between the UAV and the target is D_LOS, and the relative azimuth is ψ_LOS.
Moreover, the UAV can observe threats in its surroundings via airborne sensors, such as radar, LiDAR, and the Infrared Search and Track System (IRST), among others. Figure 3 illustrates the perception process of the UAV observing threats in its surroundings. In practice, only the threats in the forward direction of the UAV will affect the execution of specific tasks. Therefore, we consider incorporating the threats within the front hemisphere of the UAV’s forward direction into the state space, and the set of threats observable by the UAV is defined as W_thr. Furthermore, the set W_thr is divided into W_thr^l, W_thr^f, and W_thr^r based on the relative azimuth δψ_thr, whose values for W_thr^l, W_thr^f, and W_thr^r lie in the ranges [30°, 90°], [−30°, 30°], and [−90°, −30°], respectively. Subsequently, we can obtain the nearest threats on the left front side, directly in front, and on the right front side, which are used to support the formulation of rational decisions.
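For illustration, the sector rule above can be expressed as a short Python helper. The sketch below is illustrative only: the function and field names, the units, and the handling of threats lying exactly on a sector boundary are assumptions rather than part of our implementation.

```python
def nearest_threats_by_sector(threats):
    """threats: list of dicts with 'distance' (m) and 'rel_azimuth' (deg, UAV body frame)."""
    sectors = {"left": (30.0, 90.0), "front": (-30.0, 30.0), "right": (-90.0, -30.0)}
    nearest = {name: None for name in sectors}
    for thr in threats:
        for name, (lo, hi) in sectors.items():
            if lo <= thr["rel_azimuth"] <= hi:
                if nearest[name] is None or thr["distance"] < nearest[name]["distance"]:
                    nearest[name] = thr
    return nearest
```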
In addition, we define the failed termination condition as
\| X_{UAV} - X_{thr}^{i} \| \le R_{thrc}^{i}, \quad i = 1, 2, \ldots, N_{thr} \qquad (1)
where N_thr indicates the number of threats and R_thrc^i indicates the cutoff radius of the i-th threat. Equation (1) shows that if the UAV flies into the cutoff area of any threat, the mission fails. In addition, we define the success condition of LRGDEs as
\| X_{UAV} - X_{tgt} \| \le R_{det} \qquad (2)
which means the UAV completes the mission successfully when the target enters the UAV’s detection range R_det.
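Both termination conditions can be checked directly from the relative distances. The snippet below is a minimal illustrative check of Equations (1) and (2); the data layout for threats is assumed for the example.

```python
import numpy as np

def check_termination(x_uav, x_tgt, r_det, threats):
    """threats: iterable of (x_thr, r_thrc) pairs; positions are NumPy position vectors."""
    for x_thr, r_thrc in threats:
        if np.linalg.norm(x_uav - x_thr) <= r_thrc:
            return "failed"      # UAV inside a threat's cutoff area, Equation (1)
    if np.linalg.norm(x_uav - x_tgt) <= r_det:
        return "success"         # target within the UAV's detection range, Equation (2)
    return "running"
```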

3.2. UAV-Maneuvering Decision-Making Model for LRGDE Based on POMDPs

In this section, we model the UAV-maneuvering decision-making problem for LRGDEs using POMDPs, and define the state space, observation space, action space, and reward function as follows.

3.2.1. State Space, Observation Space and Action Space

(a)
The definition of state space
Considering the definition of LRGDEs, the state space is defined as
S = \{ v_{UAV}, h_{UAV}, n_z, d_{LOS}, \delta\psi_{LOS}, v_{tgt}, \psi_{tgt}, d_{thr}^{f}, \delta\psi_{thr}^{f}, v_{thr}^{f}, \psi_{thr}^{f}, d_{thr}^{l}, \delta\psi_{thr}^{l}, v_{thr}^{l}, \psi_{thr}^{l}, d_{thr}^{r}, \delta\psi_{thr}^{r}, v_{thr}^{r}, \psi_{thr}^{r} \} \qquad (3)
where v_UAV, h_UAV, and n_z are the UAV’s speed, height, and steering overload. d_LOS and δψ_LOS represent the distance and azimuth of the target area with respect to the UAV. v_tgt and ψ_tgt represent the velocity and azimuth of the target. d_thr^f and δψ_thr^f indicate the distance and azimuth of the threat in front of the UAV, d_thr^l and δψ_thr^l those of the threat on the left, and d_thr^r and δψ_thr^r those of the threat on the right. The remaining terms v_thr^f, ψ_thr^f, v_thr^l, ψ_thr^l, v_thr^r, and ψ_thr^r denote the corresponding velocities and azimuths of these threats.
(b)
The definition of observation space
Given the airborne sensors carried by the UAV, the observation space is defined in Equation (4), based on the definitions of the state space and the perception process in LRGDEs.
O = \{ v_{UAV}, h_{UAV}, n_z, d_{LOS}, \delta\psi_{LOS}, d_{thr}^{f}, \delta\psi_{thr}^{f}, d_{thr}^{l}, \delta\psi_{thr}^{l}, d_{thr}^{r}, \delta\psi_{thr}^{r} \} \qquad (4)
In contrast to the state space, the observation space does not contain the velocities and azimuths of the target or the threats. This loss of observation dimensions poses significant challenges to decision-making.
(c)
The definition of action space
Based on the UAV kinematic model involved in the simulation environment for LRGDEs, we establish the action space as
A_s = \{ n_z \} \qquad (5)
where n z is the steering overload of the UAV.

3.2.2. Knowledge-Enhanced Reward Model

Based on the definitions of various termination conditions for LRGDEs, the basic reward function for LRGDEs is defined as
R_b(s, a) = \begin{cases} 10.0 & \text{Successful Termination} \\ -1.0 & \text{Takeover Termination} \\ -1.0 & \text{Failed Termination} \\ 0.0 & \text{Otherwise} \end{cases} \qquad (6)
To address the issue of sparse rewards and accelerate policy convergence, we design a shaped reward function based on the reward shaping (RS) method by incorporating expert experience and domain knowledge. In this paper, the shaped reward function consists of a potential-based reward and an event-based reward, as defined in Equation (7).
F_K(s, a, s') = F_P(s, a, s') + F_E(s, a, s') = \sum_i P_i(s, a, s') + \sum_j e_j(s, a, s') \qquad (7)
where F_P(s, a, s') represents the potential-based reward, F_E(s, a, s') indicates the event-based reward, P_i(s, a, s') is the differential potential function of each factor, and e_j(s) indicates the event-based function of the j-th event, which is defined in Equation (8).
e_j(s) = r_j e^{-t} \qquad (8)
where t is the time elapsed since the event occurred and r_j indicates the immediate reward of the j-th event. For example, we define an event-based function e_{ψ_LOS}(s) = r_{ψ_LOS} e^{-t}, whose event is that the relative azimuth of the target point with respect to the UAV becomes smaller than the maximum azimuth range of the airborne sensors. When the event first occurs, the agent receives r_{ψ_LOS}; in the next decision-making period, the agent obtains r_{ψ_LOS} e^{-1}.
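The decaying event reward can be tracked with a small stateful helper. The following sketch is illustrative only; whether t is measured in seconds or in decision periods, and the exact bookkeeping of the firing time, are assumptions.

```python
import math

class EventReward:
    """Illustrative event-based reward term following Equation (8)."""

    def __init__(self, immediate_reward):
        self.r_j = immediate_reward
        self.t_fired = None                      # time at which the event first occurred

    def __call__(self, event_active, t_now):
        if self.t_fired is None:
            if not event_active:
                return 0.0
            self.t_fired = t_now
            return self.r_j                      # full reward when the event first occurs
        return self.r_j * math.exp(-(t_now - self.t_fired))   # decays afterwards
```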
As shown in Equation (9), the differential potential function consists of the distance factor function, the azimuth factor function and the threats factor function.
F_P(s, a, s') = \gamma \Big[ \Phi_{d_{LOS}}(s') + \Phi_{\psi_{LOS}}(s') + \sum_{i=0}^{2} \Phi_{thr}^{i}(s') \Big] - \Big[ \Phi_{d_{LOS}}(s) + \Phi_{\psi_{LOS}}(s) + \sum_{i=0}^{2} \Phi_{thr}^{i}(s) \Big] \qquad (9)
where Φ_{d_LOS}(s) denotes the potential function of the distance between the UAV and the target, Φ_{ψ_LOS}(s) denotes the potential function of the relative azimuth of the target with respect to the UAV, and Φ_thr^i(s), i = 0, 1, 2, represent the potential functions of the front, left, and right threats, respectively. Φ_{d_LOS}(s) is defined as
\Phi_{d_{LOS}}(s) = \frac{1}{2} \big[ f_{norm}(d_{LOS}) \big]^2 \qquad (10)
where f_norm(x) denotes the normalization function of the parameter x, constructed as
f_{norm}(x) = \frac{x_{max} - x}{x_{max} - x_{min}} \qquad (11)
where x belongs to the interval [x_min, x_max]. Moreover, Φ_{ψ_LOS}(s) is defined as
\Phi_{\psi_{LOS}}(s) = \frac{1}{2} \big[ f_{norm}(\delta\psi_{LOS}) \big]^2 \qquad (12)
Finally, Φ_thr^i(s), i = 0, 1, 2, is defined as
\Phi_{thr}^{i}(s) = \frac{1}{2} \big[ f_{norm}(d_{thr}^{i}) \big]^2 \qquad (13)
where d_thr^i, i = 0, 1, 2, represents d_thr^f, d_thr^l, and d_thr^r, respectively.
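The shaped potential reward of Equations (9)–(13) can be sketched as below. This is illustrative only: the limits dictionary, the use of the absolute azimuth, and in particular the negative sign on the threat term (so that approaching a threat lowers the potential) are assumptions rather than the exact form used in our experiments.

```python
def f_norm(x, x_min, x_max):
    """Normalization function of Equation (11)."""
    return (x_max - x) / (x_max - x_min)

def potential(obs, limits):
    """obs: dict with 'd_los', 'dpsi_los', and 'd_thr' (three sector distances).
    limits: dict mapping each factor to its (x_min, x_max) pair."""
    phi_d = 0.5 * f_norm(obs["d_los"], *limits["d_los"]) ** 2
    phi_psi = 0.5 * f_norm(abs(obs["dpsi_los"]), *limits["psi_los"]) ** 2
    # Assumed sign: approaching a threat lowers the potential, so the shaping
    # reward penalizes flying toward threats.
    phi_thr = -sum(0.5 * f_norm(d, *limits["d_thr"]) ** 2 for d in obs["d_thr"])
    return phi_d + phi_psi + phi_thr

def shaping_reward(obs, next_obs, limits, gamma=0.99):
    """Potential-based shaping F_P(s, a, s') = gamma * Phi(s') - Phi(s), Equation (9)."""
    return gamma * potential(next_obs, limits) - potential(obs, limits)
```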
On the other hand, we define event-based reward functions based on important events, whose hyperparameters are listed in Table 1. The appearance condition specifies when each event is considered effective, and r_j denotes its immediate reward. For LRGDEs, we define six events to construct the event-based reward function.

4. RNN-Enhanced Diverse Curriculum-Driven Learning Algorithm

In this section, we present the structure of the REDCRL. First, we propose BLPN to address the UAV-maneuvering decision-making problem for LRGDEs. Second, we propose AMFER, which integrates an ADT mechanism and an MFTE model, to fully exploit the latent value of limited transitions under sparse reward conditions.

4.1. Structure of Algorithm

It is well known that DRL has been applied to solve numerous problems across diverse research fields, including Atari games, chess, robotics, and other decision-making and control problems. In this paper, we adopt the actor–critic architecture as the foundation of REDCRL, which features model-free and end-to-end characteristics and is well suited for UAV-maneuvering decision-making in LRGDEs. In particular, we employ Bi-LSTM to construct both the actor network and the critic network within the actor–critic architecture, leveraging Bi-LSTM’s strengths in processing temporal sequential data to mitigate the partial state observability problem in LRGDEs. On the other hand, to address the challenges of limited experience and sparse reward encountered when applying traditional DRL algorithms to LRGDEs, we introduce CL to reshape the policy training process. This includes a curriculum-based dynamic task objective and a comprehensive transition-evaluation experience replay method.
Figure 4 illustrates the overall framework of REDCRL, which consists of the BLPN, AMFER (incorporating an ADT mechanism and an MFTE model), and other necessary modules. While employing REDCRL to solve LRGDEs, the BLPN determines the current action a_t of the UAV, leveraging observations o_t and rewards r_t derived from the LRGDE environment. During each episode, the ADT mechanism in AMFER decides whether to terminate the episode using a dynamic task objective, which acts as the criterion for judging LRGDE termination conditions. Furthermore, each transition (o_t, a_t, r_t, o_{t+1}) generated via interactions between the BLPN policy and the LRGDE environment is stored in the experience memory. The MFTE model in AMFER is then adopted to evaluate the priority of each stored transition. Accordingly, a batch of transitions is sampled from the experience memory based on the sampling probabilities assigned according to their priorities. Finally, these sampled transitions are used to update and optimize the BLPN. In the subsequent sections, we elaborate on the design of the BLPN and AMFER (including the ADT mechanism and MFTE model).

4.2. Bi-LSTM-Modified Policy Networks

To address the sequential decision-making problem posed by partial observability in LRGDEs, a policy network that can effectively process historical observation–action sequences is required. Although various neural architectures are available for time-series data modeling, we select the Bi-LSTM for its proven effectiveness in capturing mid-range temporal dependencies and its computational efficiency for real-time embedded system applications. Compared with recent alternatives such as the Transformer architecture, which excels in modeling long-range contextual dependencies, Bi-LSTM imposes a much lower computational cost. Figure 5 illustrates the structure of the BLPN, which consists of an actor network and a critic network. Both the actor network and the critic network follow a three-module structure: an input block, a middle block, and an output block. Since the input information includes the current action a_t and the historical trajectory h_t^{N_o} (a time-series vector consisting of recent states and actions), the input block is built upon Bi-LSTM, a variant of RNNs that processes sequential data in both forward and backward directions. The middle block and the output block are constructed using fully connected networks (FCNs), consistent with the policy networks used in traditional DRL algorithms.
Specifically, the input to the actor network π_po(h_t^{N_o} | φ_po) consists of observations from the LRGDE environment, which are categorized into normal observations and sequential observations. At each decision step, the agent receives an observation o_t that includes the normal observation o_t^1 for the UAV’s flight state and the relative observation o_t^2 for the target and threats. In this work, we consider that o_t^1 represents fully observable information about the UAV’s flight state and does not need to be included in the sequential observation, which helps to reduce network size and conserve computational resources. In contrast, o_t^2 represents partially observable information about the target and threats relative to the UAV, which can be obtained via various airborne sensors.
Subsequently, o_t^2 and a_t over a period of time are used to construct the sequential observation, as illustrated in Figure 5. During interactions between the agent and the LRGDE environment, h_t^{N_o} is input into π_po(h_t^{N_o} | φ_po), which outputs the corresponding action a_t. In contrast to the actor network, the current action a_t is incorporated into the input of the critic network Q_po(h_t, a_t | θ_po). The critic network then outputs the Q-value Q_po(h_t, a_t) to guide the optimization of the actor network π_po(h_t^{N_o} | φ_po).
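A schematic PyTorch sketch of the BLPN actor is given below. It is illustrative only: the hidden sizes, the use of the final forward and backward hidden states, and the concatenation of the fully observable flight state are assumptions, not the exact network configuration used in our experiments.

```python
import torch
import torch.nn as nn

class BLPNActor(nn.Module):
    """Schematic BLPN actor: Bi-LSTM input block over the sequential observation,
    fully connected middle and output blocks (layer sizes are illustrative)."""

    def __init__(self, seq_dim, flight_dim, action_dim, hidden=128):
        super().__init__()
        self.bilstm = nn.LSTM(seq_dim, hidden, batch_first=True, bidirectional=True)
        self.middle = nn.Sequential(
            nn.Linear(2 * hidden + flight_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.out = nn.Linear(hidden, action_dim)

    def forward(self, seq_obs, flight_obs):
        # seq_obs: (batch, N_o, seq_dim) history of partial observations o_t^2 and actions
        # flight_obs: (batch, flight_dim) fully observable UAV flight state o_t^1
        _, (h_n, _) = self.bilstm(seq_obs)
        feat = torch.cat([h_n[-2], h_n[-1], flight_obs], dim=-1)  # fwd + bwd final states
        return torch.tanh(self.out(self.middle(feat)))            # bounded action (n_z)
```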
The critic network Q_po(h_t, a_t | θ_po) can be optimized based on TD3 [24], and the optimization target is defined as
y = r_t + \gamma \min_{i=1,2} Q_{po}^{i\prime}\big( h_{t+1}, \pi_{po}^{\prime}(h_{t+1}) \big) \qquad (14)
where Q_po^{i′}(h, a | θ_po^{i′}), i = 1, 2, are the target critic networks, and the transition (h_t, a_t, r_t, h_{t+1}) is used to calculate the optimization target y. The parameters of the critic networks can then be optimized with the loss function defined as
L(\theta^{i}) = \sum_i \big[ y_i - Q_{po}^{i}(h_t, a_t \mid \theta_{po}^{i}) \big]^2 \qquad (15)
where Q_po^i(h, a | θ_po^i), i = 1, 2, are the critic networks and y_i is the target of the corresponding critic network.
The policy gradient of the actor network π_po(h_t^{N_o} | φ_po) is calculated according to the RDPG theorem [47]:
\nabla_{\phi_{po}} J = \frac{1}{N} \sum_i \nabla_{a} Q_{po}(h_i, a \mid \theta_{po}) \big|_{a = \pi_{po}(h_i \mid \phi_{po})} \, \nabla_{\phi_{po}} \pi_{po}(h_i \mid \phi_{po}) \qquad (16)
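For reference, the clipped double-Q target of Equation (14) with target policy smoothing can be sketched as follows; the noise parameters and action bound are assumed values, not the settings used in our experiments.

```python
import torch

def td3_target(r, h_next, target_actor, target_critics, gamma=0.99,
               noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """Clipped double-Q target of Equation (14) with target policy smoothing."""
    with torch.no_grad():
        a_next = target_actor(h_next)
        noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip, noise_clip)
        a_next = (a_next + noise).clamp(-act_limit, act_limit)    # target policy smoothing
        q1 = target_critics[0](h_next, a_next)
        q2 = target_critics[1](h_next, a_next)
        return r + gamma * torch.min(q1, q2)                      # clipped double-Q
```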

4.3. Adaptive Multi-Feature Evaluation Experience Replay

Based on the structure of REDCRL, it is critical to restructure the policy training process for effective policy learning. Reasonable restructuring of the policy training process helps to address the challenges of limited experience and sparse reward when applying DRL to agent training in LRGDEs. As illustrated in Figure 6, we design the structure of AMFER, which comprises an ADT mechanism and an MFTE model.
During the policy training process, to reduce the computational burden of algorithm training, we design an ADT mechanism following the “from easy to difficult” principle, which is analogous to the sequential learning processes in human education and animal training. The ADT mechanism generates a dynamic task objective based on the state of the LRGDE environment to determine whether the mission in the current episode is successful or unsuccessful. When determining the current task objective, we select the most appropriate goal for the current policy from a predesigned curriculum comprising a sequence of tasks arranged in ascending order of difficulty.
Furthermore, during interactions between the UAV-maneuvering decision-making policy and the LRGDE environment, transitions are stored in the experience memory, where a batch of transitions is sampled and utilized to optimize the policy. Once a transition is stored in the experience memory, the priority of each transition is evaluated by the MFTE model, which is developed based on the traditional ER method and integrates a comprehensive evaluation function. Prior to policy training, these transitions are sampled based on the sampling probabilities associated with the priorities of the transitions.

4.3.1. Adaptive Dynamic Termination Mechanism

For LRGDEs, the successful termination of the mission depends on the distance between the UAV and the target, as well as the UAV’s detection range. If the current distance between the UAV and the target satisfies Equation (2), the simulation episode is terminated successfully. Accordingly, the UAV’s detection range R det defines the mission difficulty for the current episode. In the ADT mechanism, we define a dynamic task objective that can automatically adjust itself based on the performance of the current policy.
First, we define a curriculum, which consists of a sequence of subtasks and is formulated as
C_{RM} = \{ g_1, g_2, \ldots, g_n \} \qquad (17)
where g_i is a subtask and G represents the goal of g_i.
In this paper, considering the definition of LRGDEs, the subtask goal G is related to Equation (2), and the key parameter of the dynamic task goal, i.e., the control variable of G, is the UAV’s detection range R_det. Therefore, we can construct a curriculum for LRGDEs in terms of R_det, defined over R_det ∈ [R_det^min, R_det^max]. Within the curriculum C_RM, every subtask g_i is defined by R_det^i, and all subtasks are sorted by the difficulty of the subtask goal in ascending order. For sorting the subtasks, the difficulty evaluation function f_CL^TG(g_i) ∈ [0, 1] is defined as
f_{CL}^{TG}(g_i) = \frac{R_{det}^{max} - R_{det}}{R_{det}^{max} - R_{det}^{min}} \qquad (18)
According to the function defined above, we can obtain a curriculum arranged in ascending order of difficulty. In addition, during the policy training process, we design a module to update the current task objective, which in turn determines the termination condition of the current episode.
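A minimal example of building such a curriculum over the detection range is shown below; the concrete range of R_det and the number of subtasks are placeholder values, not the settings used in our experiments.

```python
import numpy as np

def build_curriculum(r_det_min, r_det_max, n_subtasks):
    """Subtasks are candidate detection ranges sorted from easy to difficult
    according to the difficulty function of Equation (18)."""
    def difficulty(r_det):
        return (r_det_max - r_det) / (r_det_max - r_det_min)      # f_CL^TG in [0, 1]
    candidates = np.linspace(r_det_min, r_det_max, n_subtasks)
    return sorted(candidates, key=difficulty)                     # large R_det (easy) first

# Placeholder values for illustration only.
curriculum = build_curriculum(r_det_min=2_000.0, r_det_max=20_000.0, n_subtasks=5)
```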
In this work, we consider that the difficulty perceived by the current policy is associated with the current task objective of LRGDEs. Therefore, after constructing the curriculum, we adopt the Dynamic Success Rate (DSR) metric to evaluate the performance of the current policy and determine whether to switch the current subtask objective. The DSR can be computed as
DSR = \frac{N_{DSR}^{s}}{N_{DSR}} \qquad (19)
where N_DSR denotes the total number of simulation episodes used for DSR calculation, and N_DSR^s represents the number of successfully completed episodes. Moreover, during policy training, the simulation results used for DSR calculation are obtained from the most recent N_DSR experiments. Accordingly, we can determine whether to switch the current subtask goal by checking whether DSR ≥ CSR, where the Converged Successful Rate (CSR) acts as the threshold for judging whether the current policy has converged.
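The DSR check can be implemented with a sliding window over the most recent episodes, as sketched below; the window size of 50 episodes and the CSR threshold of 0.8 follow the definitions given in Section 5.

```python
from collections import deque

class DSRTracker:
    """Dynamic success rate over the most recent N_DSR episodes, Equation (19)."""

    def __init__(self, n_dsr=50, csr=0.8):
        self.results = deque(maxlen=n_dsr)       # 1 = success, 0 = failure
        self.csr = csr

    def record(self, success):
        self.results.append(1 if success else 0)

    def should_switch(self):
        if len(self.results) < self.results.maxlen:
            return False                          # wait until the window is full
        return sum(self.results) / len(self.results) >= self.csr
```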
Figure 7 illustrates the structure of the ADT mechanism designed in this section. During the operation of the ADT mechanism, we first construct a curriculum according to the definition of the target LRGDE. During policy training, the DSR metric is evaluated to quantify the policy performance based on feedback from the LRGDE environment. The DSR is then used to determine whether to update the current termination condition. After the termination condition is switched to a more difficult subtask objective, the old transitions collected under the previous subtask objective are retained in the experience memory. While the policy is learning subtask g_t, these old transitions from subtask g_{t-1} still supply the policy with valuable prior knowledge.
The pseudocode of the ADT mechanism is shown in Algorithm 1.
Algorithm 1. The ADT Mechanism in AMFER
1:Initialize a curriculum C_RM including a set of subtasks and select g_1 as the current subtask.
2:while not terminate the training experiment do
3:    Obtain the state s_t from the LRGDE environment;
4:    Calculate the current DSR according to the current subtask g_t;
5:    if DSR ≥ CSR then
6:       Switch the current goal to the next subtask g_{t+1};
7:    end if
8:end while

4.3.2. Multi-Feature Transition Evaluation Model

In addition to the ADT mechanism, reshaping the sampling process from the experience memory is critical for effective policy training. Traditional ER methods, such as UER and PER, can facilitate online learning for DRL algorithms within the training environment. However, they face limitations when applied to UAV-maneuvering decision-making problems in LRGDEs, since relying solely on TD-error to evaluate transition priorities is inadequate. In addition to UER and PER, various CL-based algorithms with improved transition selection strategies have been proposed in recent years, including HER, DCRL, ERO, CHER, CER, LSER, and ACER. However, evaluating sample priority from only a single factor (e.g., TD-error, mission objective, cumulative reward, or sample diversity) remains insufficient. Accordingly, we design the MFTE model to comprehensively evaluate the priorities of transitions.
Figure 8 illustrates the structure of the MFTE model, whose core component is a comprehensive transition evaluation function composed of three evaluation factor terms: learning value, diversity, and intrinsic value. After each sampling and policy training iteration, the priorities of the sampled transitions are re-evaluated according to the current policy, and the updated priorities are used to compute the sampling probabilities for the next iteration. The comprehensive transition evaluation function is defined as
f_{CL}^{TE}(i) = f_{CL}^{LV}(i) + f_{CL}^{DV}(i) + f_{CL}^{IV}(i) \qquad (20)
where f_CL^LV(i) denotes the learning value factor, f_CL^DV(i) denotes the diversity factor, and f_CL^IV(i) denotes the intrinsic value factor.
Within the MFTE model, the learning value of a transition characterizes the degree to which the transition contributes to policy optimization, and the learning value factor f_CL^LV(i) is computed based on the TD-error δ_i of the transition. In DL, transitions with large TD-error magnitudes require a smaller learning step to adapt to the curvature of the objective function. The larger the TD-error δ_i, the greater the impact of the transition on the current policy, and thus the more the transition should be utilized for learning. In CL, f_CL^LV(i) must satisfy specific constraints [34,37]. In this work, we design the learning value factor function, which is formulated as
f_{CL}^{LV}(i) = \begin{cases} \exp\big(k_1 (\delta - \lambda)\big) & \delta \le \lambda \\ \exp\big(k_2 (\lambda - \delta)\big) & \delta > \lambda \end{cases} \qquad (21)
where k_1 and k_2 adjust the slope of the function, δ denotes the TD-error (loss) of the i-th transition under the current network, and λ represents the curriculum factor that indicates the learning stage.
Apart from the learning value, maintaining data diversity is also critical for effective policy training. In ER, excessive reuse of redundant transitions can lead to severe overtraining, and the policy is highly prone to becoming trapped in a local optimum. Therefore, maintaining the diversity of transitions sampled from the experience memory is one of the key issues in preventing the policy from becoming trapped in a local optimum. In this work, to achieve the exploration–exploitation tradeoff and maintain sufficient exploration in the state and action spaces, the diversity factor f_CL^DV(i) is adopted to enhance the diversity of sampled transitions, and f_CL^DV(i) is formulated as
f_{CL}^{DV}(i) = \frac{N_{DV}^{max} - N_{DV}^{i}}{N_{DV}^{max}} \qquad (22)
where N_DV^max denotes the maximum number of times any transition in the experience memory has been sampled, and N_DV^i represents the number of times transition i has been sampled.
Finally, to facilitate the policy’s faster convergence to a near-optimal solution, we incorporate the reward of each transition into the comprehensive evaluation value. In DRL algorithms, rewards can guide the policy to learn effective behaviors. In other words, rewards serve as the specified learning guidance for the mission. Therefore, from a non-generalizability perspective, prioritizing transitions with higher rewards for learning can accelerate the policy’s convergence. Accordingly, we specifically integrate the intrinsic value factor f_CL^IV(i) into the comprehensive transition evaluation function, and f_CL^IV(i) is formulated as
f_{CL}^{IV}(i) = \frac{r_i}{r_{max}} \qquad (23)
where r_i is the reward of the i-th transition and r_max denotes the maximum reward across all transitions in the experience memory.
Based on the comprehensive transition evaluation function f_CL^TE(i), the priority p_i = f_CL^TE(i) of each transition can be obtained. Accordingly, we can compute the sampling probability P(i) of each transition, which is defined as
P(i) = \frac{p_i^{\alpha}}{\sum_k p_k^{\alpha}} \qquad (24)
where α is a hyperparameter used to control the influence of transition priority on the sampling probability. Moreover, since the transition distribution from the environment is altered, Importance Sampling (IS) weights [48] are adopted to correct the distribution bias caused by AMFER. The cumulative gradient of the critic network can be calculated as
\Delta^{i} = \sum_j \omega_j \, \delta_j \, \nabla_{\theta_{po}^{i}} Q_{po}^{i}(h, a \mid \theta_{po}^{i}), \quad i = 1, 2 \qquad (25)
where δ_j is the TD-error of the j-th transition, ω_j is the IS weight of the j-th transition, and Δ^i is the cumulative gradient of the critic network Q_po^i(h, a | θ_po^i).
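A condensed sketch combining Equations (20)–(25) is given below: the three evaluation factors form the priority, Equation (24) maps priorities to sampling probabilities, and PER-style IS weights correct the induced bias. The hyperparameter values (k_1, k_2, α, β), the clipping of priorities to keep them positive, and the normalization of the IS weights are assumptions for this illustration.

```python
import numpy as np

def learning_value(delta, lam, k1=1.0, k2=1.0):
    """Learning value factor of Equation (21), peaking when |TD-error| equals lambda."""
    d = abs(delta)
    return np.exp(k1 * (d - lam)) if d <= lam else np.exp(k2 * (lam - d))

def mfte_priorities(td_errors, sample_counts, rewards, lam):
    """Comprehensive evaluation of Equation (20): learning value + diversity + intrinsic value."""
    lv = np.array([learning_value(d, lam) for d in td_errors])
    n_max = max(sample_counts.max(), 1)
    dv = (n_max - sample_counts) / n_max                          # Equation (22)
    iv = rewards / max(rewards.max(), 1e-8)                       # Equation (23)
    return lv + dv + iv

def sample_batch(priorities, batch_size, alpha=0.6, beta=0.4):
    """Priority-based sampling of Equation (24) with PER-style IS weights."""
    p = np.clip(priorities, 1e-6, None) ** alpha                  # keep priorities positive
    probs = p / p.sum()
    idx = np.random.choice(len(probs), batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()
```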
The pseudocode of the MFTE mechanism is shown in Algorithm 2.
Algorithm 2. The MFTE Model in AMFER
1:for each transition i in B do
2:    Calculate the TD-error of transition i according to the current π_po(h | φ_po) and Q_po^j(h, a | θ_po^j), j = 1, 2;
3:    Calculate the learning value factor f_CL^LV(i) according to Equation (21);
4:    Count N_DV^max and N_DV^i, and calculate the diversity factor f_CL^DV(i) according to Equation (22);
5:    Search for r_max in B and calculate the intrinsic value factor f_CL^IV(i) according to Equation (23);
6:    Calculate the comprehensive transition evaluation value f_CL^TE(i) according to Equation (20);
7:    Update the priority p_i of the i-th transition based on f_CL^TE(i) and calculate the sampling probability P(i) of the i-th transition.
8:end for

4.4. Policy Training Process of REDCRL

As illustrated in Figure 4, during REDCRL training in LRGDEs, the BLPN generates the action a_t based on the observation o_t and the reward r_t obtained from the LRGDE environment. Subsequently, transitions generated during training are stored in the experience memory. After each training epoch, the MFTE model integrated into AMFER is adopted to evaluate the priorities of sampled transitions. Meanwhile, the ADT mechanism integrated into AMFER is employed to determine whether to terminate the current simulation episode and update the task objective of the LRGDE environment.
Based on the implementations of the BLPN and AMFER modules, the integrated REDCRL algorithm is presented in Algorithm 3.
Algorithm 3. The REDCRL algorithm for UAV-maneuvering decision-making in LRGDEs
1:Initialize policy networks π_po(h | φ_po) and Q_po^i(h, a | θ_po^i), i = 1, 2, and their target networks π_po′(h | φ_po′) and Q_po^{i′}(h, a | θ_po^{i′}), i = 1, 2.
2:for m = 1 to M do
3:    Reset the environment and obtain the initial observation o_0;
4:    Construct the history trajectory h_0^{N_o} and output a_0 = π_po(h_0^{N_o} | φ_po);
5:    for t = 1 to T do
6:        Observe the current observation o_t and calculate the current action a_t;
7:        Observe the next observation o_{t+1}, receive the reward r_t from the environment, and store the transition (o_t, a_t, r_t, o_{t+1}).
8:        if t mod k = 0 then
9:            Reset the gradient Δ = 0 of the critic networks with IS;
10:            Sample a batch of transitions according to the sampling probabilities of the transitions;
11:            Accumulate the parameter gradients of Q_po^i(h, a | θ_po^i), i = 1, 2, according to Equation (25);
12:            Update the parameters of Q_po^i(h, a | θ_po^i) according to Δ^i with learning rate η_c;
13:            Update the parameters of the actor network π_po(h | φ_po) according to Equation (16);
14:            Update the priorities of the transitions used for training according to the MFTE model defined in Algorithm 2;
15:        end if
16:        if state s_{t+1} meets Equation (2) then
17:            Start the next episode;
18:            Update the task goal according to the ADT mechanism defined in Algorithm 1;
19:        else if state s_{t+1} satisfies Equation (1) then
20:            Start the next episode;
21:        end if
22:    end for
23:end for

5. Simulation Experimental Results and Analysis

In this section, we design a series of experiments to verify the effectiveness of REDCRL and demonstrate its performance advantages compared with several traditional DRL algorithms. Furthermore, we evaluate the performance of the trained policy in the LRGDE environment and conduct ablation experiments to validate the individual contributions of each module to the overall performance of REDCRL.

5.1. Experiments Settings

Before conducting the experiments, we design the experimental scenario, including the initial environmental state and other environmental parameters. Meanwhile, we select several representative traditional algorithms and compare them with REDCRL to validate its performance. To quantify the algorithmic performance, we present several quantitative evaluation metrics. Finally, we specify the hyperparameter settings used in the experiments.

5.1.1. Experiments’ Scenario

In LRGDEs, the mission zone is restricted to a 100 km × 100 km airspace, and the UAV’s altitude is bounded between 500 and 10,000 m. Figure 9 illustrates the layout of the mission zone, which consists of the UAV launch area, threat areas, and the target area. Before each simulation run, the initial position of the UAV is randomly generated within the UAV launch area. Meanwhile, the initial positions of threats are randomly deployed within the threat areas. The target is reset to a predefined initial position.
The initial positions of the UAV and threats are randomly sampled from a uniform distribution. Specifically, these threats are distributed around the straight line connecting the UAV and the target. At the start of each simulation episode, the UAV policy makes a decision every 0.1 s.
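For reproducibility, the following sketch illustrates one way to generate such an initial scenario. The launch-area bounds, the target's fixed start point, and the lateral scatter of the threats are assumptions chosen to match the description above, not the exact values used in our experiments.

```python
import numpy as np

# Illustrative scenario reset; the concrete launch-area bounds and the
# threat-placement rule are assumptions consistent with the description above.
ZONE = 100_000.0                 # 100 km x 100 km mission zone, in metres
ALT_MIN, ALT_MAX = 500.0, 10_000.0
DECISION_DT = 0.1                # the policy acts every 0.1 s

def reset_scenario(rng, n_threats=5, launch_box=((0, 10_000), (40_000, 60_000))):
    # UAV start position sampled uniformly inside the launch area.
    uav = np.array([rng.uniform(*launch_box[0]),
                    rng.uniform(*launch_box[1]),
                    rng.uniform(ALT_MIN, ALT_MAX)])
    # Target reset to a fixed initial position on the far side of the zone.
    target = np.array([95_000.0, 50_000.0, 0.0])
    # Threats scattered around the straight line connecting UAV and target.
    threats = []
    for _ in range(n_threats):
        frac = rng.uniform(0.2, 0.8)                   # position along the line
        centre = uav[:2] + frac * (target[:2] - uav[:2])
        offset = rng.uniform(-15_000, 15_000, size=2)  # lateral scatter
        threats.append(np.clip(centre + offset, 0.0, ZONE))
    return uav, target, np.array(threats)
```

A generator such as rng = np.random.default_rng(seed) can then be passed in to reset the scenario before each training episode.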

5.1.2. Contrast Settings

To evaluate the performance of REDCRL, several state-of-the-art (SOTA) algorithms are also employed to train policies in the LRGDE environment for comparative analysis. DDPG [23] and TD3 [24], two traditional DRL algorithms based on the actor–critic structure, serve as the baseline methods for REDCRL. RDPG [47] and RTD3 [49], recurrent variants of DDPG and TD3, respectively, are tailored for POMDPs. DCRL [34] is a classic DRL variant in which ER is enhanced based on CL. Accordingly, we validate the effectiveness of REDCRL by comparing it with DDPG, TD3, RDPG, RTD3, and DCRL on two key aspects: policy convergence speed and the performance of the trained policies.
In addition to the comparisons with the aforementioned DRL variants, we also conduct an ablation experiment to validate the individual contribution of each module to REDCRL's overall performance. In this experiment, we run REDCRL, TD3 + BLPN, TD3 + AMFER, and TD3 independently in the LRGDE environment, since TD3 serves as the baseline trainer of REDCRL, while BLPN and AMFER are its two core modules.

5.1.3. Evaluation Metrics

To quantify the performance of REDCRL and the other baseline comparison algorithms, we define several metrics to evaluate the convergence speed and overall performance of these algorithms.
  • Successful Rate (SR): The ratio of successfully completed missions to the total number of experiments.
  • Dynamic Successful Rate (DSR): The ratio of successfully completed missions in the 50 most recent experiments.
  • Peak Successful Rate (PSR): The maximum successful rate during the whole course of policy training.
  • Valley Successful Rate (VSR): The minimum successful rate during the whole course of policy training.
  • Learning Time (LT): The number of training episodes required for the DSR to first reach the CSR.
  • Converged Policy Stability (CPS): The standard deviation of DSR after DSR reaches CSR.
  • Converged Policy Performance (CPP): The average DSR after DSR reaches CSR.
  • Average Inference Time (AIT): The average decision time of the policy during the whole course of policy training.
In addition to the metrics defined above, we define a convergence threshold, namely the Converged Successful Rate (CSR), to determine whether the policy has converged. In accordance with expert experience and common conventions in the field, the CSR is set to 80% in this work. A minimal sketch of how these metrics can be computed from the per-episode success record is given below.
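The sketch below computes DSR, PSR, VSR, LT, CPP, and CPS from a binary per-episode success record, using the 50-episode window for DSR and a CSR of 0.8 as defined above. It is an illustration of the metric definitions rather than the exact evaluation code used in our experiments.

```python
import numpy as np

# Evaluation-metric sketch: `successes` is a per-episode record
# (1 = mission completed, 0 = failed). Window = 50 episodes, CSR = 0.8.
def evaluate(successes, window=50, csr=0.8):
    successes = np.asarray(successes, dtype=float)
    # DSR_t: success rate over the 50 most recent episodes.
    dsr = np.array([successes[max(0, t - window + 1):t + 1].mean()
                    for t in range(len(successes))])
    sr = successes.mean()                  # overall Successful Rate
    psr, vsr = dsr.max(), dsr.min()        # Peak / Valley Successful Rate
    converged = np.where(dsr >= csr)[0]
    if converged.size == 0:                # policy never reached the CSR
        return {"SR": sr, "PSR": psr, "VSR": vsr, "LT": None}
    lt = int(converged[0])                 # Learning Time: first episode with DSR >= CSR
    tail = dsr[lt:]
    return {"SR": sr, "DSR": dsr[-1], "PSR": psr, "VSR": vsr,
            "LT": lt, "CPP": tail.mean(), "CPS": tail.std()}
```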

5.1.4. Parameter Settings of REDCRL

Based on the definitions of the LRGDE environment and REDCRL, the parameter settings for REDCRL are provided in the GitHub repository at https://github.com/KeLi0000/REDCRML (accessed on 12 January 2026).
In addition, we train REDCRL with different history trajectory lengths to analyze the impact of the hyperparameter N_o and to identify a value that fully leverages the advantages of BLPN for solving POMDPs. As presented in Figure 10, we collect the training data generated by REDCRL with different values of N_o. On the PSR, VSR, CPP, and CPS metrics, REDCRL achieves its best performance when N_o = 15. Therefore, all subsequent experiments adopt REDCRL with N_o = 15 for policy training.
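As a minimal sketch, the fixed-length observation history h_t^(N_o) fed to the Bi-LSTM can be maintained as a sliding window of the last N_o observations. Padding the window with the first observation at episode start, and the observation dimension, are assumptions for illustration; the released code may handle short histories differently.

```python
from collections import deque
import numpy as np

# Sketch of maintaining the history trajectory h_t^(N_o) consumed by the BLPN.
class HistoryBuffer:
    def __init__(self, n_o=15, obs_dim=12):
        self.n_o, self.obs_dim = n_o, obs_dim
        self.buf = deque(maxlen=n_o)

    def reset(self, first_obs):
        # Assumption: pad the window with o_0 at the start of each episode.
        self.buf.clear()
        for _ in range(self.n_o):
            self.buf.append(np.asarray(first_obs, dtype=np.float32))

    def push(self, obs):
        self.buf.append(np.asarray(obs, dtype=np.float32))

    def as_input(self):
        # Shape (1, N_o, obs_dim): a batch of one sequence for the Bi-LSTM layer.
        return np.stack(self.buf)[None, ...]
```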

5.2. Training Experiments of the Algorithms

According to the experimental settings in Section 5.1, we conduct training experiments for REDCRL and the other comparison algorithms and collect the data generated during policy training. We then evaluate and analyze these training data.
Table 2 summarizes the evaluation results obtained with DDPG, RDPG, TD3, RTD3, DCRL, and REDCRL. REDCRL outperforms all other algorithms in terms of LT, with an improvement of 75.7% relative to the average of the other methods; together with the ablation results in Section 5.4.2, this indicates that the AMFER module is the main source of the accelerated policy convergence. For instance, REDCRL improves LT by approximately 83.9% over DDPG, the worst-performing method, and by 58.5% over DCRL, the best-performing baseline. The policy trained by REDCRL is also superior to the classic DRL algorithms with respect to CPP and CPS: it exceeds the other algorithms by an average of 19.1% in CPP and by approximately 46.5% in CPS, surpasses the best baseline by 6.2% and the worst by 48.3% in CPP, and outperforms the best baseline by 20.1% and the worst by 66.7% in CPS. Beyond this metric-wise analysis, REDCRL also achieves favorable peak performance in terms of PSR.
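The percentages quoted above are relative improvements; the short snippet below reproduces the LT figures directly from the Table 2 entries (improvement over the baseline average, over the worst baseline, and over the best baseline, respectively).

```python
# Reproducing the LT improvement percentages quoted above from Table 2.
lt = {"DDPG": 515, "TD3": 481, "RDPG": 295, "RTD3": 220, "DCRL": 200, "REDCRL": 83}
baselines = [v for k, v in lt.items() if k != "REDCRL"]
avg_lt = sum(baselines) / len(baselines)                                 # 342.2 episodes
print(f"vs. baseline average: {(avg_lt - lt['REDCRL']) / avg_lt:.1%}")   # ~75.7%
print(f"vs. DDPG (worst):     {(515 - 83) / 515:.1%}")                   # ~83.9%
print(f"vs. DCRL (best):      {(200 - 83) / 200:.1%}")                   # ~58.5%
```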
In detail, Figure 11 plots the curves of DSR versus training episodes for the different algorithms. These curves intuitively demonstrate that the proposed REDCRL accelerates policy convergence and improves the performance of the trained policy; in particular, the convergence speed measured by LT is significantly enhanced owing to the introduction of BLPN and AMFER.

5.3. Testing Experiments of the Trained Policy

To verify the effectiveness of the trained policy in the LRGDE task, we conducted extensive experiments in scenarios similar to the training environment. The policies trained with DDPG, RDPG, TD3, RTD3, DCRL, and REDCRL were each subjected to 100 tests under the scenario specified in Section 5.1.1. We collected and analyzed the results, with the SR performance summarized in Table 3. Based on these results, the policy trained by REDCRL outperformed those of all other algorithms: its SR was approximately 16.0% higher than the average of the other algorithms, 26.76% higher than that of the worst-performing algorithm, and 8.43% higher than that of the best-performing baseline.
Furthermore, we selected a set of test results and visualized the simulation process, including the UAV flight trajectory and the LRGDE execution flow, using the Tacview flight analysis software (version number 1.9.5), as illustrated in Figure 12. Figure 12(a1–f1) plots the UAV flight trajectories: The red solid line denotes the UAV flight trajectory, the red solid dot marks the UAV’s initial position, the red “×” denotes the UAV’s terminal position, the blue solid line denotes the target trajectory, the blue solid dot marks the target’s initial position, the blue “×” denotes the target’s terminal position, and the red dashed line represents the trajectory of the payload released by the UAV. In addition, the green hemispherical surface denotes the threat influence range, whose radius corresponds to the threat influence radius R thr i .
As demonstrated in Figure 12, the REDCRL algorithm exhibits superior performance across multiple evaluation metrics relative to the conventional DRL methods. The trajectory visualization in subfigures (f1) and (f2) reveals significantly improved smoothness and continuity: the generated path respects the UAV's curvature constraints while enabling efficient navigation in the complex threat environment. This improved trajectory quality translates directly into higher flight stability and more efficient mission execution.
A comparative analysis with the other baseline methods (subfigures (a1)–(e1)) further validates REDCRL’s strong capability in addressing the challenges of LRGDE missions. The algorithm achieves outstanding performance in balancing obstacle avoidance and mission objectives, realizing more accurate target capture with minimal path deviation while sustaining an optimal trade-off between safety and efficiency during the entire navigation process.

5.4. Additional Experiments for REDCRL

In addition to the comparative experiments between REDCRL and other conventional algorithms, we performed extra experiments to evaluate the generalization ability of REDCRL by testing the trained policy in complex LRGDE scenarios with varying numbers of threats. Furthermore, to validate the effects of the BLPN and AMFER modules on REDCRL performance, we designed corresponding ablation experiments and analyzed the results.

5.4.1. Testing REDCRL in LRGDEs with Different Numbers of Threats

To evaluate the generalization ability of REDCRL in complex scenarios, we conducted extensive experiments to test the policy trained by REDCRL. In these experiments, the number of threats in the environment was set to 5, 6, 7, 8, 9, and 10, respectively.
Table 4 presents the SR of the trained policy when tested in LRGDE scenarios with different numbers of threats. We observe that the SR gradually decreases as the number of threats increases, but remains above 60% even when 10 threats are present. These results confirm that the policy trained by REDCRL exhibits favorable generalization ability. By further training the REDCRL policy in LRGDE scenarios with more threats, an even more robust policy capable of reliably handling LRGDE tasks with denser threats can be obtained.
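The sweep behind these results can be sketched as follows; `make_env` and `run_episode` stand in for the test harness and are assumptions, with only the sweep structure itself taken from the experiment description.

```python
# Sketch of the threat-density sweep behind Table 4. `make_env(num_threats)`
# builds an LRGDE test scenario and `run_episode(policy, env)` returns True on
# mission success; both are placeholders for the actual test harness.
def threat_sweep(policy, make_env, run_episode,
                 threat_counts=(5, 6, 7, 8, 9, 10), n_tests=100):
    results = {}
    for n_thr in threat_counts:
        env = make_env(num_threats=n_thr)
        wins = sum(run_episode(policy, env) for _ in range(n_tests))
        results[n_thr] = wins / n_tests          # SR at this threat density
    return results
```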

5.4.2. Ablation Experiments on REDCRL

To evaluate the impacts of BLPN and AMFER on REDCRL performance, we conducted ablation experiments. In these experiments, we removed BLPN and AMFER from REDCRL separately to examine their individual effects on both policy convergence speed and converged performance.
As presented in Table 5, the policy was trained separately under four settings: REDCRL without BLPN and AMFER (Case 1), REDCRL with BLPN but without AMFER (Case 2), REDCRL with AMFER but without BLPN (Case 3), and the complete REDCRL (Case 4). We evaluated the training process of each configuration in terms of PSR, LT, CPP, and CPS. Comparing Case 1 with Case 2 shows that BLPN significantly enhances the performance and stability of the algorithm in terms of CPP and CPS, while also accelerating policy convergence in terms of LT, owing to its strength in handling partial observability. Comparing Case 1 with Case 3 verifies that AMFER substantially speeds up policy convergence in terms of LT and boosts the performance of the trained policy, thanks to its ability to address limited experience and sparse rewards. Comparing Case 1 with Case 4 shows that both convergence speed and final performance are notably improved when BLPN and AMFER are introduced together.
Therefore, BLPN and AMFER both contribute to improving the convergence speed and final converged performance of REDCRL, where BLPN mainly enhances policy performance, while AMFER is primarily dedicated to accelerating policy convergence. Both modules exert positive effects on REDCRL performance, and the combined improvement achieved by using both modules is significantly greater than that of either module alone.

6. Conclusions

In the present work, we described the LRGDE mission and refined the UAV-maneuvering decision-making model for LRGDEs based on POMDPs. Building on these definitions, we proposed REDCRL, a DRL-based algorithm for optimizing policies that perform LRGDE missions. Specifically, we designed BLPN to exploit the partially observable state information in LRGDEs, which enables superior policy performance over traditional DRL algorithms. Furthermore, we developed AMFER, consisting of the ADT mechanism and the MFTE model, to reshape the experience sampling strategy from historical data and maximize the latent utility of limited transitions under sparse reward conditions. Simulation results demonstrate that REDCRL effectively accelerates policy convergence and enhances the performance of the converged policy: in terms of LT, it achieves a 75.7% convergence speed improvement over the classical DRL algorithms on average, while in terms of CPP it outperforms them by 19.1%. REDCRL also maintains strong generalization ability in LRGDE scenarios with varying numbers of threats. Ablation experiments further validate that the integration of BLPN and AMFER yields far greater performance gains than either module individually.
In future work, we will investigate targets with intelligent maneuvering policies, which will transform the UAV-maneuver decision-making problem in LRGDEs into a complex decision-making task with game-theoretic characteristics. We will also construct a physical experimental testbed in the laboratory and deploy the policy learned by REDCRL on physical UAVs for real-world flight validation.

Author Contributions

Conceptualization, K.L. and K.Z.; methodology, K.L., Z.W. and H.P.; software, K.L., B.W. and J.C.; validation, K.L. and B.Y.; formal analysis, K.L.; investigation, K.L.; resources, K.Z. and H.P.; data curation, K.Z.; writing—original draft preparation, K.L. and Z.W.; writing—review and editing, Z.W. and B.Y.; visualization, B.W.; supervision, K.Z.; project administration, K.Z.; funding acquisition, K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Project (Grant No. JCKY2024205B032) and the Fundamental Research Funds for the Central Universities (Grant No. G2025KY06217).

Data Availability Statement

The code used in this study is publicly available on GitHub in the repository https://github.com/KeLi0000/REDCRML (accessed on 12 January 2026).

Conflicts of Interest

Author Binlin Yuan was employed by the company Sichuan Tengden Sci-Tech Innovation Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAVs: Unmanned Aerial Vehicles
LRGDE: Long-Range UAV Guidance in Dynamic Environments
REDCRL: RNN-Enhanced Diverse Curriculum-Driven Learning
BLPN: Bi-LSTM-Modified Policy Networks
AMFER: Adaptive Multi-Feature Evaluation Experience Replay
TD3: Twin Delayed Deep Deterministic Policy Gradient
MDPs: Markov Decision Processes
RL: Reinforcement Learning
DRL: Deep Reinforcement Learning
POMDPs: Partially Observable Markov Decision Processes
MCTS: Monte Carlo Tree Search
RNN: Recurrent Neural Networks
UER: Uniform Experience Replay
PER: Prioritized Experience Replay
ER: Experience Replay
CL: Curriculum Learning
DL: Deep Learning
DQN: Deep Q Network
DDPG: Deep Deterministic Policy Gradient
LOS: Line of Sight
ADT: Adaptive Dynamic Termination
MFTE: Multi-Feature Transition Evaluation
SR: Successful Rate
DSR: Dynamic Successful Rate
CSR: Converged Successful Rate
PSR: Peak Successful Rate
VSR: Valley Successful Rate
LT: Learning Time
CPS: Converged Policy Stability
CPP: Converged Policy Performance
AIT: Average Inference Time

References

  1. Menouar, H.; Guvenc, I.; Akkaya, K.; Uluagac, A.; Kadri, A.; Tuncer, A. UAV-enabled intelligent transportation systems for the smart city: Applications and challenges. IEEE Commun. Mag. 2017, 55, 22–28. [Google Scholar] [CrossRef]
  2. Lyu, X.; Li, X.; Dang, D.; Dou, H.; Wang, K.; Lou, A. Unmanned Aerial Vehicle (UAV) Remote Sensing in Grassland Ecosystem Monitoring: A Systematic Review. Remote Sens. 2022, 14, 1096. [Google Scholar] [CrossRef]
  3. Ukaegbu, U.; Tartibu, L.; Okwu, M. Unmanned Aerial Vehicles for the Future: Classification, Challenges, and Opportunities. In Proceedings of the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems, Durban, South Africa, 5–6 August 2021. [Google Scholar]
  4. Lee, S.; Song, Y.; Kil, S.H. Feasibility analyses of real-time detection of wildlife using UAV-derived thermal and RGB images. Remote Sens. 2021, 13, 2169. [Google Scholar] [CrossRef]
  5. Yang, L.; Qi, J.; Xiao, J. A Literature Review of UAV 3D Path Planning. In Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, 29 June–4 July 2014. [Google Scholar]
  6. Kaluđer, H.; Brezak, M.; Petrović, I. A Visibility Graph-Based Method for Path Planning in Dynamic Environments. In Proceedings of the 34th International MIPRO ICT and Electronics Convention, Opatija, Croatia, 23–27 May 2011. [Google Scholar]
  7. Sun, Q.; Li, M.; Wang, T.; Zhao, C. UAV Path Planning Based on Improved Rapidly-Exploring Random Tree. In Proceedings of the Chinese Control and Decision Conference, Shenyang, China, 9–11 June 2018. [Google Scholar]
  8. Yan, F.; Liu, Y.S.; Xiao, J.Z. Path planning in complex 3D environments using a probabilistic roadmap method. Int. J. Autom. Comput. 2013, 10, 525–533. [Google Scholar] [CrossRef]
  9. Tseng, F.H.; Liang, T.T.; Lee, C.H.; Der Chou, L.; Chao, H.C. A Star Search Algorithm for Civil UAV Path Planning With 3G Communication. In Proceedings of the 10th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014. [Google Scholar]
  10. Meng, B. UAV Path Planning Based on Bidirectional Sparse A* Search Algorithm. In Proceedings of the International Conference on Intelligent Computation Technology and Automation, Changsha, China, 11–12 May 2010. [Google Scholar]
  11. Ferguson, D.; Stentz, A. Using interpolation to improve path planning: The Field D* algorithm. J. Field Robot. 2006, 23, 79–101. [Google Scholar] [CrossRef]
  12. Silva Arantes, J.; Silva Arantes, M.; Motta Toledo, C.F.; Júnior, O.T.; Williams, B.C. Heuristic and genetic algorithm approaches for UAV path planning under critical situation. Int. J. Artif. Intell. Tools 2017, 26, 1760008. [Google Scholar] [CrossRef]
  13. Deng, W.; Feng, J.Y.; Zhao, H.M. Autonomous Path Planning via Sand Cat Swarm Optimization with Multi-Strategy Mechanism for Unmanned Aerial Vehicles in Dynamic Environment. IEEE Internet Things J. 2025, 12, 26003–26013. [Google Scholar] [CrossRef]
  14. Lee, H.; Kim, H.J. Trajectory tracking control of multirotor from modelling to experiments: A survey. Int. J. Control Autom. 2017, 15, 281–292. [Google Scholar] [CrossRef]
  15. Li, K.; Zhang, K.; Zhang, Z.; Liu, Z.; Hua, S.; He, J. A UAV Maneuver Decision-Making Algorithm for Autonomous Airdrop Based on Deep Reinforcement Learning. Sensors 2021, 21, 2233. [Google Scholar] [CrossRef]
  16. Zhang, K.; Li, K.; He, J.; Shi, H.; Wang, Y.; Niu, C. A UAV Autonomous Maneuver Decision-Making Algorithm for Route Guidance. In Proceedings of the International Conference on Unmanned Aircraft Systems, Athens, Greece, 1–4 September 2020. [Google Scholar]
  17. Li, K.; Zhang, K.; Liu, H.; Li, Y.; Wang, Q. An UAV Maneuvering Decision-Making Algorithm Based on Deep Transfer Reinforcement Learning. In Proceedings of the Congress of the International Council of the Aeronautical Science, Florence, Italy, 9–13 September 2024. [Google Scholar]
  18. Kaifang, W.; Bo, L.; Xiaoguang, G.; Zijian, H.; Zhipeng, Y. A learning-based flexible autonomous motion control method for UAV in dynamic unknown environments. J. Syst. Eng. Electron. 2021, 32, 1490–1508. [Google Scholar] [CrossRef]
  19. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. A General Reinforcement Learning Algorithm that Masters Chess, Shogi, and Go Through Self-Play. Science 2018, 362, 1140–1144. [Google Scholar] [CrossRef]
  20. Silver, D.; Veness, J. Monte-Carlo planning in large POMDPs. In Proceedings of the 23rd Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 6–9 December 2010. [Google Scholar]
  21. Somani, A.; Ye, N.; Hsu, D.; Lee, W.S. DESPOT: Online POMDP Planning with Regularization. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013. [Google Scholar]
  22. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  23. Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  24. Fujimoto, S.; Hoof, H.; Meger, D. Addressing Function Approximation Error in Actor-Critic Methods. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
  25. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
  26. Haarnoja, T.; Zhou, A.; Abbeel, P.; Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
  27. Zheng, K.; Zhang, X.; Wang, C.; Zhang, M.; Cui, H. A partially observable multi-ship collision avoidance decision-making model based on deep reinforcement learning. Ocean Coast. Manag. 2023, 242, 106689. [Google Scholar] [CrossRef]
  28. Zhang, R.; Zong, Q.; Zhang, X.; Dou, L.; Tian, B. Game of Drones: Multi-UAV Pursuit-Evasion Game with Online Motion Planning by Deep Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 7900–7909. [Google Scholar] [CrossRef] [PubMed]
  29. Wang, C.; Wang, J.; Shen, Y.; Zhang, X. Autonomous Navigation of UAVs in Large-Scale Complex Environments: A Deep Reinforcement Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 2124–2136. [Google Scholar] [CrossRef]
  30. Xue, Y.; Chen, W. A UAV Navigation Approach Based on Deep Reinforcement Learning in Large Cluttered 3D Environments. IEEE Trans. Veh. Technol. 2023, 72, 3001–3014. [Google Scholar] [CrossRef]
  31. Lin, L.J. Self-improving reactive agents based on reinforcement learning, planning and teaching. Mach. Learn. 1992, 8, 293–321. [Google Scholar] [CrossRef]
  32. Schaul, T.; Quan, J.; Antonoglou, I.; Silver, D. Prioritized experience replay. In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  33. Andrychowicz, M.; Wolski, F.; Ray, A.; Schneider, J.; Fong, R.; Welinder, P.; McGrew, B.; Tobin, J.; Pieter, A.; Zaremba, W. Hindsight Experience Replay. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  34. Ren, Z.; Dong, D.; Li, H.; Chen, C. Self-Paced Prioritized Curriculum Learning with Coverage Penalty in Deep Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2216–2226. [Google Scholar] [CrossRef]
  35. Zha, D.; Lai, K.H.; Zhou, K.; Hu, X. Experience Replay Optimization. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019. [Google Scholar]
  36. Fang, M.; Zhou, T.; Du, Y.; Han, L.; Zhang, Z. Curriculum-guided Hindsight Experience Replay. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  37. Hu, Z.; Gao, X.; Wan, K.; Wang, Q.; Zhai, Y. Asynchronous Curriculum Experience Replay: A Deep Reinforcement Learning Approach for UAV Autonomous Motion Control in Unknown Dynamic Environments. IEEE Trans. Veh. Technol. 2023, 72, 13985–14001. [Google Scholar] [CrossRef]
  38. Luo, S.; Kasaei, H.; Schomaker, L. Accelerating Reinforcement Learning for Reaching Using Continuous Curriculum Learning. In Proceedings of the International Joint Conference on Neural Networks, Glasgow, UK, 19–24 July 2020. [Google Scholar]
  39. Morad, S.D.; Mecca, R.; Poudel, R.P.K.; Liwicki, S.; Cipolla, R. Embodied Visual Navigation with Automatic Curriculum Learning in Real Environments. IEEE Robot. Autom. Lett. 2021, 6, 683–690. [Google Scholar] [CrossRef]
  40. Klink, P.; Yang, H.; D’Eramo, C.; Pajarinen, J.; Peters, J. Curriculum Reinforcement Learning via Constrained Optimal Transport. In Proceedings of the 39th International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022. [Google Scholar]
  41. Ma, H.; Dong, D.; Ding, S.X.; Chen, C. Curriculum-Based Deep Reinforcement Learning for Quantum Control. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 8852–8865. [Google Scholar] [CrossRef] [PubMed]
  42. Koprulu, C.; Simão, T.D.; Jansen, N.; Topcu, U. Safety-Prioritizing Curricula for Constrained Reinforcement Learning. In Proceedings of the International Conference on Learning Representations, Singapore, 24–28 April 2025. [Google Scholar]
  43. Xiao, C.; Lu, P.; He, Q. Flying Through a Narrow Gap Using End-to-End Deep Reinforcement Learning Augmented with Curriculum Learning and Sim2Real. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 2701–2708. [Google Scholar] [CrossRef]
  44. Zijian, H.; Xiaoguang, G.; Kaifang, W.; Yiwei, Z.; Qianglong, W. Relevant experience learning: A Deep Reinforcement Learning method for UAV Autonomous Motion Planning in complex unknown environments. Chin. J. Aeronaut. 2021, 34, 187–204. [Google Scholar] [CrossRef]
  45. Wang, X.; Chen, Y.; Zhu, W. A Survey on Curriculum Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4555–4576. [Google Scholar] [CrossRef]
  46. Narvekar, S.; Peng, B.; Leonetti, M.; Sinapov, J.; Taylor, M.E.; Stone, P. Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey. J. Mach. Learn. Res. 2020, 21, 1–50. [Google Scholar]
  47. Heess, N.; Hunt, J.; Lillicrap, T.; Silver, D. Memory-based control with recurrent neural networks. arXiv 2015, arXiv:1512.04455. [Google Scholar] [CrossRef]
  48. Mahmood, A.; Hasselt, H.; Sutton, R. Weighted importance sampling for off-policy learning with linear function approximation. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  49. Yang, Z.H.; Nguyen, H. Recurrent Off-policy Baselines for Memory-based Continuous Control. arXiv 2021, arXiv:2110.12628. [Google Scholar]
Figure 1. Schematic of the LRGDE, depicting the UAV's start position, mission zone, moving target trajectory, and distributed threats (e.g., mountains or no-fly zones). The red line indicates a sample UAV path, highlighting avoidance behavior and the objective area approach.
Figure 2. The vector diagram of the relationships among the UAV, threats, and moving targets involved in LRGDEs.
Figure 3. The perception diagram of the UAV observing threats in its surroundings.
Figure 4. Structure of REDCRL, which is composed of the BLPN policy and AMFER.
Figure 5. Structure of the Bi-LSTM-modified Policy Networks (BLPNs), including the actor network and the critic network.
Figure 6. Structure of Adaptive Multi-Feature Evaluation Experience Replay (AMFER), including the adaptive dynamic termination mechanism and the multi-feature transition evaluation model.
Figure 7. Structure of the ADT mechanism. The ADT automatically adjusts the termination condition of the current episode according to the pre-generated curriculum, and this process implicitly realizes knowledge transfer among the subtasks of the curriculum.
Figure 8. Structure of the MFTE model. During sampling from the experience memory, the comprehensive transition evaluation function in MFTE is used to calculate the priorities of the stored transitions and generate their sampling probabilities.
Figure 9. Mission area definition of the LRGDE environment. Before each simulation episode, the UAV's initial position and the threats' initial positions are generated randomly within their respective regions, and the target is reset to its starting position.
Figure 10. The evaluation results of REDCRL with different N_o. Red numbers in the figure denote the best performance achieved for each evaluation metric.
Figure 11. The curves of DSR versus training episodes when training with DDPG, RDPG, TD3, RTD3, DCRL, and REDCRL. REDCRL outperforms the other algorithms in terms of convergence speed and demonstrates superior overall performance.
Figure 12. The test results of the six trained policies based on DDPG, RDPG, TD3, RTD3, DCRL, and REDCRL.
Table 1. The definitions and hyperparameters of event-based reward functions.
| NO. | Event Name | Appearance Condition | Immediate Reward |
| 1 | Lock-in of Line-of-Sight (LOS) azimuth | The LOS azimuth falls within the sensor's maximum detection range. | +1.0 |
| 2 | Positive LOS approaching rate | The UAV's approach rate toward the target area becomes positive. | +1.0 |
| 3 | Disappearance of a frontal threat | A threat previously within the UAV's frontal field of view is no longer detected. | +1.0 |
| 4 | Reduction in the number of observed threats | The total count of currently observed threats decreases. | +1.0 |
| 5 | Unlocking of LOS azimuth | The LOS azimuth moves outside the sensor's maximum detection range. | −1.0 |
| 6 | Appearance of a new threat | A new threat enters the UAV's field of view. | −1.0 |
Table 2. The evaluation results on the training data using DDPG, RDPG, TD3, RTD3, DCRL and REDCRL. REDCRL demonstrates superior performance among these algorithms.
| Metrics | DDPG | TD3 | RDPG | RTD3 | DCRL | REDCRL |
| LT | 515 | 481 | 295 | 220 | 200 | 83 |
| CPP | 58% | 78% | 73% | 81% | 71% | 86% |
| CPS | 8.266 × 10⁻² | 5.186 × 10⁻² | 6.646 × 10⁻² | 6.165 × 10⁻² | 12.449 × 10⁻² | 4.143 × 10⁻² |
| PSR | 80% | 88% | 86% | 96% | 98% | 98% |
| AIT | 2.003 ms | 2.172 ms | 20.868 ms | 21.314 ms | 2.535 ms | 20.219 ms |
Table 3. The evaluation results of trained policy with DDPG, RDPG, TD3, RTD3, DCRL and REDCRL in terms of SR.
| Algorithms | DDPG | TD3 | RDPG | RTD3 | DCRL | REDCRL |
| SR | 71% | 80% | 74% | 83% | 80% | 90% |
Table 4. The SR of the trained policy based on REDCRL in LRGDEs with different numbers of threats, N_thr.
| N_thr | 5 | 6 | 7 | 8 | 9 | 10 |
| SR | 80% | 79% | 74% | 68% | 69% | 62% |
Table 5. The results of ablation experiments for REDCRL in terms of PSR, LT, CPP, and CPS.
| NO. | BLPN Yes? | AMFER Yes? | LT | CPP | CPS | PSR |
| 1 | × | × | 437 | 65% | 12.926 × 10⁻² | 88% |
| 2 | ✓ | × | 220 | 81% | 6.165 × 10⁻² | 96% |
| 3 | × | ✓ | 200 | 71% | 12.449 × 10⁻² | 98% |
| 4 | ✓ | ✓ | 141 | 86% | 5.234 × 10⁻² | 98% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
