Article

Real-Time UAV Flight Path Prediction Using GRU Networks for Autonomous Site Assessment

by Yared Bitew Kebede 1,2,3,4, Ming-Der Yang 2,3,4,*, Henok Desalegn Shikur 1,2,3,4 and Hsin-Hung Tseng 2,3,4
1 Faculty of Civil and Water Resources Engineering, Bahir Dar University, Bahir Dar P.O. Box 26, Ethiopia
2 Department of Civil Engineering, National Chung Hsing University, Taichung 40227, Taiwan
3 Smart Sustainable New Agriculture Research Center (SMARTer), National Chung Hsing University, Taichung 40227, Taiwan
4 Innovation and Development Center of Sustainable Agriculture, National Chung Hsing University, Taichung 40227, Taiwan
* Author to whom correspondence should be addressed.
Drones 2026, 10(1), 56; https://doi.org/10.3390/drones10010056
Submission received: 8 December 2025 / Revised: 8 January 2026 / Accepted: 11 January 2026 / Published: 13 January 2026

Highlights

What are the main findings?
  • The proposed Gated Recurrent Unit (GRU) framework achieves superior real-time accuracy in single and multi-step trajectory prediction, outperforming traditional models while demonstrating exceptional stability and resistance to cumulative error propagation during simulated Global Positioning System (GPS) signal loss.
  • Analysis of the X, Y, Z, and Time trajectory comparisons confirms that GRU consistently preserves path continuity, directional changes, and altitude variation, providing high-fidelity predictions even in nonlinear or oscillatory flight segments.
What are the implications of the main findings?
  • The GRU architecture’s computational efficiency enables real-time operation on embedded systems, facilitating intelligent onboard decision-making for Unmanned Aerial Vehicles (UAVs) in field applications.
  • The model’s resilience to cumulative error propagation during multi-step forecasting provides a critical safety buffer for autonomous missions, allowing UAVs to maintain stable navigation and mission continuity even during intermittent sensor failures or GPS signal disruptions.

Abstract

Unmanned Aerial Vehicles (UAVs) have become essential tools across critical domains, including infrastructure inspection, public safety monitoring, traffic surveillance, environmental sensing, and target tracking, owing to their ability to collect high-resolution spatial data rapidly. However, maintaining stable and accurate flight trajectories remains a significant challenge, particularly during autonomous missions in dynamic or uncertain environments. This study presents a novel flight path prediction framework based on Gated Recurrent Units (GRUs), designed for both single-step and multi-step-ahead forecasting of four-dimensional UAV coordinates, Easting (X), Northing (Y), Altitude (Z), and Time (T), using historical sensor flight data. Model performance was systematically validated against traditional Recurrent Neural Network architectures. On unseen test data, the GRU model demonstrated enhanced predictive accuracy in single-step prediction, achieving a Mean Absolute Error (MAE) of 0.0036, a Root Mean Square Error (RMSE) of 0.0054, and a coefficient of determination (R2) of 0.9923. Crucially, in multi-step-ahead forecasting designed to simulate real-world challenges such as GPS outages, the GRU model maintained exceptional stability and low error, confirming its resilience to error accumulation. The findings establish that the GRU-based model is a highly accurate, computationally efficient, and reliable solution for UAV trajectory forecasting. This framework enhances autonomous navigation and directly supports the data integrity required for high-fidelity photogrammetric mapping, ensuring reliable site assessment in complex and dynamic environments.

1. Introduction

Recent advancements in autonomous control technologies have significantly expanded the operational capabilities of UAVs. This technological evolution has transitioned UAVs from traditional military-centric roles to essential tools for a broad range of civilian applications, including infrastructure inspection, public safety monitoring, traffic surveillance, environmental sensing, and object tracking [1,2]. Consequently, UAVs have emerged as versatile, high-performance mobile service platforms, favored for their ability to reduce labor costs, execute long-range operations beyond visual line of sight, and deliver greater speed and precision than manual control [3].
As UAVs are increasingly deployed for mission-critical tasks in complex, dynamic environments, ensuring safe, efficient, and adaptive navigation has become a central challenge. Urban operational environments present significant challenges for UAVs, including dynamic and unpredictable obstacles, severe electromagnetic interference, degradation of Global Positioning System (GPS) signals, and highly variable weather conditions [4]. These uncertainties can induce trajectory deviations that compromise mission success and data integrity, especially in scenarios lacking continuous human supervision [5]. Consequently, robust path planning has become a cornerstone of autonomous navigation, requiring the integration of global route generation with real-time, anticipatory local adjustments. Accurate path planning not only ensures collision-free operation but also optimizes energy consumption and mission duration, thereby improving overall operational reliability in complex environments [6,7].
Previous research has demonstrated the effectiveness of UAVs across diverse domains, including precision agriculture, disaster response, and environmental monitoring [8,9,10,11]. Despite this diversity, a common requirement across applications is maintaining stable, accurate flight trajectories to preserve data quality. From a broader perspective, Daud et al. [12] categorized UAV applications into mapping, search and rescue, and transportation, highlighting the universal need for reliable navigation systems. In the context of autonomous site assessment and infrastructure monitoring, UAV path planning strategies have progressively evolved toward real-time adaptability and predictive precision to manage both static and dynamic hazards effectively [13]. At the same time, event-camera perception combined with ego-motion compensation has enabled collision-free navigation in cluttered environments [14]. Optimization-driven methods have further improved trajectory smoothness and stability through minimum-snap formulations [15], and environmental disturbances in urban airspace have been addressed using deep learning-based wind field modeling coupled with turbulence-aware search strategies [16].
In large-scale coverage and cooperative missions, multi-UAV coordination has been enhanced through neighborhood-based dynamic programming [17] and virtual arc-length strategies that enable synchronized operation even in GPS-denied environments [18]. Resource allocation methods, including pheromone-guided exploration, have further improved the efficiency of distributed search [19]. Alongside these developments, machine learning has increasingly been applied to trajectory forecasting, ranging from regression-based object tracking methods [20] to Transformer-based Interacting Multiple Model filters for motion refinement [21]. Collectively, these studies emphasize the growing importance of integrating predictive modeling, sensor fusion, and learning-based techniques to enhance navigation robustness in high-fidelity civil engineering applications.
To address navigation in complex three-dimensional environments, deep learning–driven path-planning approaches have been introduced [22], complemented by multi-strategy fusion differential evolution algorithms to improve optimization performance [23]. Advances in adaptive control have also played a key role, including wind-adaptive formation control frameworks [24] and integrated planning-control strategies for handling trajectory uncertainty [25]. Prediction-oriented research has expanded to include three-dimensional layered visibility graphs for urban navigation [26], nonlinear time-series models for motion forecasting [27], and neural-network-based flight time estimation [28]. Beyond general navigation, learning-based methods have been applied to specialized maneuvers such as precision landing [29] and pose-based path prediction for fixed-wing aircraft [30]. These developments collectively reflect a clear trend toward the unification of advanced planning, prediction, and control to manage increasingly complex flight conditions.
Recent literature further reveals a decisive shift toward deep learning–based UAV trajectory prediction, driven by the need to exploit the intrinsic temporal structure of flight data [31]. Recurrent Neural Network (RNN) architectures have demonstrated strong performance in this domain. Bidirectional Long Short-Term Memory networks, enhanced with error compensation and data preprocessing, have achieved sub-meter prediction accuracy in real-world experiments [32]. Similarly, stacked bidirectional and unidirectional Long Short-Term Memory (LSTM) models have proven effective in forecasting future UAV positions using historical trajectory data [33]. Hybrid frameworks that combine dimensionality reduction and machine learning with neural predictors have also yielded low prediction errors [34]. A comprehensive survey by Dudukcu et al. [35] underscores the pivotal role of deep neural networks in advancing UAV autonomy. It identifies robust path planning and real-time prediction as persistent open challenges.
Despite these advances, many existing solutions do not fully address the specific requirements of autonomous site assessment. Some approaches focus primarily on control-execution layers [36], whereas others emphasize general anomaly detection using AutoML frameworks [37], often incurring excessive computational overhead. Velocity-based input models for agile maneuvers [38] and GRU-based obstacle-avoidance strategies [39] have demonstrated effectiveness in specific contexts but do not explicitly model the temporal pacing required for photogrammetric overlap. These limitations highlight the need for a trajectory prediction framework that is both temporally aware and computationally efficient, capable of operating under real-time constraints on embedded UAV hardware. Alongside deep learning, classical and metaheuristic algorithms continue to evolve for dynamic navigation. Sampling-based methods, such as Branch-Selected Rapidly Exploring Random Trees [40], have improved stability in urban environments. At the same time, optimization-based techniques, including Specialized Particle Swarm Optimization [41], Improved Grey Wolf Optimizers with local constraints [42], and enhanced Bat Algorithms for multi-UAV coordination [1], have addressed constrained three-dimensional search spaces. These efforts reinforce the importance of algorithmic innovation and highlight the growing necessity of predictive modeling that proactively manages uncertainty rather than merely reacting to it.
In specialized survey missions such as monitoring and infrastructure inspection, maintaining strict spatial accuracy is critical, as even minor deviations from planned trajectories can lead to data gaps or degraded reconstruction quality. Real-time predictive modeling is therefore essential for anticipating and mitigating trajectory errors caused by environmental disturbances. Integrating trajectory prediction directly into the flight control loop enables UAVs to maintain continuous coverage and high-fidelity data acquisition, even in Global Navigation Satellite System (GNSS)-challenged environments [43,44,45,46]. Despite substantial progress in both classical and learning-based approaches, achieving reliable real-time, multi-step-ahead trajectory prediction under signal loss remains a significant challenge. Many existing models struggle to account for environmental uncertainty and the cumulative propagation of error. To address these limitations, this study proposes a computationally efficient deep learning framework based on GRU networks for real-time UAV trajectory prediction. GRUs simplify the LSTM architecture by removing the separate memory cell and employing fewer gating mechanisms, thereby reducing computational complexity, accelerating convergence, and improving suitability for real-time inference.
Although recent advances in sequence modeling have introduced robust architectures, including Bidirectional GRUs (BiGRUs) and Transformer-based models with self-attention mechanisms, their direct deployment in real-time UAV navigation remains challenging. BiGRU models, while highly effective for offline analysis, depend on both past and future input states, thereby violating the causality requirement inherent to real-time flight control [38]. Similarly, Transformer-based approaches, despite their ability to capture long-range dependencies, suffer from quadratic computational complexity and substantial memory demands [47]. Such overhead often exceeds the processing capabilities of onboard embedded systems commonly used in commercial survey UAVs, resulting in unacceptable inference latency. Consequently, the standard GRU architecture is the most suitable choice for this application, offering an effective balance between temporal modeling capability and computational efficiency and enabling low-latency inference required for immediate flight path correction [38].
Despite substantial advances in both classical and deep learning–based approaches to UAV path planning and trajectory prediction, achieving the accuracy and foresight required for reliable real-time forecasting remains a significant challenge, particularly in complex terrain or under signal degradation. Many existing predictive models struggle to accommodate environmental uncertainties and external disturbances, which can adversely affect flight stability and compromise data quality. To address these challenges, this study proposes an advanced deep learning framework based on GRU networks for real-time, multi-step-ahead UAV trajectory prediction. The proposed approach is designed to deliver a computationally efficient, accurate, and robust solution that enhances autonomous navigation performance. By maintaining trajectory stability, the model supports the stringent spatial consistency required for high-quality photogrammetry, thereby mitigating data gaps that frequently arise during autonomous survey missions in uncertain or GNSS-challenged environments.
Building on this motivation, the present research introduces a data-driven, trajectory-aware prediction framework that leverages recurrent neural networks (RNNs) to forecast UAV spatial–temporal states, including three-dimensional position and time (X, Y, Z, T). RNNs are inherently well-suited to time-series modeling because they capture temporal dependencies and leverage historical patterns to predict future values. In UAV navigation, each predicted state is strongly dependent on prior flight dynamics, making recurrent architectures a natural choice for trajectory forecasting. However, conventional RNNs suffer from vanishing and exploding gradient problems, which limit their capacity to model long-term dependencies effectively [48]. To overcome these issues, Hochreiter and Schmidhuber introduced the Long Short-Term Memory (LSTM) architecture, which incorporates memory cells and gating mechanisms to preserve relevant information over extended sequences [48]. Although LSTMs have demonstrated strong performance in complex time-series prediction tasks, including UAV navigation, they are often associated with higher computational costs, slower convergence, and an increased risk of overfitting when trained on limited or noisy datasets [49].
As a computationally efficient alternative to LSTM networks, the GRU architecture simplifies recurrent modeling by eliminating the separate memory cell and employing fewer gating mechanisms [50]. This structural simplification significantly reduces computational complexity, accelerates training convergence, and enhances adaptability to short-term temporal dependencies, making GRUs particularly well suited for real-time UAV trajectory prediction. Owing to their balanced trade-off between predictive accuracy and computational efficiency, GRUs are highly effective in applications that demand rapid responses to dynamic environmental conditions and intermittent signal availability. Accordingly, this study adopts a GRU-based framework to enable proactive, multi-step-ahead UAV flight path prediction. By learning temporal dependencies from sequential flight telemetry, the proposed model forecasts future UAV trajectories over multiple time steps, allowing early detection of potential deviations and supporting smoother, safer, and more energy-efficient autonomous navigation.
A persistent challenge in UAV operations is the reliance on uninterrupted Global Navigation Satellite System (GNSS) signals. In real-world environments, factors such as urban canyons, dense vegetation, and electromagnetic interference frequently cause temporary signal degradation or complete loss. During such periods, UAVs must depend on predictive models to estimate future states and maintain trajectory continuity. While Transformer-based architectures have demonstrated strong performance in long-sequence modeling, their quadratic computational complexity and substantial memory requirements often exceed the capabilities of resource-constrained onboard embedded systems. In contrast, GRU networks provide a superior balance between computational efficiency and predictive performance, making them more suitable for real-time inference on commercial survey UAV platforms. For this reason, the present work prioritizes GRU-based modeling to achieve high-fidelity trajectory prediction under strict latency and hardware constraints.
Furthermore, this study emphasizes the importance of multi-step-ahead forecasting for robust trajectory prediction. Single-step-ahead prediction, which estimates only the immediate next state, is often insufficient for practical UAV operations, as minor prediction errors can accumulate over time and lead to significant trajectory drift. In contrast, multi-step-ahead forecasting predicts successive future positions without intermediate ground-truth updates, thereby providing a more stringent and realistic evaluation of model robustness. This forecasting paradigm forces the model to rely entirely on learned temporal dynamics, revealing whether prediction errors remain bounded or propagate over extended horizons. Consequently, multi-step-ahead forecasting provides deeper insight into long-term stability, generalization capability, and resilience under real-world operating conditions in which continuous state feedback is not guaranteed.
Building on prior work that applied deep learning to static image analysis, the present study advances UAV autonomy by introducing a real-time, four-dimensional spatiotemporal trajectory-prediction framework that explicitly models position and time (X, Y, Z, T). Unlike traditional two-dimensional path-planning approaches, this framework incorporates altitude and temporal progression as primary features, thereby enabling accurate modeling of three-dimensional motion and flight pacing. Moreover, unlike agility-focused models that rely primarily on velocity estimates for rapid maneuvering [38], or control-centric frameworks that optimize actuator-level behavior [36], the proposed approach explicitly models time to preserve the strict temporal cadence required for sensor triggering in photogrammetric survey missions. By accounting for both spatial and temporal constraints, the GRU-based model ensures consistent data acquisition and trajectory integrity, even during GNSS interruptions. Finally, this study systematically analyzes error propagation characteristics across recurrent architectures to identify the most resilient framework for maintaining trajectory stability under simulated signal-loss conditions, thereby directly addressing a critical limitation in existing UAV navigation systems.

2. Data Collection and Methodology

This study employs UAV-based data acquisition in the Wufeng District of Taichung to develop a robust deep learning framework for real-time trajectory prediction. Two dedicated UAV missions were conducted. The first mission consisted of systematic, time-series UAV flights to capture historical geospatial coordinates (X, Y, Z) along with temporal information. The second mission focused on high-resolution visual documentation of rice-lodging conditions to support downstream application analysis. As illustrated in Figure 1, the UAV flight paths followed a lawnmower-pattern grid, with altitude variations encoded by color to ensure comprehensive spatial coverage and facilitate topographic differentiation. The underlying satellite imagery provides contextual information on the rural–urban mosaic of the study area, highlighting both the operational complexity and the practical relevance of the dataset. Figure 2 and Figure 3 further supplement the dataset with visual evidence of agricultural activities. The collected trajectory sequences were subsequently curated and partitioned into training and testing subsets, forming a solid foundation for effective trajectory learning and generalization across unseen flight patterns.

2.1. Data Collection and Preprocessing

Prior to model training, the collected datasets underwent meticulous preprocessing to ensure data quality and model convergence. Data acquisition was performed using a DJI Mavic 3 Multispectral, utilizing its integrated GNSS receiver/antenna for positioning. The processed UAV trajectory dataset aggregates telemetry from two distinct mission flights, capturing the complete operational timeline: the first flight spans 0 to 4232 s, and the second spans 0 to 4288 s. Throughout these missions, the corresponding three-dimensional spatial coordinates (X, Y, and Z) were recorded in synchronization with the onboard camera, which acquired images at a fixed interval of 2 s. This systematic data collection yielded a consolidated total of 4262 samples, forming the foundation for the subsequent trajectory analysis and model training. To ensure robust evaluation, this dataset was partitioned into training (80%) and test (20%) sets. To prevent data leakage, all coordinate values were normalized using Min-Max scaling fitted exclusively to the training data. Temporal sequences were constructed using a sliding-window approach with a look-back window of six time steps to preserve spatiotemporal continuity, and redundant attributes were removed to optimize model efficiency.
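The following Python sketch illustrates this preprocessing pipeline under stated assumptions (a placeholder telemetry array with column order [X, Y, Z, Time]); it is a minimal example, not the authors' implementation:

```python
# Minimal sketch, assuming a (T, 4) telemetry array ordered [X, Y, Z, Time].
# Placeholder random data stands in for the real flight logs.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

LOOK_BACK = 6  # sliding-window length used in this study

def make_windows(series, look_back=LOOK_BACK):
    """Build (history, next-state) pairs from consecutive rows."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i : i + look_back])  # past look_back states
        y.append(series[i + look_back])      # next state to predict
    return np.asarray(X), np.asarray(y)

data = np.random.rand(4262, 4)               # placeholder for the 4262 samples
split = int(0.8 * len(data))                 # chronological 80/20 split
scaler = MinMaxScaler().fit(data[:split])    # fitted on training rows only (no leakage)
train = scaler.transform(data[:split])
test = scaler.transform(data[split:])        # transformed, never re-fitted

X_train, y_train = make_windows(train)
X_test, y_test = make_windows(test)
print(X_train.shape, y_train.shape)          # (samples, 6, 4) (samples, 4)
```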
Table 1 presents a detailed statistical summary of the processed UAV trajectory dataset. The dataset exhibits well-distributed spatiotemporal characteristics throughout the entire flight duration. The temporal variable ranges from 0 to 8520 s, with a mean and median of 4260 s, indicating symmetric temporal sampling. Spatially, the UAV operates within a confined yet sufficiently diverse area, with Easting (X) coordinates spanning approximately 3.0 km and Northing (Y) coordinates covering about 5.1 km. The larger standard deviation in the Northing direction (1404.6 m) than in the Easting direction (868.1 m) suggests greater longitudinal movement during flight operations. Altitude values remain relatively stable, with a mean of 191.6 m and a standard deviation of 12.0 m, reflecting controlled vertical motion typical of surveillance and mapping missions. Overall, the dataset represents a balanced trajectory profile with consistent altitude control and substantial horizontal displacement, making it well-suited for UAV navigation analysis and trajectory modeling.
The proposed framework adopts a GRU architecture, selected for its computational efficiency and its strong ability to model the sequential dependencies inherent in UAV motion dynamics. The model is trained to predict future spatial coordinates using Mean Squared Error (MSE) as the primary loss function. Predictive performance is comprehensively evaluated using multiple statistical metrics, including Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), coefficient of determination (R2), Mean Absolute Percentage Error (MAPE), and 95% prediction intervals (PI) for uncertainty quantification. To assess operational scalability, the framework is deployed in both cloud and edge computing environments. The cloud platform supports large-scale training, scenario simulation, and integration with mission-planning backends, while the edge implementation is optimized for real-time inference on resource-constrained UAV platforms. A domain-specific case study on rice lodging further demonstrates the framework’s practical applicability, showing improvements in image alignment, digital surface model generation, and orthomosaic reconstruction fidelity. Finally, the model is fine-tuned using five-fold Bayesian optimization and trained on a high-performance workstation to ensure robustness, adaptability, and high predictive precision, ultimately enabling intelligent and autonomous UAV navigation for precision agriculture and site analytics.

2.2. Development of Prediction Models

Recurrent Neural Networks (RNNs) were employed in this study for time-series data analysis due to their inherent feedback mechanism, which enables effective learning of temporal dependencies. However, traditional RNNs struggle with long-term dependencies because of vanishing and exploding gradients. To address this, Hochreiter and Schmidhuber (1997) introduced the LSTM network, which uses gating mechanisms to retain information over extended sequences [49]. LSTMs have become state-of-the-art for time-series prediction, demonstrating high accuracy, though they can be computationally intensive and prone to overfitting. The GRU, a streamlined variant of LSTM, simplifies the architecture by eliminating separate memory cells, allowing faster training and inference while maintaining comparable accuracy [50]. However, GRUs may be less effective at capturing complex long-term dependencies. To balance these trade-offs, this study evaluates both architectures alongside a baseline RNN, weighing the long-term memory capability of LSTM against the computational efficiency of GRU to identify the most robust model for UAV trajectory prediction [49].

2.2.1. Long Short-Term Memory (LSTM)

Hochreiter and Schmidhuber’s introduction of the LSTM network substantially improved neural networks’ ability to capture long-range dependencies in sequential data [47]. As an advanced variant of the RNN, LSTM addresses common issues such as vanishing and exploding gradients by incorporating specialized gating mechanisms and memory cells. The model employs backpropagation through time to compute optimization gradients, enabling proportional adjustments to network weights based on error derivatives. Its core innovation lies in the memory cell, which allows information to be retained over extended time steps. Structurally, an LSTM comprises four interacting layers, with the cell state serving as the central component for information storage and flow. Three of these layers, which comprise sigmoid-based gates, regulate the cell state by determining which information to retain, update, or discard. The sigmoid activation outputs values between 0 and 1, where 0 represents complete blockage and 1 indicates complete passage, thus precisely controlling information flow across time steps [48].
The LSTM workflow comprises three key steps. In the initial step, the ‘forget gate’, a sigmoid layer, determines the information to be discarded from the cell state by evaluating the previous hidden state, $h_{t-1}$, and the current input, $x_t$. The output, ranging between 0 and 1, signifies the proportion of information to forget and is mathematically defined by Equation (1):
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$
In the subsequent step, the input gate, denoted $i_t$, uses a sigmoid function to decide which values to update, while a tanh layer generates a new candidate vector, $\tilde{C}_t$, that could be added to the cell state. These steps are given by Equations (2) and (3):
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$
The next phase involves updating the old cell state, $C_{t-1}$, into the new cell state, $C_t$, by combining the forget gate, $f_t$, the input gate, $i_t$, and the candidate values, $\tilde{C}_t$, according to Equation (4):
$C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$
The final step involves determining the system’s output from a filtered version of the cell state. A sigmoid layer selects the relevant parts of the cell state, the tanh function maps the cell-state values into the range of −1 to 1, and the final output is obtained by element-wise multiplication, as given in Equations (5) and (6):
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$
$h_t = o_t \times \tanh(C_t)$
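For concreteness, a single LSTM step implementing Equations (1)–(6) can be sketched in a few lines of NumPy; the weight shapes and random values are illustrative assumptions, not the trained network:

```python
# Minimal NumPy sketch of one LSTM time step (Equations (1)-(6)).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; each W[k] maps the concatenated [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])       # Eq. (1): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # Eq. (2): input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])   # Eq. (3): candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde       # Eq. (4): cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])       # Eq. (5): output gate
    h_t = o_t * np.tanh(c_t)                 # Eq. (6): hidden state
    return h_t, c_t

hidden, n_in = 64, 4                         # 64 units, 4 features (X, Y, Z, T)
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((hidden, hidden + n_in)) for k in "fico"}
b = {k: np.zeros(hidden) for k in "fico"}
h, c = lstm_step(rng.standard_normal(n_in), np.zeros(hidden), np.zeros(hidden), W, b)
```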

2.2.2. Gated Recurrent Unit (GRU)

In 2014, Chung et al. introduced the GRU, which has since emerged as a highly effective architecture within the family of RNNs. The GRU was designed to mitigate the vanishing gradient problem commonly encountered in standard RNNs by introducing a simplified yet efficient gating mechanism. Structurally, GRU is a streamlined variant of the LSTM network, incorporating only two primary gates, the update gate and the reset gate, which jointly regulate the flow of information and enable the model to capture sequential dependencies effectively [48]. Unlike the LSTM, GRU does not include a separate memory cell; instead, it manages information directly within its hidden state [49].
The update gate determines how much of the past information ht−1 should be retained and how much new information from the current input xt should be incorporated, using a sigmoid activation function to control this balance. The reset gate governs the extent to which previous memory is forgotten, facilitating adaptability in dynamic temporal contexts. Together, these gates not only alleviate the vanishing-gradient problem but also improve computational efficiency and accelerate convergence. The internal tanh activation layer further constrains the range of candidate hidden-state values propagated through the network. The mathematical formulation of the update gate is expressed as Equation (7):
$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$
This gate combines the weighted values of $x_t$ and $h_{t-1}$ with a sigmoid activation, constraining information flow to the range 0 to 1. Its function is pivotal in transmitting information from past timesteps to future ones, thereby enhancing the model’s ability to capture dependencies in sequential data.
The reset gate, also employing a sigmoid activation function, decides the proportion of past information $h_{t-1}$ and current input $x_t$ to discard, as calculated by Equation (8):
$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$
The computation begins by multiplying $x_t$ and $h_{t-1}$ by their respective weights and summing the results. A sigmoid activation is then applied to constrain the output to the [0, 1] range. The reset gate is pivotal in guiding the model in determining the extent to which past information should be disregarded.
The candidate memory content incorporates the reset gate to introduce fresh information. It is computed by weighting the input $x_t$, applying element-wise multiplication between the reset gate $r_t$ and the previous output $h_{t-1}$, and passing the result through a tanh function so that only relevant past information is carried forward, as in Equation (9):
$\tilde{h}_t = \tanh(W \cdot [r_t \times h_{t-1}, x_t])$
The final memory at the current time step, held within the $h_t$ vector, is pivotal in passing information down the network. The update gate $z_t$ balances the contribution of the new candidate content $\tilde{h}_t$ against the past state $h_{t-1}$: when $z_t$ approaches 0, the unit retains mostly past information, while values near 1 favor the new candidate content, as calculated in Equation (10):
$h_t = (1 - z_t) \times h_{t-1} + z_t \times \tilde{h}_t$
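Analogously, one GRU step following Equations (7)–(10) requires only two gates and no separate cell state; in this sketch, biases are omitted to mirror the equations as written, and all shapes are illustrative:

```python
# Minimal NumPy sketch of one GRU time step (Equations (7)-(10)).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Wr, W):
    """One GRU step operating directly on the hidden state."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ hx)                                      # Eq. (7): update gate
    r_t = sigmoid(Wr @ hx)                                      # Eq. (8): reset gate
    h_tilde = np.tanh(W @ np.concatenate([r_t * h_prev, x_t]))  # Eq. (9): candidate
    return (1.0 - z_t) * h_prev + z_t * h_tilde                 # Eq. (10): new hidden state

hidden, n_in = 64, 4
rng = np.random.default_rng(0)
Wz, Wr, W = (0.1 * rng.standard_normal((hidden, hidden + n_in)) for _ in range(3))
h_t = gru_step(rng.standard_normal(n_in), np.zeros(hidden), Wz, Wr, W)
```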

2.2.3. Forecasting Strategy: Multi-Step-Ahead Prediction

To rigorously assess model resilience and predictive accuracy, this study employs a direct multi-step forecasting strategy. The model is configured to output a contiguous sequence of future time steps based on a historical input window of observations. Unlike recursive strategies, which are prone to cumulative error propagation by iteratively feeding single-step predictions back into the model, the direct approach learns temporal dependencies across the entire forecast horizon simultaneously. This mechanism enables joint optimization of spatiotemporal correlations, thereby significantly reducing forecasting drift. Crucially, this methodology mirrors the operational demands of autonomous UAVs, in which the ability to reliably estimate future trajectories without continuous corrective feedback is vital for maintaining navigation stability and mission continuity under intermittent data availability.
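The data-shaping difference between the two strategies can be sketched as follows; the `model.predict` call in the recursive comments is hypothetical and shown only for contrast:

```python
# Direct multi-step sample construction: each input window maps to the
# whole forecast horizon at once, so no prediction is fed back as input.
import numpy as np

LOOK_BACK, HORIZON = 6, 5

def make_direct_samples(series):
    """Map each 6-step history to all 5 future steps jointly."""
    X, Y = [], []
    for i in range(len(series) - LOOK_BACK - HORIZON + 1):
        X.append(series[i : i + LOOK_BACK])
        Y.append(series[i + LOOK_BACK : i + LOOK_BACK + HORIZON])  # joint target
    return np.asarray(X), np.asarray(Y)

X, Y = make_direct_samples(np.random.rand(100, 4))
print(X.shape, Y.shape)  # (90, 6, 4) (90, 5, 4)

# A recursive strategy would instead feed each single-step output back in:
#   window = last_history
#   for _ in range(HORIZON):
#       step = model.predict(window[None])[0]   # hypothetical model call
#       window = np.vstack([window[1:], step])  # errors re-enter the input
# The direct strategy avoids this feedback loop entirely.
```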

2.3. Model Configuration and Training

To ensure experimental reproducibility and comparative rigor, the GRU, LSTM, and RNN models were implemented using a standardized architectural framework. For optimal model performance, Bayesian optimization was performed with the Optuna framework over 20 trials aimed at minimizing the MSE on the validation set, automatically tuning key hyperparameters including the learning rate, batch size, and dropout rates. The input layer processes a look-back window of six time steps, and the model is trained to predict a forecast horizon of five steps ahead. The core architecture comprises a single recurrent layer with 64 hidden units employing the hyperbolic tangent (tanh) activation function, followed by a fully connected Dense output layer that projects the hidden states onto the four-dimensional coordinate space (X, Y, Z, and Time). Model training was conducted over 200 epochs with a batch size of 32, utilizing the Adam optimizer with an initial learning rate of 0.001. To prevent overfitting and ensure robustness across diverse flight phases (e.g., linear motion vs. rapid maneuvering), model fine-tuning was performed using a 5-fold cross-validation scheme guided by this Bayesian optimization. The approach enabled efficient exploration of the hyperparameter space, which also included network depth, number of hidden units, and look-back window size, to identify a configuration that balances predictive accuracy with the low computational complexity required for edge deployment. The resulting optimal configuration is detailed in Table 2.
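As a concrete illustration, the following Keras/Optuna sketch mirrors the reported configuration (single 64-unit GRU layer, tanh activation, six-step look-back, five-step horizon, Adam at 0.001, 20 Optuna trials); the search ranges, placeholder data, and reduced epoch count are assumptions for demonstration, not the authors' exact setup:

```python
# Hedged sketch of the reported GRU configuration with an Optuna search loop.
import numpy as np
import optuna
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder windows standing in for the real sequences from Section 2.1.
X_train = np.random.rand(512, 6, 4)
y_train = np.random.rand(512, 5, 4)  # five-step horizon, four features

def build_model(units=64, lr=1e-3, dropout=0.0, look_back=6, horizon=5, n_features=4):
    """Single GRU layer + Dense head projecting onto the (X, Y, Z, Time) horizon."""
    model = keras.Sequential([
        layers.GRU(units, activation="tanh", input_shape=(look_back, n_features)),
        layers.Dropout(dropout),
        layers.Dense(horizon * n_features),       # direct multi-step output
        layers.Reshape((horizon, n_features)),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

def objective(trial):
    # Search ranges are illustrative assumptions, not the paper's exact space.
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    batch = trial.suggest_categorical("batch_size", [16, 32, 64])
    dropout = trial.suggest_float("dropout", 0.0, 0.3)
    model = build_model(lr=lr, dropout=dropout)
    hist = model.fit(X_train, y_train, validation_split=0.2,
                     epochs=20, batch_size=batch, verbose=0)  # 200 in the paper
    return min(hist.history["val_loss"])  # validation MSE to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)  # 20 trials, as reported
print(study.best_params)
```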
To validate the model’s suitability for real-time operations, we analyzed its computational complexity and inference latency. The proposed GRU architecture is designed to be lightweight, consisting of a single recurrent layer with 64 hidden units. The model comprises 42,830 parameters, resulting in a model size (weights only) of approximately 55.8 KB and a total memory footprint (including optimizer states) of roughly 167.3 KB. Inference latency was evaluated on a high-performance workstation equipped with an Intel® Xeon® W5-3425 CPU @ 3.19 GHz, 192 GB RAM (Hewlett-Packard, Palo Alto, USA) using a batch size of one to emulate the sequential arrival of sensor data during flight. Under these conditions, the model achieved an average inference latency of 92.42 ms per prediction step. While this study used a workstation for benchmarking, the compact parameter size and low memory footprint indicate that the architecture is well-suited for deployment on resource-constrained embedded systems (e.g., NVIDIA Jetson or Raspberry Pi) commonly used in UAVs. The current throughput (~10.8 Hz) is sufficient for trajectory planning and navigation tasks, which typically operate at lower frequencies than attitude control loops.
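A simple way to reproduce such a latency measurement is sketched below; only the batch-size-one protocol follows the text, while the warm-up pass, run count, and timing loop are assumptions:

```python
# Illustrative latency benchmark: batch size 1 emulates sequential telemetry.
import time
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([layers.GRU(64, input_shape=(6, 4)), layers.Dense(4)])
window = np.random.rand(1, 6, 4)          # one "incoming" telemetry window
model.predict(window, verbose=0)          # warm-up (graph build)

runs = 200
start = time.perf_counter()
for _ in range(runs):
    model.predict(window, verbose=0)      # batch size 1, sequential arrival
latency_ms = (time.perf_counter() - start) / runs * 1000.0
print(f"mean latency: {latency_ms:.2f} ms (~{1000.0 / latency_ms:.1f} Hz)")
```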

2.4. Performance Evaluation Metrics

Effective model evaluation is essential in predictive analytics to determine data fit, compare predictive accuracy, and assess consistency across models. This study employs several performance metrics, including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Coefficient of Variation of RMSE (CV-RMSE), Relative Standard Error (RSE), Overfitting Factor (OF), and Prediction Interval (PI). Additionally, the Coefficient of Determination (R2) and Standard Error Ratio (SER), defined as the ratio of the standard error of the estimate to the standard deviation of the observed values (Se/Sy), are calculated for each model to assess explanatory power and predictive reliability. These evaluation metrics are computed using Equations (11)–(17), respectively.
$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \left| \hat{y}_i - y_i \right|$
$\mathrm{MAPE} = \frac{100\%}{N}\sum_{i=1}^{N} \left| \frac{\hat{y}_i - y_i}{y_i} \right|$
$\mathrm{RMSE} = \left[ \frac{1}{N}\sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2 \right]^{1/2}$
$\mathrm{CV\text{-}RMSE} = \frac{\mathrm{RMSE}}{\bar{y}} \times 100\%$
$R^2 = 1 - \frac{\sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2}$
$\mathrm{SER} = \frac{\mathrm{RMSE}}{\sigma_y}$
$\mathrm{RSE} = \frac{\sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2}{\sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2}$
where $y_i$ is the actual observed value; $\hat{y}_i$ is the predicted value; $\bar{y}$ is the mean of the observed values; $N$ is the total number of observations; and $\sigma_y$ is the standard deviation of the observed values.
Finally, to quantify the uncertainty of the model’s forecasts, the 95% Prediction Interval (PI) was calculated using a residual-based Gaussian approximation. This metric assumes that the prediction errors are normally distributed and provides a bound within which future observations are expected to fall with 95% probability. It is computed using Equation (18).
$\mathrm{PI}_{95} = \hat{y} \pm 1.96\sigma$
where $\sigma$ is the standard deviation of the residuals computed over the evaluation set. This provides a practical bound within which future observations are expected to fall with approximately 95% probability under the Gaussian-error assumption.
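For reference, Equations (11)–(18) translate directly into a few lines of NumPy, as in this sketch (MAPE is undefined where the observed value is zero, e.g., the first Time sample, so such entries are masked):

```python
# Sketch of the evaluation metrics in Equations (11)-(18).
import numpy as np

def evaluate(y_true, y_pred):
    """Compute all reported metrics for flattened prediction arrays."""
    resid = y_pred - y_true
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    nz = y_true != 0                                      # mask zeros for MAPE
    return {
        "MAE": np.mean(np.abs(resid)),                    # Eq. (11)
        "MAPE": 100.0 * np.mean(np.abs(resid[nz] / y_true[nz])),  # Eq. (12)
        "RMSE": np.sqrt(np.mean(resid ** 2)),             # Eq. (13)
        "CV-RMSE": np.sqrt(np.mean(resid ** 2)) / y_true.mean() * 100.0,  # Eq. (14)
        "R2": 1.0 - np.sum(resid ** 2) / ss_tot,          # Eq. (15)
        "SER": np.sqrt(np.mean(resid ** 2)) / y_true.std(),  # Eq. (16)
        "RSE": np.sum(resid ** 2) / ss_tot,               # Eq. (17)
        "PI95": 1.96 * resid.std(),                       # Eq. (18), half-width
    }

print(evaluate(np.random.rand(100), np.random.rand(100)))
```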

3. Results

The predictive performance of the RNN, LSTM, and Gated Recurrent Unit (GRU) models was systematically evaluated in two distinct phases. Initially, single-step-ahead prediction was performed to establish a baseline for instantaneous forecasting accuracy. Subsequently, a more rigorous multi-step-ahead forecast was conducted to critically assess the models’ robustness and susceptibility to cumulative error propagation over a defined temporal horizon.

3.1. Single-Step-Ahead Forecasting Performance

The performance of the RNN, LSTM, and GRU architectures for single-step UAV trajectory forecasting is detailed in Table 3 and Table 4 and Figure 4. These results collectively demonstrate the GRU model’s notable stability and generalization capability. Table 3 indicates that during the training phase, all three RNN architectures, LSTM, GRU, and RNN, effectively captured the temporal dependencies present in historical UAV flight data. Among them, the GRU model achieved the most favorable performance, with the lowest MAE (0.0019), RMSE (0.0040), and MAPE (1.5802), and the highest R2 (0.9998). These metrics suggest an exceptionally accurate fit to the training data. Additionally, GRU exhibited the lowest CV-RMSE (0.9349%) and the narrowest PI (±0.007), indicating minimal residual variability and high predictive confidence. The LSTM model performed closely, with slightly higher errors (RMSE = 0.0048, MAPE = 2.0981), yet it maintained an excellent R2 of 0.9997. Meanwhile, the standard RNN achieved an R2 of 0.9993 but showed relatively weaker performance in other metrics, including an RMSE of 0.0070 and the widest PI (±0.013), suggesting comparatively less stable predictions. All three models maintained low SER and RSE, confirming strong fitting precision.
The real test of model robustness lies in generalization to unseen data, as shown in Table 4. Here, the GRU model again outperformed its counterparts, retaining the lowest MAE (0.0036), MSE (0.00003), and RMSE (0.0054), while maintaining a high R2 value of 0.9923, slightly above LSTM’s 0.9900 and RNN’s 0.9885. These outcomes validate GRU’s ability to sustain high predictive accuracy under real-world testing conditions. It also achieved the lowest CV-RMSE (0.7484%) and the tightest PI (±0.0079), reinforcing its consistency and reliability. Although LSTM demonstrated slightly higher MAE (0.0051) and MAPE (5.059), its performance remained competitive, albeit with a broader PI (±0.010). In contrast, the RNN model exhibited weaker generalization, with higher error values (MAE = 0.0050, RMSE = 0.0067) and a wider PI (±0.0087), underscoring its relatively limited predictive stability compared to GRU.
Figure 5 presents the residual distribution analysis comparing the LSTM, GRU, and RNN models across the spatial (X, Y, Z) and temporal dimensions. The GRU model consistently demonstrates the most favorable statistical characteristics across the spatial axes (X, Y, Z), with residuals sharply concentrated around zero, signifying minimal bias and variance: in the X-direction, GRU exhibits superior precision with a high peak near 0.005, while LSTM is broader and multi-modal and RNN is the widest and most asymmetric; along the Y-axis, GRU again maintains the highest and most centralized peak near −0.002, whereas LSTM shows the broadest, left-skewed dispersion, suggesting systematic underestimation; and in the Z-direction, GRU yields the narrowest and most centered distribution, confirming consistently high altitude accuracy. However, in the Time dimension, the LSTM model exhibits the sharpest peak, closest to zero, indicating the best temporal precision. In contrast, GRU and RNN show broader, positively shifted curves, suggesting greater error dispersion and a slight underestimation bias in time prediction. Collectively, despite LSTM’s advantage in the temporal dimension, the GRU model’s consistently higher precision across all three spatial dimensions (X, Y, Z) positions it as the most suitable candidate for reliable real-time UAV trajectory forecasting.
The analysis of residuals vs. actual values (Figure 6) critically evaluates the LSTM, GRU, and RNN models across the X, Y, Z, and Time dimensions. In the X-direction, all models exhibit positive heteroscedasticity, but GRU and LSTM maintain tighter grouping near the zero line than the RNN, which shows the greatest scatter and suggests a slight positive bias (underestimation) at higher values. For the Y-direction, a significant contrast is evident: GRU demonstrates enhanced predictive accuracy with residuals tightly clustered and centered on the zero line; in contrast, the LSTM displays a significant negative bias (median well below zero) with high scatter, indicating systematic overestimation, and the RNN shows a moderate negative bias. In the Z-direction, all models generally maintain residuals close to zero, but the GRU model exhibits the tightest concentration, with the least scatter and bias, thereby affirming its precision relative to the broader fluctuations of LSTM and RNN. In the Time dimension, the residuals display a distinct sawtooth pattern, indicating difficulty in predicting segment transitions, but the GRU and RNN models show slightly tighter clustering and lower amplitude fluctuations around zero compared to the LSTM, which has greater negative scatter. Overall, owing to its consistently tight, centered residual distributions in the X, Y, Z, and Time components, the GRU model demonstrates the most robust and accurate performance across the full spatial-temporal trajectory.
The violin plots in Figure 7 offer a concise comparative assessment of residual error distributions for the LSTM, GRU, and RNN models across UAV trajectory dimensions X, Y, Z, and Time. The GRU model consistently demonstrates the most compact, symmetric residual density centered closest to zero across the spatial axes (X, Y, Z), signifying strong spatial generalization and minimal bias. Specifically, in the X-direction, GRU is the narrowest, while LSTM is the broadest; for the Y-direction, the LSTM shows the widest and most pronounced left-skew (median around −0.007), indicating systematic underestimation, while GRU is the most centralized and narrowest; and in the Z-direction, GRU yields the narrowest and most centered distribution. Conversely, in the Time dimension, the LSTM exhibits superior temporal precision with the narrowest distribution and median closest to zero, while GRU and RNN are broader and positively shifted, suggesting a tendency toward underestimation in time. Collectively, despite LSTM’s advantage in time prediction, the GRU model’s consistent superior precision across all three spatial components (X, Y, Z) positions it as the most reliable architecture for accurate overall UAV trajectory forecasting.
The residual histogram analysis with Gaussian fits in Figure 8 provides a detailed assessment of the LSTM, GRU, and RNN models’ predictive accuracy across the X, Y, Z, and Time dimensions by comparing their residual means (μ) and standard deviations (σ) to an ideal normal distribution. The GRU model consistently demonstrates enhanced predictive accuracy with the lowest standard deviation (σ) in the spatial dimensions (X, Y, Z), indicating the least variance and highest predictive precision. In the X-direction, GRU (σ = 0.0024) is the tightest, while LSTM (σ = 0.0049) is the broadest, and all models show a slight positive mean error (under-prediction). For the Y-direction, the GRU (σ = 0.0036 and μ = −0.0012) is the most tightly centered and symmetric, whereas the LSTM (σ = 0.0052 and μ = −0.0085) has a significantly larger negative mean, confirming its systematic underestimation. In the Z-direction, all models exhibit low standard deviations, but the GRU (σ = 0.0064, μ = −0.0002) has the lowest variance and mean error. Crucially, in the Time dimension, the LSTM model exhibits the lowest standard deviation (σ = 0.0012) and lowest mean error (μ = 0.0017), indicating superior temporal precision compared to GRU (σ = 0.0020 and μ = 0.0058) and RNN (σ = 0.0021 and μ = 0.0025). Collectively, while the LSTM excels in time prediction, the GRU model’s tightly clustered, low-variance residuals across the X, Y, and Z spatial axes establish it as the most robust and reliable architecture for precise, real-time UAV path prediction in the core spatial environment.
Figure 9 shows the MSE loss curves for the GRU model over 100 epochs, indicating rapid convergence within the first 10 epochs and stable, near-zero loss thereafter. The close alignment between training and validation losses reflects strong generalization with no signs of overfitting. This smooth, low-variance convergence confirms the GRU’s ability to learn robust spatiotemporal patterns and make precise UAV trajectory predictions, highlighting its reliability and effectiveness for real-time path forecasting tasks.
Based on the results, the GRU model consistently outperforms both the LSTM and the RNN in modeling UAV flight dynamics, demonstrating superior accuracy and generalization. Quantitative metrics confirm that GRU achieves the lowest error rates and the highest R2 across all phases. Qualitative analyses (KDEs, scatter plots, violin plots, and histograms) indicate that GRU yields the most tightly clustered, symmetric, and approximately Gaussian residuals across all spatial dimensions. In contrast, LSTM exhibits moderate variability, and RNN shows the highest error dispersion and instability. Overall, GRU is the most robust and reliable model for real-time, high-precision UAV trajectory prediction in dynamic environments.
Figure 10, Figure 11, Figure 12 and Figure 13 illustrate the GRU model’s prediction accuracy for UAV trajectory components X, Y, Z, and Time on the test dataset, demonstrating its robust spatiotemporal modeling capabilities. Figure 10 shows that the GRU effectively captures directional changes along the X-axis, with near-perfect alignment during both linear and nonlinear transitions, indicating strong generalization in horizontal positioning. Figure 11 shows similarly precise predictions on the Y-axis, with the model accurately tracking the ground truth through smooth ascent and descent, confirming its reliability in modeling lateral displacement. In Figure 12, the GRU maintains close phase alignment with the actual Z-axis data despite the inherent nonlinear and oscillatory nature of altitude variation, demonstrating its ability to capture vertical dynamics. Figure 13 shows that the GRU model accurately tracks the actual Time values, with minimal divergence over the test period, confirming strong temporal fidelity. Across all components, the GRU model exhibits minimal deviation from the actual values, underscoring its suitability for accurate, real-time UAV trajectory forecasting in complex spatial environments.
Figure 14 presents a 3D visualization of the GRU-predicted path versus the actual UAV trajectory across the X, Y, and Z coordinates, definitively highlighting the model’s robust spatiotemporal fidelity. The predicted path closely aligns with the ground truth in both horizontal and vertical dimensions, demonstrating accurate modeling of complex maneuvers and continuous altitude variations. The GRU effectively maintains trajectory continuity and phase alignment, preserving the sequence and magnitude of changes with minimal spatial deviation. This high spatial fidelity confirms the GRU’s superior robustness in learning nonlinear UAV motion, establishing it as the most capable architecture for real-time, high-precision trajectory forecasting.

3.2. Multi-Step-Ahead Forecasting Performance

Table 5 presents the predictive performance of the GRU model for multi-step-ahead UAV path forecasting. Given the 2 s data sampling interval, the five prediction steps correspond to look-ahead times of 2, 4, 6, 8, and 10 s, respectively. The results demonstrate that the GRU achieves high accuracy and stability, with an R2 of 0.998 across all five prediction steps, highlighting its strong explanatory power and excellent fit to the data. This high precision is further supported by minimal absolute errors, with MAE peaking at 0.009 and RMSE at 0.016. Moreover, the model exhibits robustness to forecast horizon degradation, as both the key error metrics and the PI increase only slightly, ensuring reliable predictions up to the full 10 s horizon. While the MAPE values remain moderate (5–7%), the exceptionally low RSE, consistently below 0.25%, indicates that the prediction errors are negligible compared to the natural variability of the UAV path. Taken together, these results confirm that the GRU is a highly dependable architecture for both real-time and anticipatory UAV path planning.
Figure 15, Figure 16 and Figure 17 collectively demonstrate the GRU model’s robustness in multi-step-ahead UAV trajectory forecasting, where predictive accuracy is preserved despite the inherent challenges of error propagation. As shown in Figure 15, the model consistently achieves an R2 above 0.998 across all forecast horizons, with MAE and RMSE increasing slightly and peaking at step 3 (MAE ≈ 0.009; RMSE ≈ 0.016) before stabilizing, indicating controlled error growth. Figure 16 highlights that although MAPE fluctuates between ~5–7% due to sensitivity to actual value magnitudes, CV-RMSE remains consistently low (<2.2%), confirming that absolute errors are negligible relative to the trajectory scale. Figure 17 further underscores the model’s reliability, with the Overfitting Factor remaining below 0.02% and the Prediction Interval widening modestly from ±0.017 to ±0.024, reflecting a well-calibrated increase in uncertainty. Taken together, these findings affirm that the GRU not only delivers high precision and stable generalization but also provides bounded confidence intervals, making it particularly suitable for anticipatory UAV path planning and safe autonomous navigation.
Figure 18 illustrates the GRU model’s multi-step trajectory prediction across a 5-step horizon, confirming its exceptional capability for spatiotemporal modeling of the UAV’s flight path, with high fidelity maintained even at the furthest forecast step. The model excels at capturing the essential dynamics of the trajectory: for the horizontal plane, it exhibits strong trend stability by precisely tracking the linear, step-wise movement in X and demonstrating remarkable synchronization with the high-frequency periodic oscillations of Y. Furthermore, the model achieves near-perfect prediction of the monotonically increasing Time(S) component, validating the integrity of its learned sequential dependencies. The only noticeable degradation occurs in the highly volatile Z component, where accumulated uncertainty causes slightly larger prediction errors in turbulent segments towards T + 5; however, given the overall accuracy across the other three variables, the model provides a robust and highly dependable forecast of the UAV’s future state.

4. Conclusions

This study introduced and validated a GRU-based framework for real-time, multi-step-ahead UAV flight path prediction. The primary goal was to develop a computationally efficient and highly accurate model for real-time trajectory forecasting. Through systematic evaluation against LSTM and traditional RNN architectures, the GRU model demonstrated clear superiority.
Empirical results highlight the exceptional performance of the GRU network. In single-step predictions, it achieved the lowest error metrics, with an MAE of 0.0036, RMSE of 0.0054, R2 of 0.9923, MAPE of 1.7494, CV-RMSE of 0.7484, SER of 0.0170, RSE of 0.028, OF of 0.003, and PI of ±0.0079 on unseen test data. More importantly, in multi-step-ahead forecasting designed to reflect real-world challenges, the GRU model showed remarkable stability and resilience to error accumulation. It maintained near-perfect accuracy (R2 = 0.998) with minimal increases in error metrics across a five-step forecast horizon.
The findings carry significant practical implications for autonomous UAV operations. The GRU framework provides a reliable and efficient solution for real-time UAV trajectory prediction, enhancing autonomous navigation, decision-making in dynamic environments, and high-fidelity data collection for tasks such as photogrammetry. Employing multi-step-ahead forecasting meets the stringent stability requirements of photogrammetric UAV operations. By maintaining continuous trajectory estimates during signal interruptions, this stability preserves consistent image overlap and geometric integrity, which are critical for artifact-free orthomosaic generation and reliable 3D reconstruction. As a result, the proposed framework enhances the robustness of autonomous data acquisition and supports high-quality spatial analysis in operationally challenging environments.
This study confirms the GRU model’s strong predictive capability using a comprehensive dataset. Future research should assess its generalizability across diverse environments, weather conditions, and UAV platforms, and benchmark it against Transformer-based architectures with self-attention. Emphasis should be placed on the trade-offs between predictive accuracy and computational latency under real-time onboard UAV constraints, as well as performance in linear-motion scenarios where classical models remain competitive. Furthermore, while this study validates the computational efficiency of the GRU architecture via workstation benchmarking, future work will focus on deploying and testing the model directly on onboard embedded hardware to quantify energy consumption and latency in operational flight scenarios. Overall, this work establishes the GRU network as a powerful and practical tool for advancing the autonomy and reliability of UAV systems.

Author Contributions

Y.B.K.: Conceptualization, Methodology, Software, Validation, Formal analysis, Visualization, Writing—original draft, Writing—review and editing. M.-D.Y.: Conceptualization, Methodology, Resources, Software, Data curation, Validation, Formal analysis, Investigation, Visualization, Project administration, Funding acquisition, Writing—original draft, Writing—review and editing. H.D.S.: Investigation, Data curation, Visualization, and Writing—review and editing. H.-H.T.: Data curation, Validation, Visualization, Resources, and Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly funded by the National Science and Technology Council, Taiwan, under Grant Numbers NSTC 113-2634-F-005-002 and NSTC 113-2121-M-005-002.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Figure 1. Recorded UAV flight trajectories during data collection for two mission flights in Wufeng District, Taichung.
Figure 2. Aerial view of the proposed site.
Figure 3. Sample aerial image captured by the UAV, illustrating rice lodging conditions in the agricultural study area.
Figure 4. Predictive performance of RNN, LSTM, and GRU models on training and testing datasets: (a) Mean Absolute Error (MAE) and Mean Squared Error (MSE); (b) Root Mean Squared Error (RMSE) and Coefficient of Determination (R2); (c) Mean Absolute Percentage Error (MAPE) and Coefficient of Variation of the Root Mean Squared Error (CV-RMSE); (d) Standard Error Ratio (SER) and Relative Standard Error (RSE); (e) Objective Function (OF) and Prediction Interval (PI).
Figure 5. Residual distribution comparison across spatial axes for UAV path prediction using LSTM, GRU, and RNN: (a) Residual distribution for X (Easting); (b) Residual distribution for Y (Northing); (c) Residual distribution for Z (Altitude); and (d) Residual distribution for Time.
Figure 6. Comparative analysis of residuals against actual values in easting, northing, and altitude dimensions: (a) Residuals vs. Actual Values for X (Easting); (b) Residuals vs. Actual Values for Y (Northing); (c) Residuals vs. Actual Values for Z (Altitude); and (d) Residuals vs. Actual Values for Time.
Figure 7. Comparative Residual Density Analysis for X, Y, and Z Coordinates in UAV Path Estimation: (a) Violin plot of residual density for X (Easting); (b) Violin plot of residual density for Y (Northing); (c) Violin plot of residual density for Z (Altitude); and (d) Violin plot of residual density for Time.
Figure 8. Comparative Residual Distributions with Gaussian Fits for UAV Path Prediction in X, Y, and Z Axes: (a) Residual distributions for X (Easting); (b) Residual distributions for Y (Northing); (c) Residual distributions for Z (Altitude); and (d) Residual distributions for Time.
Figure 9. Training and Validation Loss Curves of the GRU Model for UAV Trajectory Prediction.
Figure 10. GRU Model’s Spatiotemporal Fidelity in Predicting X (Easting) Trajectory.
Figure 11. GRU Model’s Robustness in Predicting Y (Northing) Trajectory Dynamics.
Figure 12. GRU Model’s Accuracy in Tracking Nonlinear Z (Altitude) Variations.
Figure 13. GRU Model’s Temporal Precision in Forecasting Flight Time.
Figure 14. 3D Comparison of GRU-Predicted and true UAV Trajectories across spatial coordinates.
Figure 15. Predictive performance of GRU models on multi-step-ahead forecasting.
Figure 16. Multi-step-ahead forecasting percentage error of the GRU Model.
Figure 17. Overfitting Factor and Prediction Interval for GRU model multi-step-ahead forecasting.
Figure 18. Comparison of Actual and GRU-Predicted UAV Trajectory over the 5-Step Forecast Horizon.
Table 1. Statistical summary of the UAV trajectory dataset.

Variable | Time (s) | X/Easting (m) | Y/Northing (m) | Z/Altitude (m)
mean | 4260 | 215,684.8 | 2,662,137 | 191.6
std | 2460.4 | 868.1 | 1404.6 | 12
min | 0 | 214,176.7 | 2,659,580 | 176.6
max | 8520 | 217,207.2 | 2,664,713 | 209.6
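The ranges in Table 1 also show the scale disparity between channels (seconds, metric eastings/northings, and altitude), which is why the coordinates are normalized before training. As an illustrative sketch only (min-max scaling is an assumption here; the exact scheme is the one described in the methods), per-channel min-max normalization maps each variable to [0, 1]:

```python
import numpy as np

# Per-channel ranges taken from Table 1:
# Time (s), X/Easting (m), Y/Northing (m), Z/Altitude (m).
CH_MIN = np.array([0.0, 214_176.7, 2_659_580.0, 176.6])
CH_MAX = np.array([8520.0, 217_207.2, 2_664_713.0, 209.6])

def to_unit_range(raw: np.ndarray) -> np.ndarray:
    """Min-max scale an (n, 4) trajectory array to [0, 1] per channel."""
    return (raw - CH_MIN) / (CH_MAX - CH_MIN)

def from_unit_range(scaled: np.ndarray) -> np.ndarray:
    """Invert the scaling to recover seconds and metric coordinates."""
    return scaled * (CH_MAX - CH_MIN) + CH_MIN
```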
Table 2. Hyperparameters and Training Configuration for the Proposed GRU Model.

Parameter | Search Range | Optimum Value
Number of Layers | 1–3 | 1
Hidden Units per Layer | 16–128 | 64
Dropout Rate | 0.0–0.3 (step = 0.1) | 0
Initial Learning Rate | 0.0001–0.1 | 0.001
Batch Size | 16–64 | 32
Sequence Length | 4–12 | 6
Optimizer | Fixed | Adam
Loss Function | Fixed | MSE
Epochs | – | 200 (with early stopping)
Random State | – | 42
Output Layer | – | Dense (4 units)
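The configuration in Table 2 maps onto a compact network. The sketch below is a minimal Keras rendering of those optima (single GRU layer, 64 units, no dropout, Adam at 0.001, MSE loss, batch size 32, sequence length 6); the early-stopping patience and validation split shown are illustrative assumptions, not values reported in the table.

```python
import tensorflow as tf

SEQ_LEN, N_FEATURES = 6, 4  # window length and (X, Y, Z, T) channels, per Table 2

def build_gru_model() -> tf.keras.Model:
    """Single-layer GRU with 64 hidden units and a 4-unit dense head."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
        tf.keras.layers.GRU(64),            # 1 layer, 64 units, dropout = 0
        tf.keras.layers.Dense(N_FEATURES),  # next (X, Y, Z, T) sample
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="mse")
    return model

# Training consistent with Table 2 (patience is an assumed value):
# model = build_gru_model()
# model.fit(X_train, y_train, batch_size=32, epochs=200,
#           validation_split=0.1,
#           callbacks=[tf.keras.callbacks.EarlyStopping(
#               patience=10, restore_best_weights=True)])
```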
Table 3. Predictive performance of the proposed time series algorithms on the training dataset for UAV path prediction.

Model | MAE (m) | MSE (m2) | RMSE (m) | R2 | MAPE (%) | CV-RMSE (%) | SER | RSE (%) | OF (%) | PI
LSTM | 0.0024 | 0.00002 | 0.0048 | 0.9997 | 2.0981 | 1.1172 | 0.0176 | 0.03 | 0.0023 | ±0.008
GRU | 0.0019 | 0.00001 | 0.0040 | 0.9998 | 1.5802 | 0.9349 | 0.0147 | 0.02 | 0.0016 | ±0.007
RNN | 0.0028 | 0.00005 | 0.0070 | 0.9993 | 2.1261 | 1.6336 | 0.0257 | 0.07 | 0.0049 | ±0.013
Table 4. Predictive performance of the proposed time series algorithms on the testing dataset for UAV path prediction.

Model | MAE (m) | MSE (m2) | RMSE (m) | R2 | MAPE (%) | CV-RMSE (%) | SER | RSE (%) | OF (%) | PI
LSTM | 0.0051 | 0.00005 | 0.0072 | 0.9900 | 5.0592 | 0.9948 | 0.0226 | 0.051 | 0.005 | ±0.010
GRU | 0.0036 | 0.00003 | 0.0054 | 0.9923 | 1.7494 | 0.7484 | 0.0170 | 0.028 | 0.003 | ±0.0079
RNN | 0.0050 | 0.00004 | 0.0067 | 0.9885 | 2.0903 | 0.9309 | 0.0211 | 0.045 | 0.005 | ±0.0087
Table 5. Multi-step-ahead forecasting performance of the proposed time series algorithms on the testing dataset for UAV path prediction.

Multi-Step Forecasting | MAE (m) | RMSE (m) | R2 | MAPE (%) | CV-RMSE (%) | SER | RSE (%) | OF (%) | PI (±95%)
Step 1 ahead | 0.001 | 0.013 | 0.998 | 5.429 | 1.781 | 0.041 | 0.164 | 0.016 | ±0.017
Step 2 ahead | 0.008 | 0.014 | 0.998 | 7.425 | 1.934 | 0.044 | 0.194 | 0.019 | ±0.023
Step 3 ahead | 0.009 | 0.016 | 0.997 | 6.722 | 2.169 | 0.049 | 0.243 | 0.024 | ±0.024
Step 4 ahead | 0.008 | 0.014 | 0.998 | 5.656 | 1.940 | 0.044 | 0.192 | 0.019 | ±0.023
Step 5 ahead | 0.007 | 0.014 | 0.998 | 7.062 | 1.899 | 0.043 | 0.186 | 0.019 | ±0.022
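The step-wise degradation profile in Table 5 is what iterated (recursive) forecasting probes: each prediction is appended to the input window for the next step, so errors can compound across the horizon. A minimal sketch of that scheme is given below, assuming a Keras-style model.predict interface and normalized (SEQ_LEN, 4) input windows; whether the published results used a recursive or direct multi-step strategy is detailed in the methods.

```python
import numpy as np

def recursive_forecast(model, window: np.ndarray, horizon: int = 5) -> np.ndarray:
    """Iterated multi-step forecast over `horizon` steps.

    window: (SEQ_LEN, 4) array of the most recent normalized samples.
    Returns a (horizon, 4) array of predicted future samples.
    """
    history = window.copy()
    steps = []
    for _ in range(horizon):
        # Predict the next (X, Y, Z, T) sample from the current window.
        nxt = model.predict(history[np.newaxis, ...], verbose=0)[0]
        steps.append(nxt)
        # Slide the window forward, feeding the prediction back in;
        # this is where multi-step error can accumulate.
        history = np.vstack([history[1:], nxt])
    return np.asarray(steps)
```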
