Article

LSTM-H: A Hybrid Deep Learning Model for Accurate Livestock Movement Prediction in UAV-Based Monitoring Systems

by Ayub Bokani 1,*, Elaheh Yadegaridehkordi 1 and Salil S. Kanhere 2
1 School of Engineering and Technology, Central Queensland University (CQU), Sydney, NSW 2000, Australia
2 School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, NSW 2052, Australia
* Author to whom correspondence should be addressed.
Drones 2025, 9(5), 346; https://doi.org/10.3390/drones9050346
Submission received: 31 March 2025 / Revised: 1 May 2025 / Accepted: 1 May 2025 / Published: 3 May 2025
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture—2nd Edition)

Abstract:
Accurately predicting livestock movement is a cornerstone of precision agriculture, enabling smarter resource management, improved animal welfare, and enhanced productivity. However, the unpredictable and dynamic nature of livestock behavior poses significant challenges for traditional mobility prediction models. This study introduces LSTM-H, a hybrid deep learning model that combines the sequential learning power of Long Short-Term Memory (LSTM) networks with the real-time correction capabilities of Kalman Filters (KFs) to enhance livestock movement prediction within UAV-based monitoring frameworks. The results demonstrate that LSTM-H achieves a mean error of just 11.51 m for the first step and 40.68 m over a 30-step prediction horizon, outperforming state-of-the-art models by 4.3–14.8 times. Furthermore, LSTM-H exhibits robustness across noisy and dynamic conditions, with a 90% probability of errors below 13 m, as shown through cumulative error analysis. This enhanced accuracy enables UAVs to optimize flight trajectories, reducing energy consumption and improving monitoring efficiency in real-world agricultural settings. By bridging deep learning and adaptive filtering, LSTM-H not only enhances prediction accuracy but also paves the way for scalable, real-time livestock and UAV monitoring systems with transformative potential for precision agriculture.

1. Introduction

The agricultural sector, particularly livestock farming, is undergoing a significant transformation due to the integration of advanced technologies. The ability to monitor and analyze animal movement patterns has become increasingly important, enabling improved livestock management, disease prevention, and optimized resource allocation [1]. As the global demand for meat and animal products is projected to rise by 70 percent over the next 50 years, maximizing efficiency in livestock farming through technological innovations is essential [2]. The advent of unmanned aerial vehicles (UAVs) has sparked innovation in precision agriculture, offering scalable and automated solutions for real-time livestock monitoring [3].
UAVs are particularly well suited for livestock monitoring due to their flexibility and ability to operate in remote areas. They can function as airborne base stations, collecting data from Internet of Things (IoT) sensor nodes in regions beyond the reach of conventional communication networks. Livestock health monitoring is critical for ensuring animal welfare, improving productivity, and mitigating disease outbreaks. Traditional methods of monitoring large herds over vast areas are often labor-intensive, time-consuming, and prone to human error [4]. UAVs provide a cost-effective and efficient alternative for continuous observation and health assessment [5].
Despite these advantages, deploying UAVs for livestock monitoring presents several challenges. One of the primary obstacles is accurately tracking livestock movements when their exact locations are unknown. UAVs must adapt their flight trajectories dynamically to locate and monitor animals efficiently. This requires advanced predictive models that anticipate livestock mobility patterns to optimize UAV path planning [6]. Additionally, UAVs face technical challenges such as limited battery life, rapid shifts in field of view, and tracking errors caused by sudden animal movements. Although efficient trajectory planning [7] and multi-UAV approaches [8] can address challenges such as limited battery life and scalability, the effectiveness of UAVs in livestock monitoring still relies heavily on predictive models that minimize trajectory deviations while ensuring efficient and adaptive coverage [9].
Several mobility prediction models [10,11] have been developed to address these challenges, including Kalman Filters (KFs) [12], Long Short-Term Memory (LSTM) networks [13], Spatial-Temporal Attentive LSTM (STA-LSTM) [14], and Extended Kalman Filter with Hidden Markov Model (EKF-HMM) [15]. KF-based methodologies are commonly employed to extract valuable trajectory information from datasets affected by noise. However, due to the inherently uncertain and dynamic nature of UAV trajectories, traditional KF-based approaches often exhibit limited tracking accuracy [16]. The LSTM model focuses primarily on analyzing the temporal sequence of data without accounting for the interconnections among different attributes. Furthermore, employing multiple LSTM networks to process multivariate and associated datasets can introduce significant redundancy [17]. The EKF, though widely used, relies on linearization, which introduces limitations. It approximates the mean and covariance using only the first-order terms of the Taylor series, reducing accuracy in highly nonlinear scenarios [18]. Some highly dynamic maneuvers of an adversary UAV may be identified through prior observations or common patterns in specific scenarios. These maneuvers can be modeled using an HMM, a data-driven technique that aids in classification and enhances system performance by providing richer state information [10]. In addition to the models discussed, other approaches have been studied to enhance prediction accuracy, including integrating hybrid models, refining algorithms, and incorporating additional data sources [19]. Despite these efforts, achieving high-precision mobility prediction remains critical for applications such as UAV-assisted livestock monitoring, where accurate tracking ensures efficient herd management and resource allocation.
To bridge this gap, this study proposes LSTM-H, a hybrid deep learning model that enhances livestock movement prediction accuracy. LSTM-H extends the standard LSTM by integrating the sequential learning power of LSTM networks with the real-time correction capabilities of KFs. This hybrid approach leverages historical movement data to predict future livestock positions while dynamically adjusting UAV flight trajectories. By improving prediction accuracy, LSTM-H enables UAVs to optimize their movements in real time, conserving energy and increasing monitoring efficiency.
The proposed model was designed to enhance livestock health monitoring by equipping smart UAVs with adaptive mobility prediction capabilities. By combining time-series analysis, real-time data collection, image processing, and deep learning, LSTM-H facilitates a robust and responsive monitoring system. UAVs utilizing this model can efficiently integrate predictive analytics with the real-time health data gathered from IoT-equipped livestock. Beyond improving livestock monitoring, this research has broader implications for wildlife conservation, environmental surveillance, and disaster response, where real-time adaptive aerial tracking is crucial.
The remainder of this paper is structured as follows: Section 2 reviews the existing studies on livestock mobility prediction. Section 3 discusses commonly used livestock movement prediction methods. The proposed hybrid prediction model is detailed in Section 4. Section 5 presents the experimental setup and results. Finally, Section 6 concludes this study.

2. Related Works

This section reviews the existing research on mobility prediction for livestock monitoring, focusing on key methodologies as well as their strengths and limitations. The principles of four dominant mobility prediction models, Kalman Filter (KF), Long Short-Term Memory (LSTM), Spatial–Temporal Attentive LSTM (STA-LSTM), and Extended Kalman Filter with Hidden Markov Model (EKF-HMM), are discussed in detail in Section 3.
Effective UAV trajectory planning in livestock health monitoring requires the integration of mobility prediction models to enhance operational efficiency. Accurate predictions of livestock movements and health statuses enable UAVs to dynamically adjust flight paths, prioritizing animals in critical health conditions. This synergy between trajectory planning and mobility prediction ensures optimal resource allocation based on real-time assessments. However, challenges remain in addressing fast-moving UAVs and the dynamic nature of livestock, making precise and timely routing decisions difficult [19].
Several studies have explored predictive models to address the patterns of and challenges with livestock mobility. For instance, Nicolas et al. [20] developed gravity models to predict livestock movement patterns considering factors such as supply, demand, and cultural events. Their models achieved moderate to good accuracy in predicting movement links between locations, highlighting the influence of human and sheep populations as well as the distance between locations on movement probabilities. In another study, Zhao and Jurdak [21] analyzed the high-frequency movement data of grazing cattle using a two-state ‘stop-and-move’ mobility model. They discovered that cattle movement patterns exhibit hierarchical structures and scale-invariant waiting times, suggesting complex underlying dynamics in grazing behavior. Suseendran and Balaganesh [22] proposed a cattle movement monitoring and location prediction system using the Markov decision process (LPS-MDP). The experimental results from the data gathered through cows’ collars confirmed that LPS-MDP minimizes the prediction cost and delay and increases the prediction accuracy.
A study by Bajardi et al. [23] examined infectious disease outbreaks in livestock by tracking cattle movement in Italy. Using spatial disease simulations, they identified spreading pathways consistent across various initial conditions, enabling the clustering of outbreak origins and reducing variability in epidemic outcomes. This procedure offers a versatile framework for risk assessment and the design of optimal surveillance systems for specific diseases. In another study, Guo et al. [24] developed a model of animal movement using a Hidden Markov Model (HMM) for state transitions (e.g., relocating, foraging) and a long-term prediction algorithm for movement between “stay” areas. The model, based on real cow movement data, accurately replicated behavior and can be adapted to other species. Nielsen et al. [25] developed algorithms to detect walking and standing in dairy cows using movement sensors on their hind legs, measuring acceleration in three dimensions. Video-recorded behavior was analyzed to validate the algorithms. The best results, with a 10 percent misclassification rate, were achieved using a step count moving average over 3 s combined with a 5 s minimum walking rule. The findings demonstrate that walking and standing durations can be estimated with reasonable accuracy, with algorithm choice depending on specific needs.
Although previous studies have advanced livestock movement prediction using different techniques and frameworks, they often fall short in adapting to rapidly changing livestock movements and integrating real-time UAV observations with historical data. Furthermore, most models lack a unified framework that combines prediction with trajectory planning, limiting their applicability in dynamic agricultural settings.
To address these gaps, considering the successful application of LSTM and KF in the UAV mobility prediction domain [10,19,26], this paper introduces LSTM-H, a hybrid model that combines LSTM networks with the real-time correction capabilities of KF. LSTM-H leverages historical livestock movement data and synthetic geo-location data from UAV aerial imagery, simulating real-world observations with controlled noise. This integration ensures accurate and adaptive predictions of livestock movements, significantly improving UAV trajectory planning and monitoring efficiency. By combining deep learning with real-time corrections, the proposed LSTM-H model overcomes the limitations of standalone methods, offering a robust solution for livestock monitoring in dynamic and noisy environments.

3. Mobility Prediction Models

The ability to accurately predict livestock movements is a cornerstone of precision agriculture, enabling enhanced monitoring, resource allocation, and health management [27]. Livestock exhibit complex and dynamic behaviors that are influenced by various environmental and physiological factors. These inherent complexities, combined with the challenges of collecting real-world position data, make mobility prediction a non-trivial task.
To address these challenges, several state-of-the-art prediction models have been developed, each leveraging unique methodologies to forecast livestock movements. These models range from statistical filtering techniques to advanced deep learning frameworks. In this study, we evaluated four of the most prominent prediction models in this domain:
  • Real-time Kalman Filter (KF);
  • Long Short-Term Memory (LSTM);
  • Spatial–Temporal Attentive LSTM (STA-LSTM);
  • Extended Kalman Filter with Hidden Markov Model (EKF-HMM).
In addition, this paper introduces the proposed LSTM-Hybrid (LSTM-H) model, which combines the predictive capabilities of LSTM with the real-time adaptability of the KF. This hybrid approach is discussed in detail in Section 4.
In this section, we discuss the core principles, methodologies, and suitability of the aforementioned state-of-the-art models for livestock movement prediction.

3.1. Real-Time Kalman Filter (KF)

The Kalman Filter (KF), developed by Kalman [12], is a statistical filtering method that estimates the state of a dynamic system by combining noisy observations with a mathematical model of the system’s behavior. In the context of livestock movement prediction, the KF operates under a constant-velocity motion model, which assumes that livestock movement can be approximated as a linear process over short time intervals.
The KF operates in two main steps:
  • Prediction: using the previous state estimate, the filter predicts the next state based on the system’s motion model:
    $\hat{x}_t = A \hat{x}_{t-1} + B u_t$,
    where $\hat{x}_t$ is the predicted state at time $t$, $A$ is the state transition matrix, $B$ is the control matrix, and $u_t$ is the control input.
  • Correction: the predicted state is corrected using the observed state $z_t$, which accounts for measurement noise:
    $\hat{x}_t = \hat{x}_t + K_t (z_t - H \hat{x}_t)$,
    where $K_t$ is the Kalman Gain, and $H$ is the observation matrix.
The Kalman Gain, $K_t$, determines the weight given to the observation versus the prediction and is computed as
$K_t = P_t H^{\top} (H P_t H^{\top} + R)^{-1}$,
where $P_t$ is the error covariance matrix, and $R$ is the measurement noise covariance.
By iteratively applying these steps, the KF provides real-time updates of livestock positions, making it suitable for systems with noisy positional data, such as GPS-based observations. However, its reliance on a simplistic constant velocity model limits its ability to capture the complex and non-linear movement patterns typical of livestock.
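To make these two steps concrete, the following minimal sketch implements a constant-velocity KF update for 2-D positions in Python; the state layout and the unit noise covariances are illustrative assumptions, not the values used elsewhere in this paper.

```python
import numpy as np

def kalman_step(x, P, z, dt=60.0, q=1.0, r=1.0):
    """One predict-correct cycle of a constant-velocity Kalman Filter.

    x : state [px, py, vx, vy]; P : 4x4 error covariance;
    z : noisy (x, y) observation; dt : time step in seconds;
    q, r : assumed process / measurement noise scales.
    """
    A = np.array([[1, 0, dt, 0],          # constant-velocity state transition
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],           # only the position is observed
                  [0, 1, 0, 0]], dtype=float)
    Q, R = q * np.eye(4), r * np.eye(2)   # process / measurement noise covariances

    # Prediction: propagate the state and covariance through the motion model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Correction: compute the Kalman gain, then update with the observation z
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```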

3.2. Long Short-Term Memory (LSTM)

The Long Short-Term Memory (LSTM) network is a type of recurrent neural network (RNN) designed to learn the long-term dependencies in sequential data [13]. Unlike traditional RNNs, LSTMs mitigate the vanishing gradient problem by incorporating memory cells and gating mechanisms, making them particularly well suited for livestock movement prediction, where historical movement patterns influence future positions.
An LSTM processes sequential data by maintaining a memory cell, $c_t$, which is updated at each time step $t$ through a combination of three gates:
  • The input gate ($i_t$) determines how much of the current input should update the memory cell. It is calculated as
    $i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$,
    where $x_t$ is the input vector at time $t$, $h_{t-1}$ is the hidden state from the previous time step; $W_i$ and $U_i$ are the weight matrices for the input and recurrent connections, respectively; and $b_i$ is the bias vector. The function $\sigma$ represents the sigmoid activation function.
  • The forget gate ($f_t$) controls the extent to which the previous memory cell content is retained. It is computed as
    $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$,
    where $W_f$ and $U_f$ are the weight matrices for the input and recurrent connections, and $b_f$ is the bias vector.
  • The output gate ($o_t$) regulates the output of the memory cell to the hidden state. It is given by
    $o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$,
    where $W_o$ and $U_o$ are the weight matrices for the input and recurrent connections, and $b_o$ is the bias vector.
The memory cell, $c_t$, is updated by combining the forget gate’s contribution from the previous memory cell and the input gate’s influence on the new information. The update is computed as
$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)$,
where $c_{t-1}$ is the memory cell from the previous time step, $W_c$ and $U_c$ are the weight matrices for the input and recurrent connections, $b_c$ is the bias vector, and $\odot$ represents element-wise multiplication.
Finally, the hidden state, $h_t$, is computed as
$h_t = o_t \odot \tanh(c_t)$.
For livestock movement prediction, the LSTM network processes historical positional data, typically represented as a sequence of $(x, y)$ coordinates over a fixed time window. The network architecture includes an input layer to accept sequences of livestock positions, one or more LSTM layers to capture temporal dependencies, a fully connected layer to map the hidden state to the output, and a regression layer to minimize the mean squared error (MSE) between the predicted and actual positions. The loss function is defined as
$L = \frac{1}{N} \sum_{i=1}^{N} \lVert y_i - \hat{y}_i \rVert^2$,
where $y_i$ and $\hat{y}_i$ are the actual and predicted positions, respectively, and $N$ is the number of training samples.
The LSTM’s ability to learn from sequential data makes it effective in capturing the temporal patterns in livestock movements. However, it may struggle with real-time adaptability, especially when faced with noisy or sparse observations.
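As a concrete illustration of this architecture, the sketch below defines a small next-position LSTM in PyTorch and runs one training step under the MSE loss; the hidden size, random placeholder tensors, and learning rate are assumptions rather than the exact configuration reported later in Table 1.

```python
import torch
import torch.nn as nn

class PositionLSTM(nn.Module):
    """Predicts the next (x, y) position from a window of past positions."""
    def __init__(self, hidden_size=64, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, 2)   # map the last hidden state to (x, y)

    def forward(self, seq):                   # seq: (batch, window, 2)
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1, :])         # prediction for the next time step

# One training step with the MSE loss L = (1/N) sum ||y_i - y_hat_i||^2
model = PositionLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
criterion = nn.MSELoss()

windows = torch.randn(32, 25, 2)              # placeholder: 32 windows of 25 positions
targets = torch.randn(32, 2)                  # placeholder: corresponding next positions
optimizer.zero_grad()
loss = criterion(model(windows), targets)
loss.backward()
optimizer.step()
```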

3.3. Spatial–Temporal Attentive LSTM (STA-LSTM)

The Spatial–Temporal Attentive LSTM (STA-LSTM) is an enhancement of the standard LSTM network, designed to focus on both spatial and temporal dependencies in sequential data [14]. While standard LSTMs are effective at modeling temporal patterns, they treat all parts of the sequence equally, which may lead to suboptimal predictions when certain spatial or temporal features are more significant. The STA-LSTM addresses this limitation by incorporating attention mechanisms that dynamically assign weights to important parts of the sequence.
The STA-LSTM introduces an attention layer after the LSTM layer to compute the importance scores for each time step in the sequence. These scores are used to adjust the contributions of different parts of the input sequence to the final prediction. Formally, the attention mechanism operates as follows:
  • Attention score computation: for each time step $t$, an attention score $\alpha_t$ is computed based on the hidden state $h_t$ of the LSTM and a learnable context vector $u$:
    $\alpha_t = \frac{\exp\left(u^{\top} \tanh(W_a h_t + b_a)\right)}{\sum_{k=1}^{T} \exp\left(u^{\top} \tanh(W_a h_k + b_a)\right)}$,
    where $W_a$ is a weight matrix, $b_a$ is a bias vector, and $T$ is the total number of time steps in the input sequence.
  • Weighted hidden state: the hidden states are weighted by their corresponding attention scores to compute a context vector $c$:
    $c = \sum_{t=1}^{T} \alpha_t h_t$.
  • Final prediction: the context vector $c$ is passed through a fully connected layer and activation function to produce the final prediction:
    $\hat{y} = W_c c + b_c$,
    where $W_c$ is the weight matrix, and $b_c$ is the bias vector for the output layer.
In the context of livestock movement prediction, the STA-LSTM processes sequences of $(x, y)$ coordinates while dynamically identifying and emphasizing important temporal and spatial patterns. The attention mechanism enables the model to focus on key movements or events in the sequence that are most predictive of future positions.
The STA-LSTM architecture comprises an input layer for positional sequences, one or more LSTM layers, an attention layer for computing importance scores, and an output layer for regression. By prioritizing critical information in the sequence, the STA-LSTM enhances the predictive capabilities of the standard LSTM, particularly in scenarios with irregular or non-uniform movement patterns. However, the model’s computational complexity is higher due to the additional attention mechanism, which requires learning extra parameters and performing dynamic computations during inference.
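A minimal PyTorch sketch of the temporal attention mechanism described above is given below; the layer sizes are assumptions, and the module mirrors only the score, softmax, and weighted-sum equations rather than the full STA-LSTM of [14].

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Soft attention over LSTM hidden states (scores, weights, context vector)."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.W_a = nn.Linear(hidden_size, hidden_size)   # implements W_a h_t + b_a
        self.u = nn.Parameter(torch.randn(hidden_size))  # learnable context vector u

    def forward(self, h):                                # h: (batch, T, hidden)
        scores = torch.tanh(self.W_a(h)) @ self.u        # unnormalized scores, (batch, T)
        alpha = torch.softmax(scores, dim=1)             # attention weights alpha_t
        c = (alpha.unsqueeze(-1) * h).sum(dim=1)         # context vector c
        return c, alpha

class STALSTM(nn.Module):
    """LSTM encoder followed by temporal attention and a linear output layer."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(2, hidden_size, batch_first=True)
        self.attn = TemporalAttention(hidden_size)
        self.out = nn.Linear(hidden_size, 2)

    def forward(self, seq):                              # seq: (batch, T, 2)
        h, _ = self.lstm(seq)
        c, _ = self.attn(h)
        return self.out(c)                               # predicted next (x, y)
```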

3.4. Extended Kalman Filter with Hidden Markov Model (EKF-HMM)

The Extended Kalman Filter (EKF) combined with the Hidden Markov Model (HMM) is a hybrid approach that leverages statistical filtering for state estimation and probabilistic modeling for behavioral classification [15]. This combination is particularly effective in dynamic systems where the underlying states are influenced by discrete, latent behaviors, such as grazing, walking, or resting in the context of livestock movements.

3.4.1. Extended Kalman Filter (EKF)

The EKF extends the standard Kalman Filter by linearizing the non-linear state transition and observation models using a first-order Taylor expansion. It operates in two main steps:
  • Prediction: the next state $\hat{x}_t$ is predicted based on the non-linear state transition model $f$:
    $\hat{x}_t = f(x_{t-1}) + w_t$,
    where $w_t$ represents process noise, typically modeled as zero-mean Gaussian noise with covariance $Q$.
  • Correction: the predicted state is updated using the observed state $z_t$, based on the non-linear observation model $h$:
    $\hat{x}_t = \hat{x}_t + K_t \left( z_t - h(\hat{x}_t) \right)$,
    where $K_t$ is the Kalman Gain, computed as
    $K_t = P_t H_t^{\top} \left( H_t P_t H_t^{\top} + R \right)^{-1}$.
    Here, $P_t$ is the error covariance matrix, $H_t$ is the Jacobian matrix of $h$ evaluated at $\hat{x}_t$, and $R$ is the measurement noise covariance.
The EKF efficiently estimates continuous states, such as position and velocity, while accounting for non-linearities in the system. However, it does not inherently account for the behavioral changes that influence these states.
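The two EKF steps can be sketched generically as follows; the non-linear models f and h and their Jacobians are supplied by the caller, so this is a schematic helper rather than the specific EKF configuration of the EKF-HMM benchmark.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF predict-correct cycle with user-supplied non-linear models
    f, h and their Jacobians F_jac, H_jac (evaluated at the relevant state)."""
    # Prediction through the non-linear transition model
    x_pred = f(x)
    F = F_jac(x)                      # Jacobian of f at the previous state
    P_pred = F @ P @ F.T + Q

    # Correction: linearize h around the predicted state
    H = H_jac(x_pred)                 # Jacobian of h at the predicted state
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```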

3.4.2. Hidden Markov Model (HMM)

The HMM models the discrete behavioral states influencing livestock movement, such as grazing, walking, or resting. It represents these states as a Markov process with the following components:
  • A finite set of hidden states $S = \{ s_1, s_2, \ldots, s_N \}$.
  • State transition probabilities $A = \{ a_{ij} \}$, where $a_{ij} = P(s_j \mid s_i)$ represents the probability of transitioning from state $s_i$ to $s_j$.
  • Observation probabilities $B = \{ b_j(o_t) \}$, where $b_j(o_t) = P(o_t \mid s_j)$ represents the likelihood of observing $o_t$ given state $s_j$.
  • An initial state distribution $\pi = \{ \pi_i \}$, where $\pi_i = P(s_i)$.
The HMM classifies the behavioral state at each time step based on observed data, such as changes in acceleration or velocity.
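For illustration, a Viterbi decoder over hypothetical behavioral states (resting, grazing, walking) and binned speed observations could look like the sketch below; the transition and emission probabilities are invented for the example and are not taken from this study.

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state sequence for a discrete-observation HMM.

    obs : observation indices; A : state transition matrix;
    B : observation probabilities B[state, symbol]; pi : initial distribution.
    """
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))                 # best log-probabilities per state
    psi = np.zeros((T, N), dtype=int)        # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        trans = delta[t - 1][:, None] + np.log(A)    # rows: from-state, cols: to-state
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) + np.log(B[:, obs[t]])
    states = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):            # backtrack along the best path
        states.append(int(psi[t, states[-1]]))
    return states[::-1]

# Hypothetical 3-state example: 0 = resting, 1 = grazing, 2 = walking;
# observations are binned speed levels (0 = low, 1 = medium, 2 = high).
A = np.array([[0.80, 0.15, 0.05], [0.10, 0.80, 0.10], [0.05, 0.25, 0.70]])
B = np.array([[0.90, 0.09, 0.01], [0.20, 0.70, 0.10], [0.05, 0.25, 0.70]])
pi = np.array([0.5, 0.3, 0.2])
print(viterbi([0, 0, 1, 2, 2, 1], A, B, pi))
```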

3.4.3. Integration of EKF and HMM

In the EKF-HMM model, the EKF provides continuous state estimates, which are then used as inputs to the HMM for behavioral classification. The HMM refines these estimates by capturing the influence of discrete behavioral states on the continuous dynamics.
For livestock movement prediction, this integrated approach allows the system to simultaneously estimate position and velocity while identifying the underlying behavioral state. However, the EKF-HMM’s reliance on predefined behavioral models and its computational complexity can limit its applicability in scenarios with highly irregular or unpredictable movement patterns. Although the models discussed demonstrate advanced capabilities, they face limitations in handling noisy and dynamic movement patterns. To address these challenges, this paper proposes the LSTM-H model, which integrates predictive and corrective approaches for enhancing accuracy.

4. Methodology

The proposed LSTM-hybrid model in this paper addresses the limitations of existing prediction models by integrating sequential learning from LSTM networks with the real-time correction capabilities of the KF. This section provides an overview of the dataset, the generation of synthetic noisy observations to simulate real-world scenarios, and a detailed explanation of the architecture and workflow of the LSTM-hybrid model.

4.1. Dataset Description

The dataset used in this study comprised the GPS movement data for 18 cattle, collected across three farm trials at Easter Howgate Farm, Edinburgh, U.K., from June 2015 to October 2016, with an average duration of 10 days per animal [28]. Data were recorded using Afimilk Silent Herdsman Collars [29], which provided 3-axis accelerometer measurements (x, y, z) at 10 samples per second, yielding millions of data points per animal. The X axis was aligned parallel to the animal’s body, the Y axis perpendicular to the ground, and the Z axis perpendicular to the body, though the Z coordinate (altitude) was excluded due to its irrelevance for flat farm terrains. Each entry included
  • Timestamp: the time at which the livestock position was recorded (ISO 8601 format, e.g., YYYY-MM-DD HH:mm:ss.SSS).
  • X coordinate: the livestock’s horizontal position in meters within the farm boundary.
  • Y coordinate: the livestock’s vertical position in meters within the farm boundary.
  • Z coordinate: excluded, as noted above.
The dataset captured both active and resting periods, reflecting natural cattle behavior, including movement, eating, rumination, and sleep. To align with practical livestock monitoring objectives, which prioritize tracking during peak activity, the data were filtered to focus on the 7 a.m. to 7 p.m. periods, when cattle exhibit behaviors like eating and rumination, as supported by the Rumiwatch Halter classifications [30] (though not used in this study). The data were resampled to one sample per minute, resulting in 7200 samples per animal over 10 days. This resampling balanced computational efficiency and prediction accuracy, as second-by-second updates are unnecessary for livestock health monitoring. Figure 1 illustrates the movements of a single animal over a 3 h duration, with data resampled at 60 s intervals for clarity. Figure 2 presents the movement heatmaps for cattle IDs 16 to 18 across four 3 h time intervals, showcasing their X and Y positions within a 3600 m × 3700 m area that encompassed all livestock geo-locations across the three farms. This area could include land beyond farm boundaries, but continuity was assumed to visualize movement patterns effectively.
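A minimal preprocessing sketch consistent with the filtering and resampling described above is shown below, assuming the per-animal data are available as a CSV with timestamp, x, and y columns (the file and column names are assumptions, not the dataset's actual schema).

```python
import pandas as pd

# Illustrative preprocessing for one animal; "cattle_gps.csv" and the column
# names are placeholders for however the raw positions are stored.
df = pd.read_csv("cattle_gps.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Keep the 7 a.m. to 7 p.m. activity window, then resample to 1-minute means.
daytime = df.between_time("07:00", "19:00")
resampled = daytime[["x", "y"]].resample("1min").mean().dropna()

print(resampled.head())   # ~720 samples per day, ~7200 over a 10-day trial
```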

4.2. Synthetic Data Generation

To simulate UAV-based livestock monitoring, synthetic data were generated by adding Gaussian noise to the GPS-derived cattle movement coordinates from the dataset described in Section 4.1 as follows:
$x_{\mathrm{synthetic}} = x_{\mathrm{actual}} + \epsilon$,
where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ represents Gaussian noise with zero mean and variance $\sigma^2$. The noise level was set to 5% of the livestock’s coordinate range, reflecting typical geo-location inaccuracies in UAV aerial imagery, as reported in studies on UAV-based cattle tracking [31,32,33,34]. This approach replicates the real-world errors expected when extracting cattle positions from images captured with UAV-mounted cameras.
The synthetic data were critical because real-time UAV imagery, which uses detection algorithms to estimate cattle geo-locations, was unavailable for this GPS-based dataset. By adding 5% Gaussian noise, the synthetic data mimicked the noisy observations UAVs would provide, enabling the LSTM-H model to handle such inaccuracies. The model’s primary purpose is to predict livestock movements, allowing UAVs to dynamically adapt their flight trajectories to track animals across large farms efficiently. Unlike stationary cameras, which would require numerous units to cover expansive farm areas, UAVs provide cost-effective, dynamic imaging of varying regions, essential for real-time trajectory optimization [35]. The synthetic data compensate for the lack of real UAV observations, enhance model robustness against noise, and support future drone-based monitoring systems.
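The noise-injection step can be sketched as follows, interpreting the 5% level as the standard deviation of the Gaussian noise relative to each coordinate's range (an assumption about how the percentage maps to σ).

```python
import numpy as np

def add_observation_noise(coords, noise_fraction=0.05, seed=0):
    """Add zero-mean Gaussian noise scaled to a fraction of the coordinate range.

    coords : (N, 2) array of actual (x, y) positions in metres.
    noise_fraction : 0.05 corresponds to the 5% level described above
    (interpreted here as the per-axis standard deviation).
    """
    rng = np.random.default_rng(seed)
    coord_range = coords.max(axis=0) - coords.min(axis=0)   # per-axis range
    sigma = noise_fraction * coord_range                    # per-axis std dev
    return coords + rng.normal(0.0, sigma, size=coords.shape)

# Example: synthetic "UAV observations" for a straight-line track
actual = np.column_stack([np.linspace(0, 3600, 181), np.linspace(0, 3700, 181)])
synthetic = add_observation_noise(actual)
```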

4.3. Proposed LSTM-Hybrid Model

The LSTM-hybrid model, integrating a Long Short-Term Memory (LSTM) network with a standard Kalman Filter (KF), was selected over the Extended Kalman Filter with Hidden Markov Model (EKF-HMM) benchmark for its effectiveness in predicting livestock movements on our GPS dataset. Unlike EKF-HMM, which targets non-linear systems and behavioral classification, the LSTM-H model leverages the standard KF’s simplicity for linear, constant-velocity movements, and LSTM’s ability to learn temporal dependencies. This avoids EKF-HMM’s computational complexity and reliance on predefined behavioral states, irrelevant for our continuous position prediction task.
In the proposed model, the LSTM predicts the next livestock position from a sequence of 25 normalized (X, Y) coordinates, optimized through extensive trials for prediction accuracy across diverse scenarios. This prediction initializes the KF’s state estimate. The KF, using a constant velocity model and state transition matrix, predicts the next position and corrects it by comparing to synthetic UAV observations with 5% Gaussian noise, mimicking real-world inaccuracies. The Kalman Gain balances the prediction and observation, prioritizing the more reliable source to minimize noise-induced errors. As discussed in Section 5, the LSTM-H model outperforms EKF-HMM by achieving higher prediction accuracy, with robust noise-handling and efficient real-time correction. The KF operates in two steps:
  • Prediction step: The KF takes the LSTM-predicted position, $\hat{x}_t^{\mathrm{LSTM}}$, and projects it forward based on the system’s motion model:
    $\hat{x}_t = A \hat{x}_{t-1} + B u_t$,
    where $\hat{x}_t$ is the predicted state at time $t$, $A$ is the state transition matrix, $B$ is the control matrix, and $u_t$ is the control input. In this case, the LSTM prediction $\hat{x}_t^{\mathrm{LSTM}}$ acts as the initial input for the KF.
  • Correction step: The KF refines the predicted state by incorporating noisy real-time observations $z_t$, such as UAV-collected positions:
    $\hat{x}_t = \hat{x}_t + K_t (z_t - H \hat{x}_t)$,
    where $K_t$ is the Kalman Gain, $H$ is the observation matrix, and $z_t$ is the observed position. The Kalman Gain, $K_t$, determines the weight given to the observation versus the prediction and is computed as
    $K_t = P_t H^{\top} (H P_t H^{\top} + R)^{-1}$,
    where $P_t$ is the error covariance matrix and $R$ is the measurement noise covariance.
The parameters for the LSTM network and Kalman filter in the LSTM-H model were selected to optimize prediction accuracy and computational efficiency, as detailed in Table 1. These settings, validated through experiments on the cattle movement dataset, ensure robust performance across diverse movement patterns and noisy UAV-based observations. The workflow of the LSTM-hybrid model is summarized in Algorithm 1.
Table 1. Tuning parameters and rationale for LSTM, STA-LSTM, and Kalman Filter in LSTM-H.
Parameter | Value | Rationale
Max Epochs | 200 | Ensures convergence without overfitting, validated by monitoring validation loss.
Mini-Batch Size | 64 | Balances GPU memory usage and stable gradient updates for the dataset size.
Initial Learning Rate | 0.002 | Enables rapid initial learning, with a drop factor of 0.005 every 80 epochs for stability.
Learning Rate Drop Factor | 0.005 | Facilitates fine-tuning in later training stages to capture complex temporal patterns.
Learning Rate Drop Period | 80 | Allows sufficient epochs per learning rate for stable convergence.
Gradient Threshold | 1.2 | Prevents gradient explosion, ensuring stable training.
L2 Regularization | 0.002 | Reduces overfitting, validated through cross-validation.
Variance Coefficient (σ) | 0.4 | Scales error calculations to match synthetic noise level (5% of coordinate range).
Process Noise Covariance (Q) | [1, 1] | Reflects expected movement variability, prioritizing LSTM predictions.
Measurement Noise Covariance (R) | [1, 1] | Matches synthetic noise variance, correcting noisy UAV observations.
Control Noise | 0.1 | Ensures filter stability for constant velocity model.
Algorithm 1 LSTM-hybrid model workflow
1: Input: Historical livestock positions, noisy real-time observations
2: Output: Corrected livestock position predictions
3: Train the LSTM model on historical movement data
4: for each time step t do
5:   Predict the livestock’s position using the LSTM ($\hat{x}_t^{\mathrm{LSTM}}$)
6:   Use the LSTM prediction as the initial state for the KF
7:   Perform the KF prediction step to estimate the next state
8:   Perform the KF correction step using real-time noisy observations
9: end for
This hybrid framework effectively combines deep learning and statistical filtering for accurate and robust livestock movement prediction. By passing the LSTM’s predictions as the input to the Kalman Filter, the model benefits from the temporal learning of the LSTM and the noise handling and adaptability of the KF. This integration makes the hybrid model particularly well suited for dynamic and noisy environments.
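Putting the pieces together, a compact sketch of the Algorithm 1 loop is shown below. It assumes a trained next-position predictor such as the PositionLSTM sketch from Section 3.2, and the constant-velocity matrices and unit noise covariances are illustrative, so it is a schematic of the workflow rather than the exact implementation used in this study.

```python
import numpy as np
import torch

def hybrid_track(model, history, observations, window=25, dt=60.0):
    """Algorithm 1 sketch: an LSTM next-position predictor seeds a constant-
    velocity KF, which is then corrected with noisy UAV observations.

    model        : trained predictor mapping a (1, window, 2) tensor to (1, 2)
    history      : list of past (x, y) tuples, at least `window` long
    observations : iterable of noisy (x, y) observations, one per time step
    """
    A = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])      # constant-velocity transition
    H = np.hstack([np.eye(2), np.zeros((2, 2))])       # only position is observed
    Q, R = np.eye(4), np.eye(2)                        # assumed noise covariances
    x = np.array([*history[-1], 0.0, 0.0])             # state [px, py, vx, vy]
    P = np.eye(4)
    corrected = []
    for z in observations:
        seq = torch.tensor(history[-window:], dtype=torch.float32).unsqueeze(0)
        with torch.no_grad():
            x[:2] = model(seq).squeeze(0).numpy()      # LSTM prediction seeds the KF
        x, P = A @ x, A @ P @ A.T + Q                  # KF prediction step
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)   # KF correction step
        P = (np.eye(4) - K @ H) @ P
        corrected.append(x[:2].copy())
        history.append(tuple(x[:2]))                   # roll the input window forward
    return np.array(corrected)
```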
The methodologies described in this section, including the dataset preparation, synthetic data generation, and the design of the LSTM-hybrid model, form the foundation for evaluating and comparing the prediction models. The next section presents the results of this evaluation, highlighting the performance of the proposed hybrid model in comparison to state-of-the-art alternatives.

5. Results

This section presents the evaluation results of the proposed LSTM-H model alongside the baseline models: LSTM, STA-LSTM, KF, and EKF-HMM. The performance of each model was assessed using the mean prediction error, visualized through multiple figures, which are summarized in Table 2.

5.1. First-Step Prediction Accuracy Comparison

Figure 3 compares the actual positions of cow #18 with the predicted positions using the LSTM model and the proposed LSTM-H model. While the LSTM model captures the general trend in cattle movements, it exhibits a significant prediction error at each point. In contrast, the corrected predictions generated by the LSTM-H model align much more closely with the actual livestock movements, demonstrating its superior accuracy.
The comparison between the LSTM, STA-LSTM, and the LSTM-H models is illustrated in Figure 4a, which shows prediction errors over time. Both the LSTM and STA-LSTM models exhibit significantly higher and more volatile errors (9.7× and 9.5×, respectively) compared to the LSTM-H model, highlighting the advantages of incorporating real-time corrections. The LSTM-H model, on the other hand, achieves consistently lower errors by combining the LSTM’s sequential learning with the KF’s real-time adaptability. A hybrid version of the STA-LSTM model was also evaluated and produced very similar results to those of the LSTM-H model. For simplicity and clarity, these results are not included in the analysis.
The results for the LSTM and STA-LSTM models were achieved using the training parameters listed in Table 1. These parameters were selected to achieve comparable results for both models, balancing prediction accuracy and computational efficiency. However, with different tunings, better or worse results were obtained for each model. This tradeoff between accuracy and computational cost requires further study to find an optimal solution for specific scenarios, which was beyond the scope of this study.
As illustrated in Figure 4b, the KF and EKF-HMM models also exhibit significantly higher mean errors compared to the LSTM-H model (8.9× and 14.8×, respectively). The KF model, while computationally efficient, fails to account for long-term temporal dependencies, resulting in a mean error of 102.04 m. The EKF-HMM model, despite integrating behavioral classification, suffers from high complexity and large error variance, with a mean error of 170.29 m.

5.2. Error Distribution and Cumulative Analysis

Figure 5 provides a boxplot visualization of the error distributions of all models. The LSTM-H model achieves the smallest error spread, with median errors and outliers significantly lower than the other models. This highlights its reliability and robustness in predicting livestock movements under noisy and dynamic conditions.
Figure 6 illustrates the cumulative distribution function (CDF) of the prediction errors. The LSTM-H model achieves a 90% probability of errors below 13 m, compared to significantly higher thresholds for the other models. This 13 m threshold reflects per-time-point prediction errors rather than an error accumulated over the horizon, and it is acceptable for livestock monitoring given the expansive farm areas and inherent GPS noise. This demonstrates the model’s superior accuracy and reliability for livestock movement prediction.
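For reference, the 90% figure corresponds to reading the empirical CDF of the per-step errors at 0.9; with an array of per-step errors (placeholder values below), it can be computed as follows.

```python
import numpy as np

errors = np.abs(np.random.default_rng(1).normal(0, 8, size=5000))  # placeholder errors (m)
p90 = np.percentile(errors, 90)            # error value below which 90% of samples fall
cdf_at_13 = (errors <= 13.0).mean()        # empirical CDF evaluated at 13 m
print(f"90th-percentile error: {p90:.1f} m, P(error <= 13 m) = {cdf_at_13:.2f}")
```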

5.3. Prediction Accuracy over 30 Steps

The evaluation spanned a 30 min prediction horizon, with each step representing 1 min. Figure 7 highlights the prediction errors for all models over these steps. While the proposed model’s performance declines gradually with time, it consistently outperforms the benchmarks. The STA-LSTM model, with its spatiotemporal attention mechanism, demonstrates better accuracy than the standard LSTM model as the number of steps increases, even though both are aligned at the first step. However, this improvement comes at the cost of higher computational demands, and the STA-LSTM’s mean error over the 30-step horizon is still 4.3 times that of the proposed model.

5.4. Summary of Model Performance

Table 2 presents the overall performance of all models, highlighting their mean prediction errors for both the first step and the average across 30 prediction steps. The LSTM-H model demonstrates remarkable accuracy, with the other models exhibiting errors that are 8.9× to 14.8× higher in the first step and 4.3× to 7.5× higher over the entire 30-step horizon.

6. Conclusions and Discussion

This study developed LSTM-H, a hybrid deep-learning model that enhances livestock movement prediction by integrating LSTM’s sequential learning with the Kalman Filter’s real-time corrections. Evaluations against benchmark models (KF, LSTM, STA-LSTM, EKF-HMM) demonstrated its superior accuracy, with LSTM-H achieving a mean prediction error of 11.51 m for the first step and 40.68 m over a 30-step horizon, a 4.3× to 14.8× improvement. By combining deep-learning-based sequence modeling with real-time corrections, LSTM-H effectively adapts to noisy and dynamic movement patterns, outperforming traditional models that struggle with non-linear behaviors. This robustness makes LSTM-H a promising tool for precision AI-driven livestock monitoring, particularly in challenging environments where UAV-based observations are subject to noise and variability.
The implications of LSTM-H for agriculture and livestock management are significant. Its high-precision tracking capabilities enable farmers to minimize livestock loss by identifying stray animals, enhance disease control through the early detection of abnormal movement patterns, and optimize grazing patterns to improve land use efficiency. These features support data-driven decision making, facilitating efficient resource allocation, such as water and feed distribution, as well as strategic infrastructure planning, like fencing or shelter placement, particularly when traditional methods are economically or practically infeasible [36]. By enhancing monitoring accuracy, LSTM-H promotes sustainable farming practices to meet the growing global demand for animal products, offering economic benefits for resource-limited farmers through remote monitoring and instant alerts for sick or distressed animals [37]. Furthermore, LSTM-H’s predictive capabilities can extend beyond livestock to broader agricultural applications. When integrated with UAVs, it supports crop disease surveillance, pest detection, and precision fertilizer application by leveraging real-time and historical data, reducing waste, minimizing environmental impact, and optimizing crop yields [38,39,40,41].
Future research should explore hyperparameter optimization to enhance computational efficiency, reducing processing demands for real-time applications. Integrating LSTM-H with UAV trajectory planning could enable autonomous, scalable livestock monitoring systems, minimizing energy consumption and extending coverage. Additionally, validating LSTM-H with real-world UAV datasets would strengthen its practical applicability, addressing the limitations of synthetic data. These advancements can drive AI-powered solutions that improve sustainability and efficiency in modern agriculture, fostering resilient livestock management systems.

Author Contributions

A.B. led the conceptualization, methodology, software development, formal analysis, investigation, data curation, visualization, and project administration. He also wrote the original draft for Section 3, Section 4, and Section 5. E.Y. contributed to conceptualization, led the validation, and wrote the original draft for Section 1, Section 2, and Section 6, while sharing review and editing duties equally. S.S.K. supported the conceptualization, led the supervision, and contributed equally to review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this study are publicly available GPS movement data for cattle, collected at Easter Howgate Farm, Edinburgh, U.K., and accessible at https://doi.org/10.5281/zenodo.4064802 (accessed on 30 April 2025), as described in [28]. No new data were generated during this study.

DURC Statement

This research was limited to the field of livestock movement prediction, which is beneficial for precision agriculture and UAV-based monitoring and does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of the research involving AI-driven animal tracking and confirm that all necessary precautions have been taken to prevent potential misuse. As an ethical responsibility, the authors strictly adhered to the relevant national and international laws regarding Dual-Use Research of Concern (DURC). The authors advocate for responsible deployment, ethical considerations, regulatory compliance, and transparent reporting to mitigate misuse risks and foster beneficial outcomes.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ergunsah, S.; Tümen, V.; Kosunalp, S.; Demir, K. Energy-efficient animal tracking with multi-unmanned aerial vehicle path planning using reinforcement learning and wireless sensor networks. Concurr. Comput. Pract. Exp. 2023, 35, e7527. [Google Scholar] [CrossRef]
  2. Neethirajan, S.; Kemp, B. Digital livestock farming. Sens. Bio-Sens. Res. 2021, 32, 100408. [Google Scholar] [CrossRef]
  3. Moradi, S.; Bokani, A.; Hassan, J. UAV-based smart agriculture: A review of UAV sensing and applications. In Proceedings of the 2022 32nd International Telecommunication Networks and Applications Conference (ITNAC), Wellington, New Zealand, 30 November–2 December 2022; pp. 181–184. [Google Scholar]
  4. Neethirajan, S. The role of sensors, big data and machine learning in modern animal farming. Sens. Bio-Sens. Res. 2020, 29, 100367. [Google Scholar] [CrossRef]
  5. Mukhamediev, R.I.; Yakunin, K.; Aubakirov, M.; Assanov, I.; Kuchin, Y.; Symagulov, A.; Levashenko, V.; Zaitseva, E.; Sokolov, D.; Amirgaliyev, Y. Coverage path planning optimization of heterogeneous UAVs group for precision agriculture. IEEE Access 2023, 11, 5789–5803. [Google Scholar] [CrossRef]
  6. Benalaya, N.; Adjih, C.; Amdouni, I.; Laouiti, A.; Saidane, L. UAV search path planning for livestock monitoring. In Proceedings of the 2022 IEEE 11th IFIP International Conference on Performance Evaluation and Modeling in Wireless and Wired Networks (PEMWN), Rome, Italy, 8–10 November 2022; pp. 1–6. [Google Scholar]
  7. Salehi, S.; Bokani, A.; Hassan, J.; Kanhere, S.S. Aetd: An application-aware, energy-efficient trajectory design for flying base stations. In Proceedings of the 2019 IEEE 14th Malaysia International Conference on Communication (MICC), Selangor, Malaysia, 2–4 December 2019; pp. 19–24. [Google Scholar]
  8. Salehi, S.; Hassan, J.; Bokani, A. An optimal multi-uav deployment model for uav-assisted smart farming. In Proceedings of the 2022 IEEE Region 10 Symposium (TENSYMP), Mumbai, India, 1–3 July 2022; pp. 1–6. [Google Scholar]
  9. Monteiro, A.; Santos, S.; Gonçalves, P. Precision agriculture for crop and livestock farming—Brief review. Animals 2021, 11, 2345. [Google Scholar] [CrossRef]
  10. Strong, A.K.; Martin, S.M.; Bevly, D.M. Utilizing Hidden Markov Models to Classify Maneuvers and Improve Estimates of an Unmanned Aerial Vehicle. IFAC-PapersOnLine 2021, 54, 449–454. [Google Scholar] [CrossRef]
  11. Kant, R.; Saini, P.; Kumari, J. Long short-term memory auto-encoder-based position prediction model for fixed-wing UAV during communication failure. IEEE Trans. Artif. Intell. 2022, 4, 173–181. [Google Scholar] [CrossRef]
  12. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. Mar. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  13. Song, X.; Liu, Y.; Xue, L.; Wang, J.; Zhang, J.; Wang, J.; Jiang, L.; Cheng, Z. Time-series well performance prediction based on Long Short-Term Memory (LSTM) neural network model. J. Pet. Sci. Eng. 2020, 186, 106682. [Google Scholar] [CrossRef]
  14. Jiang, R.; Xu, H.; Gong, G.; Kuang, Y.; Liu, Z. Spatial-temporal attentive LSTM for vehicle-trajectory prediction. ISPRS Int. J. Geo-Inf. 2022, 11, 354. [Google Scholar] [CrossRef]
  15. Farjad-Pezeshk, N.; Asadpour, V. Robust chaotic parameter modulation based on hybrid extended Kalman filter and hidden Markov model detector. In Proceedings of the 2011 19th Iranian Conference on Electrical Engineering, Tehran, Iran, 17–19 May 2011; pp. 1–4. [Google Scholar]
  16. Al-Absi, M.A.; Fu, R.; Kim, K.H.; Lee, Y.S.; Al-Absi, A.A.; Lee, H.J. Tracking unmanned aerial vehicles based on the Kalman filter considering uncertainty and error aware. Electronics 2021, 10, 3067. [Google Scholar] [CrossRef]
  17. Alos, A.; Dahrouj, Z. Using MLSTM and multioutput convolutional LSTM algorithms for detecting anomalous patterns in streamed data of unmanned aerial vehicles. IEEE Aerosp. Electron. Syst. Mag. 2021, 37, 6–15. [Google Scholar] [CrossRef]
  18. Gośliński, J.; Giernacki, W.; Królikowski, A. A nonlinear filter for efficient attitude estimation of unmanned aerial vehicle (UAV). J. Intell. Robot. Syst. 2019, 95, 1079–1095. [Google Scholar] [CrossRef]
  19. Wu, Q.; Zhang, M.; Dong, C.; Feng, Y.; Yuan, Y.; Feng, S.; Quek, T.Q. Routing protocol for heterogeneous FANETs with mobility prediction. China Commun. 2022, 19, 186–201. [Google Scholar] [CrossRef]
  20. Nicolas, G.; Apolloni, A.; Coste, C.; Wint, G.W.; Lancelot, R.; Gilbert, M. Predictive gravity models of livestock mobility in Mauritania: The effects of supply, demand and cultural factors. PLoS ONE 2018, 13, e0199547. [Google Scholar] [CrossRef] [PubMed]
  21. Zhao, K.; Jurdak, R. Understanding the spatiotemporal pattern of grazing cattle movement. Sci. Rep. 2016, 6, 31967. [Google Scholar] [CrossRef]
  22. Suseendran, G.; Balaganesh, D. Cattle movement monitoring and location prediction system using Markov decision process and IoT sensors. In Proceedings of the 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 28–30 April 2021; pp. 188–192. [Google Scholar]
  23. Bajardi, P.; Barrat, A.; Savini, L.; Colizza, V. Optimizing surveillance for livestock disease spreading through animal movements. J. R. Soc. Interface 2012, 9, 2814–2825. [Google Scholar] [CrossRef]
  24. Guo, Y.; Poulton, G.; Corke, P.; Bishop-Hurley, G.; Wark, T.; Swain, D.L. Using accelerometer, high sample rate GPS and magnetometer data to develop a cattle movement and behaviour model. Ecol. Model. 2009, 220, 2068–2075. [Google Scholar] [CrossRef]
  25. Nielsen, L.R.; Pedersen, A.R.; Herskin, M.S.; Munksgaard, L. Quantifying walking and standing behaviour of dairy cows using a moving average based on output from an accelerometer. Appl. Anim. Behav. Sci. 2010, 127, 12–19. [Google Scholar] [CrossRef]
  26. Jiandong, Z.; Yukun, G.; Lihui, Z.; Qiming, Y.; Guoqing, S.; Yong, W. Real-time UAV path planning based on LSTM network. J. Syst. Eng. Electron. 2024, 35, 374–385. [Google Scholar]
  27. Neethirajan, S. Artificial intelligence and sensor innovations: Enhancing livestock welfare with a human-centric approach. Hum.-Centric Intell. Syst. 2024, 4, 77–92. [Google Scholar] [CrossRef]
  28. Pavlovic, D.; Davison, C.; Hamilton, A.; Marko, O.; Atkinson, R.; Michie, C.; Crnojevic, V.; Andonovic, I.; Bellekens, X.; Tachtatzis, C. Classification of Cattle Behaviours Using Neck-Mounted Accelerometer-Equipped Collars and Convolutional Neural Networks. Sensors 2021, 21, 4050. [Google Scholar] [CrossRef]
  29. Afimilk Ltd. Afimilk Silent Herdsman-Smart Cow Neck Collar. 2016. Available online: http://www.afimilk.com (accessed on 26 April 2025).
  30. Weinert-Nelson, J.R.; Werner, J.; Jacobs, A.A.; Anderson, L.; Williams, C.A.; Davis, B.E. Evaluation of the RumiWatch system as a benchmark to monitor feeding and locomotion behaviors of grazing dairy cows. J. Dairy Sci. 2025, 108, 735–749. [Google Scholar] [CrossRef]
  31. Luo, W.; Zhang, G.; Shao, Q.; Zhao, Y.; Wang, D.; Zhang, X.; Liu, K.; Li, X.; Liu, J.; Wang, P.; et al. An efficient visual servo tracker for herd monitoring by UAV. Sci. Rep. 2024, 14, 10463. [Google Scholar] [CrossRef]
  32. Shen, P.; Wang, F.; Luo, W.; Zhao, Y.; Li, L.; Zhang, G.; Zhu, Y. Based on improved joint detection and tracking of UAV for multi-target detection of livestock. Heliyon 2024, 10, e38316. [Google Scholar] [CrossRef] [PubMed]
  33. Luo, W.; Zhang, G.; Yuan, Q.; Zhao, Y.; Chen, H.; Zhou, J.; Meng, Z.; Wang, F.; Li, L.; Liu, J.; et al. High-precision tracking and positioning for monitoring Holstein cattle. PLoS ONE 2024, 19, e0302277. [Google Scholar] [CrossRef]
  34. Fleming, C.; Drescher-Lehman, J.; Noonan, M.; Akre, T.; Brown, D.; Cochrane, M.; Dejid, N.; DeNicola, V.; DePerno, C.; Dunlop, J.; et al. A comprehensive framework for handling location error in animal tracking data. bioRxiv 2020. [Google Scholar] [CrossRef]
  35. Liu, K.; Zheng, J. UAV trajectory optimization for time-constrained data collection in UAV-enabled environmental monitoring systems. IEEE Internet Things J. 2022, 9, 24300–24314. [Google Scholar] [CrossRef]
  36. Nyamuryekung’e, S. Transforming ranching: Precision livestock management in the Internet of Things era. Rangelands 2024, 46, 13–22. [Google Scholar] [CrossRef]
  37. Panda, S.S.; Terrill, T.H.; Siddique, A.; Mahapatra, A.K.; Morgan, E.R.; Pech-Cervantes, A.A.; Van Wyk, J.A. Development of a decision support system for animal health management using geo-information technology: A novel approach to precision livestock management. Agriculture 2024, 14, 696. [Google Scholar] [CrossRef]
  38. Velusamy, P.; Rajendran, S.; Mahendran, R.K.; Naseer, S.; Shafiq, M.; Choi, J.G. Unmanned Aerial Vehicles (UAV) in precision agriculture: Applications and challenges. Energies 2021, 15, 217. [Google Scholar] [CrossRef]
  39. Toscano, F.; Fiorentino, C.; Capece, N.; Erra, U.; Travascia, D.; Scopa, A.; Drosos, M.; D’Antonio, P. Unmanned Aerial Vehicle for Precision Agriculture: A Review. IEEE Access 2024, 12, 69188–69205. [Google Scholar] [CrossRef]
  40. Kishor, I.; Agrawal, D.; Jain, G.; Dadhich, A.; Gautam, S. Crop fertilization using unmanned aerial vehicles (UAV’s). In Recent Advances in Sciences, Engineering, Information Technology & Management; CRC Press: Boca Raton, FL, USA, 2025; pp. 312–320. [Google Scholar]
  41. Shehab, M.M.I.; Jany, M.R.; Alam, S.; Sarker, A.; Hossen, M.S.; Shufian, A. Precision Farming with Autonomous Drones: Real-time Data and Efficient Fertilization Techniques. In Proceedings of the 2025 2nd International Conference on Advanced Innovations in Smart Cities (ICAISC), Jeddah, Saudi Arabia, 9–11 February 2025; pp. 1–6. [Google Scholar]
Figure 1. Cattle movements over time (sampling Interval: 60 s, cattle: 18, duration: 3 h).
Figure 2. Movement heatmaps for cattle IDs 16–18 across four 3 h time intervals, showing X and Y positions within a 3600 m × 3700 m area.
Figure 3. Cattle movement prediction over time: actual positions against LSTM- and LSTM-H-predicted geo-locations (sampling Interval: 60 s, cow: 18, duration: 3 h).
Figure 4. Comparison of prediction errors over time for the LSTM-H model against (a) LSTM and STA-LSTM, (b) KF and EKF-HMM. The LSTM-H model significantly outperforms the other models.
Figure 5. Boxplot of prediction errors of all models. The LSTM-H model achieves the lowest error spread, emphasizing its robustness compared to other models.
Figure 6. Cumulative distribution function (CDF) of prediction errors. The LSTM-H model exhibits the highest probability of achieving low errors among all other models.
Figure 7. Mean errors of all models across 30 prediction steps. The LSTM-H model demonstrates the lowest errors across the entire horizon, significantly outperforming the other models.
Table 2. Prediction errors and relative increase compared to LSTM-H for step 1 and the average of steps 1–30.
Model | Step 1 Error | Increase | Step 1–30 Errors | Increase
LSTM-H | 11.51 m | - | 40.68 m | -
LSTM | 111.87 m | 9.7× | 220.64 m | 5.4×
STA-LSTM | 109.11 m | 9.5× | 173.40 m | 4.3×
KF | 102.04 m | 8.9× | 294.92 m | 7.2×
EKF-HMM | 170.29 m | 14.8× | 306.57 m | 7.5×
Note: The proposed model’s results in the first row are highlighted to distinguish them from the others.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
