Article

LSTM-Based Projectile Trajectory Estimation in a GNSS-Denied Environment †

by
Alicia Roux
1,2,*,
Sébastien Changey
1,
Jonathan Weber
2 and
Jean-Philippe Lauffenburger
2
1
French-German Research Institute of Saint-Louis, 5 Rue du Général Cassagnou, 68300 Saint-Louis, France
2
Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), Université de Haute-Alsace, 2 Rue des Frères Lumière, 68100 Mulhouse, France
*
Author to whom correspondence should be addressed.
† Presented at the Conference on Artificial Intelligence for Defense, DGA Maîtrise de l’Information, Rennes, France, 16–17 November 2022.
Sensors 2023, 23(6), 3025; https://doi.org/10.3390/s23063025
Submission received: 6 February 2023 / Revised: 27 February 2023 / Accepted: 5 March 2023 / Published: 10 March 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine-Learning-Based Localization)

Abstract:
This paper presents a deep learning approach to estimate a projectile trajectory in a GNSS-denied environment. For this purpose, Long Short-Term Memories (LSTMs) are trained on projectile fire simulations. The network inputs are the embedded Inertial Measurement Unit (IMU) data, the magnetic field reference, flight parameters specific to the projectile and a time vector. This paper focuses on the influence of LSTM input data pre-processing, i.e., normalization and navigation frame rotation, which rescale 3D projectile data to similar variation ranges. In addition, the effect of the sensor error model on the estimation accuracy is analyzed. LSTM estimates are compared to a classical Dead-Reckoning algorithm, and the estimation accuracy is evaluated via multiple error criteria and the position errors at the impact point. The results, presented for a finned projectile, clearly show the contribution of Artificial Intelligence (AI), especially for the projectile position and velocity estimates. Indeed, the LSTM estimation errors are reduced compared to a classical navigation algorithm as well as to GNSS-guided finned projectiles.

1. Introduction

Projectile navigation is mainly based on IMU (Inertial Measurement Unit) and GNSS (Global Navigation Satellite System) measurements due to the high dynamic constraints imposed on projectiles and the low-cost sensor requirements. Classically, IMU and GNSS measurements are combined by Kalman filtering to estimate a trajectory: the IMU measurements are integrated to predict the trajectory, and the prediction is then corrected by the GNSS receiver measurements [1,2,3,4]. Nevertheless, GNSS signals are not always available, due to the environment configuration, and are vulnerable to jamming and spoofing [5,6,7]. Consequently, users aim to exclude these measurements from trajectory estimation [8,9,10].
Moreover, Artificial Intelligence (AI) is increasingly used for defense applications such as surveillance, reconnaissance, tracking or navigation [11,12,13,14,15]. Indeed, AI is an interesting approach to correct model approximations, to limit the influence of incomplete or incorrect measurements or to determine complex models from system data. Therefore, this paper presents an AI-based projectile trajectory estimation method in a GNSS-denied environment using only the embedded IMU and pre-flight parameters specific to the ammunition.
Considering that a trajectory is a time series, AI provides interesting approaches for its estimation. Indeed, Recurrent Neural Networks (RNNs) are perfectly adapted to time series prediction, as illustrated in [16,17,18]. RNNs contain feedback loops, i.e., they memorize past data through hidden states to predict future data. However, the simplest form of RNN exhibits convergence issues during the training step, such as vanishing or exploding gradient problems. Thus, another form of recurrent network is considered: the Long Short-Term Memory (LSTM) [19,20]. An LSTM includes a memory cell in addition to hidden states, in order to capture both long-term and short-term time dependencies.
This paper presents an AI-based solution to estimate a projectile trajectory in a GNSS-denied environment. In summary, the main contributions of this work are:
to detail an LSTM-based approach to estimate projectile positions, velocities and Euler angles from the embedded IMU, the magnetic field reference, pre-flight parameters and a time vector.
to present BALCO (BALlistic COde) [21] used to generate the dataset. This simulator provides true-to-life trajectories of several projectiles according to the ammunition parameters.
to investigate different normalization forms of the LSTM input data in order to evaluate their contribution on the estimation accuracy. For this purpose, several LSTMs are trained with different input data normalizations.
to study the impact of the local navigation frame rotation on the estimation accuracy. Rotating the local navigation frame during the training step allows having similar variation ranges along the three axes, especially for the lateral position, which is extremely small compared to the two other axes. This method shares the same goals as normalization but without any information loss.
to examine the influence of inertial sensor models on estimation accuracy. For this purpose, two inertial sensor error models are studied in order to evaluate their influence on LSTM predictions.
to compare the LSTM accuracy to a Dead-Reckoning, performed on finned mortar trajectory. Estimation methods are evaluated through error criteria based on the Root Mean Square Error and the impact point error.
The outline of the paper is as follows. Section 2 presents an introduction to projectile navigation, AI applications in the military field and to the LSTM basics. Section 3 focuses on the projectile trajectory dataset and LSTM specifications. Section 4 presents the data pre-processing; the input data normalization and the local navigation frame rotation during training. Finally, Section 5 presents projectile trajectory estimation results by analyzing the influence of the local navigation frame rotation, input data normalization and sensor model on estimation accuracy.

2. Related Work

This part presents the traditional projectile navigation methods and the sensors used, the applications of artificial intelligence for defense and the LSTM operating principle.

2.1. Model-Based Projectile Trajectory Estimation

Projectile navigation requires sensors able to withstand extreme conditions (acceleration shocks around 50,000 g along the longitudinal axis, high rotation rates around 15,000 rpm for a 155 mm shell) while being relatively small and inexpensive due to the space and cost constraints imposed on projectiles [22,23,24]. For this purpose, projectile navigation mainly uses IMUs (Inertial Measurement Units) composed of accelerometers, gyrometers and magnetometers, as well as GNSS (Global Navigation Satellite System) data. On the one hand, the IMU measurement integration, performed at high frequency (∼1000 Hz), provides an accurate short-term projectile trajectory estimation but deviates in the long term due to sensor drift [1]. On the other hand, GNSS receivers provide accurate long-term position information at a significantly lower frequency (∼10 Hz) but can be easily spoofed and jammed [5,6,7]. Due to their evident complementarity, IMUs and GNSS are classically fused by different types of Kalman filters for trajectory estimation, as in [2,3,4].
One challenge with high-speed spinning projectiles is to accurately estimate attitudes. Depending on projectile specifications, different methods can be considered, as in [25,26] by exploiting GNSS signals or in [9,22] by using accelerometers, gyrometers or magnetometers. Nevertheless, gyrometers are commonly omitted because, under cost constraints, many of them saturate and do not survive the launch phase. Therefore, magnetometers, which are less expensive and able to withstand high accelerations, are often used [27,28,29] with different kinds of Kalman filters (Extended Kalman Filter, Adaptive Extended Kalman Filter, Mixed Extended-Unscented Filter).
As mentioned in the introduction (see Section 1), GNSS signals are easily spoofed and jammed. Therefore, more and more approaches estimate a projectile trajectory in a GNSS-denied environment using only inertial measurements, as in [8,30,31,32].

2.2. AI-Based Trajectory Estimation

AI methods are increasingly used in the military field especially for:
  • Surveillance and target recognition where machine learning algorithms applied to computer vision detect, identify and track objects of interest.
    For example, the Maven project, presented by the US Department of Defense (DoD), focuses on automatic target identification and localization from images collected by Aerial Vehicles [11].
  • Predictive maintenance to establish the optimal time to change a part of a system, as the US Army does on F-16 aircraft [12].
  • Military training where AI is used in training simulation software to improve efficiency through various scenarios, such as AIMS (Artificial Intelligence for Military Simulation) [13].
  • Analysis and decision support to extract and deal with relevant elements in an information flow, to analyze a field or to predict events. The Defense Advanced Research Projects Agency (DARPA) aims to equip US Army helicopter pilots with augmented reality helmets to support them in operations [14].
  • Cybersecurity, as military systems are strongly sensitive to cyberattacks leading to loss and theft of critical information. To this end, the DeepArmor program from SparkCognition uses AI to protect networks, computer programs and data from cyber threats by detecting and blocking attacks [15].
Although widely integrated into military development programs, AI is rarely applied to projectile trajectory estimation, even though some kinds of networks, such as recurrent networks, are perfectly adapted to this task.

2.2.1. Recurrent Neural Networks (RNNs)

Recurrent Neural Networks (RNNs) are a class of neural networks perfectly adapted for time series prediction [19,20] as they exploit feedback loops, i.e., an RNN cell output at the previous time is used as input at the current time.
As illustrated in Figure 1, an RNN exploits a time series $x = [x_0, x_1, \dots, x_\tau] \in \mathbb{R}^{\tau \times F}$ of length $\tau$ with F input features to predict an output y. Each prediction $y_t$ is determined by one RNN cell from the current input $x_t$ and the previous output $h_{t-1}$, also called the hidden state, which memorizes past information [19,20].
Vanilla RNN, the simplest RNN structure, suffers from gradient vanishing and explosion during the training step [33,34]. In the gradient vanishing case, backpropagation from the last layer to the first layer leads to a gradient reduction. Then, the first layer weights are no longer updated during training and the Vanilla RNN does not learn any features. In the gradient explosion case, gradients become increasingly large leading to huge weight updates and thus resulting in Vanilla RNN divergence.
Moreover, to predict $y_t$ at timestamp t, the Vanilla RNN uses only the input $x_t$ at the current time and the hidden state $h_{t-1}$ at the previous time, containing short-term past features. For this reason, the Vanilla RNN is ineffective at memorizing long-term past events. To overcome these issues, memory cells are added to the Vanilla RNN, forming the Long Short-Term Memory (LSTM) [19].

2.2.2. Long Short-Term Memory Cell

Based on the recurrent network overview presented above, this paragraph focuses on the LSTM cell operating principle. An LSTM is composed of several cells to deal with short- and long-term memory. As shown in Figure 2, to predict $y_t$ at timestamp t, an LSTM uses the input data $x_t$ at the current time, the hidden state $h_{t-1}$ at the previous time to memorize short-term past events, and the memory cell state $c_{t-1}$ at the previous time to memorize long-term past events. An LSTM cell is composed of three gates:
the forget gate filters, through a Sigmoid function $\sigma$, the data contained in the concatenation of $x_t$ and $h_{t-1}$. Data are forgotten for values close to 0 and memorized for values close to 1. The forget gate model is:
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$$
the input gate extracts relevant information from $[h_{t-1}, x_t]$ by applying a Sigmoid $\sigma$ and a Tanh function. The input gate is represented by the following:
$$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i), \qquad \tilde{C}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c).$$
The memory cell $c_t$ is updated from the forget gate $f_t$, the input gate $i_t$ and the candidate $\tilde{C}_t$, to memorize pertinent data:
$$c_t = f_t \times c_{t-1} + i_t \times \tilde{C}_t$$
the output gate defines the next hidden state $h_t$ containing information about previous inputs. The hidden state $h_t$ is updated with the memory cell $c_t$ normalized by a Tanh function and $[h_{t-1}, x_t]$ normalized by a Sigmoid function:
$$h_t = \sigma(W_h \cdot [h_{t-1}, x_t] + b_h) \times \tanh(c_t)$$
with $W_{(.)}$ and $b_{(.)}$ the different gate weights and biases.
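As an illustration, the gate equations above can be sketched directly in NumPy. The dimensions (16 input features, 8 hidden units) and the random weights below are illustrative placeholders, not the trained network of this paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step: forget (f), input (i), candidate (c)
    and output (h) gates acting on the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])              # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])             # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])             # input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])         # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde             # memory cell update
    h_t = sigmoid(W["h"] @ z + b["h"]) * np.tanh(c_t)  # new hidden state
    return h_t, c_t

# toy dimensions: F = 16 input features, H = 8 hidden units
rng = np.random.default_rng(0)
F, H = 16, 8
W = {k: 0.1 * rng.standard_normal((H, H + F)) for k in "fich"}
b = {k: np.zeros(H) for k in "fich"}
h, c = np.zeros(H), np.zeros(H)
for t in range(20):                                # a Seq_len = 20 window
    h, c = lstm_cell(rng.standard_normal(F), h, c, W, b)
```

Each hidden state component stays in (−1, 1), since it is the product of a Sigmoid and a Tanh output.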
Currently, few works have applied recurrent networks in the military context. They are commonly used for aircraft navigation [16,17], vehicle trajectory estimation [35], maritime route prediction [36] or human motion prediction [18,37]. It is, however, interesting to mention [38], which focused on projectile trajectory estimation based on LSTMs trained on incomplete and noisy radar measurements.

3. Problem Formulation

This part presents the projectile fire simulation dataset generated by BALCO (BALlistic COde) [21] and the LSTM input data used to estimate projectile trajectories.

3.1. The Projectile Trajectory Dataset BALCO (BALlistic COde)

Results reported in this paper exploit a projectile fire dataset generated by BALCO [21,39], a high-fidelity projectile trajectory simulator based on motion equations with six to seven degrees of freedom, discretized by a seventh-order Runge-Kutta method. BALCO enables the consideration of different earth models (flat earth, spherical, ellipsoidal), different atmospheric models (standard atmosphere or defined by the user) and different aerodynamic models (axisymmetric or non-axisymmetric projectiles, aerodynamic coefficients described in correspondence tables or polynomials). BALCO's accuracy has been validated against the reference program PRODAS (Projectile Rocket Ordnance Design and Analysis System) by considering different projectile types, various initial conditions and different meteorological conditions.
In order to estimate projectile trajectories, three reference frames are considered.
The local navigation frame n (black frame in Figure 3) tangent to the Earth and assumed fixed during the projectile flight.
The body frame b (red frame in Figure 3), which is an ideal hypothetical frame placed exactly at the projectile gravity center, in which the IMU must be placed, providing perfect inertial measurements.
The sensor frame s (green frame in Figure 3) rigidly fixed to the projectile and misaligned with the projectile gravity center, considered as the frame where the inertial measurements are performed.
Results reported in this paper are applied to the estimation of a finned mortar trajectory. The finned projectile dataset, generated by BALCO, includes 5000 fire simulations, each of which contains:
  • the inertial measurements in the body frame b and in the sensor frame s, i.e., gyrometer $\omega \in \mathbb{R}^3$, accelerometer $a \in \mathbb{R}^3$ and magnetometer $h \in \mathbb{R}^3$ measurements. Three kinds of inertial measurements are available:
    the Perfect IMU measurements performed in the body frame b (red frame in Figure 3), in the ideal case where the three inertial sensors are perfectly aligned with the projectile gravity center and no sensor error model is applied, providing ideal inertial measurements, i.e., without any noise or bias. These measurements are not exploited in this work but are necessary to generate realistic inertial data.
    the IMU measurements performed in the sensor frame s (green frame in Figure 3): issued from the Perfect IMU measurements where a sensor error model is added. This error model, specific to each sensor axis, includes a misalignment, a sensitivity factor, a bias and a noise (assumed zero mean white Gaussian noise). Thus, this measurement accurately models an IMU embedded in a finned projectile.
    the IMU DYN measurements performed in the sensor frame s (green frame in Figure 3): derived from the IMU measurements by applying a transfer function to each sensor. For each sensor, the IMU DYN measurements are modeled by:
    $$y_{sensor,IMU\,DYN} = \frac{1}{1 + as + bs^2}\, y_{sensor,IMU}$$
    with $y_{sensor,IMU}$ the IMU measurement of the considered sensor, $y_{sensor,IMU\,DYN}$ the corresponding IMU DYN measurement, and a and b the coefficients of the sensor transfer function defined via BALCO. This sensor model represents the response of the three sensors over their operating range.
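To illustrate the effect of such a second-order transfer function, $1/(1 + as + bs^2)$ can be simulated by integrating the equivalent ODE $b\ddot{y} + a\dot{y} + y = u$. The coefficients below are illustrative placeholders, not BALCO's values; only the sampling period matches the paper:

```python
import numpy as np

def second_order_response(u, dt, a, b):
    """Integrate b*y'' + a*y' + y = u(t), i.e. the transfer function
    1/(1 + a*s + b*s^2), with a semi-implicit Euler scheme."""
    y = np.zeros_like(u, dtype=float)
    ydot = 0.0
    for k in range(1, len(u)):
        yddot = (u[k - 1] - y[k - 1] - a * ydot) / b
        ydot += yddot * dt
        y[k] = y[k - 1] + ydot * dt
    return y

# unit-step response: the DC gain of 1/(1 + a*s + b*s^2) is 1,
# so the filtered measurement eventually settles on the ideal one
dt = 1e-3                       # IMU sampling period used in the paper
u = np.ones(20000)              # 20 s of constant input
y = second_order_response(u, dt, a=0.05, b=1e-4)
```

The transient at the start of the response is precisely the sensor dynamics that distinguish IMU DYN from plain IMU measurements.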
  • the magnetic field reference $h^n \in \mathbb{R}^3$ in the local navigation frame n, assumed constant during the projectile flight.
  • flight parameters, which are, in the case of a finned projectile, the fin angle $\delta_f$ used to stabilize the projectile, the initial velocity $v_0$ at barrel exit and the barrel elevation angle $\alpha$, which is relatively large in order to obtain ballistic trajectories with short ranges.
  • a time vector $k\Delta t$, where $\Delta t$ is the IMU sampling period: $\Delta t = 1 \times 10^{-3}$ s.
  • the reference trajectory, i.e., the projectile position $p \in \mathbb{R}^3$, velocity $v \in \mathbb{R}^3$ and Euler angles $\Psi \in \mathbb{R}^3$ in the local navigation frame n at the IMU frequency. This trajectory is used to evaluate the LSTMs' accuracy and to compute errors.

3.2. Data Characteristics and LSTM Requirements

The LSTM predictions at time t are obtained from three-dimensional input data of size $(Batch_{size}, Seq_{len}, In_{Features})$, with $Batch_{size}$ the number of sequences considered, $Seq_{len}$ the number of time steps in the sequence and $In_{Features}$ the number of features describing each time step. The input features are $In_{Features} = (M, P, T) \in \mathbb{R}^{16}$, as follows:
-
$M \in \mathbb{R}^{12}$ the inertial data, including the IMU or IMU DYN measurements in the sensor frame s and the reference magnetic field $h^n \in \mathbb{R}^3$ in the local navigation frame n presented in Section 3.1,
-
$P \in \mathbb{R}^3$ the flight parameters. In the case of a finned projectile, the three flight parameters are the fin angle $\delta_f$, the initial velocity $v_0$ and the barrel elevation angle $\alpha$,
-
$T \in \mathbb{R}^1$ the time vector, such that $T = k\Delta t$ with k the considered time step and $\Delta t$ the IMU sampling period.
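To make the input layout concrete, the following sketch assembles a $(Batch_{size}, Seq_{len}, In_{Features})$ tensor from M, P and T. The random values and the batch size are placeholders; only the window length and sampling period follow the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, seq_len, dt = 4, 20, 1e-3   # illustrative batch; dt from the paper

# M: 9 IMU channels (gyro, accel, magneto) + 3 magnetic field reference
M = rng.standard_normal((batch_size, seq_len, 12))
# P: fin angle, initial velocity, elevation angle, constant over one shot
P = np.repeat(rng.standard_normal((batch_size, 1, 3)), seq_len, axis=1)
# T: time vector k*dt for each step of the window
T = np.broadcast_to((np.arange(seq_len) * dt)[None, :, None],
                    (batch_size, seq_len, 1))

X = np.concatenate([M, P, T], axis=2)   # (Batch_size, Seq_len, 16)
```

The flight parameters are repeated along the time axis because they are fixed per shot, while the time feature varies within the window.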
Various LSTMs are trained, differing in the output features learned. According to the input data of size $(Batch_{size}, Seq_{len}, In_{Features})$, the LSTMs estimate a projectile trajectory modeled by a vector of size $(Batch_{size}, Out_{Features})$, where $Out_{Features}$ is the number of output features, either 9 or 3 depending on the type of LSTM considered. Thus, the following notations are used:
-
$LSTM_{ALL}$, trained to estimate 9 output features: the projectile position $p \in \mathbb{R}^3$, velocity $v \in \mathbb{R}^3$ and Euler angles $\Psi \in \mathbb{R}^3$ in the navigation frame n.
-
$LSTM_{POS}$, trained to estimate 3 output features: the projectile position $p \in \mathbb{R}^3$ expressed in the navigation frame n.
-
$LSTM_{VEL}$, trained to estimate 3 output features: the projectile velocity $v \in \mathbb{R}^3$ expressed in the navigation frame n.
-
$LSTM_{ANG}$, trained to estimate 3 output features: the projectile Euler angles $\Psi \in \mathbb{R}^3$ in the navigation frame n.

4. LSTM Input Data Preprocessing

This section details two input data pre-processing methods in order to study their influence on estimation accuracy. To manage the different projectile dynamics along the three navigation axes, two data preprocessing methods are investigated: LSTM input data normalization and local navigation frame rotation, both of which rescale the components of a 3D quantity to similar variation ranges. To this end, the LSTMs presented in Section 3.2 come in 8 versions, reported in Table 1, to study the influence on estimation accuracy of the Min/Max $MM(.)$ and Standard Deviation $STD(.)$ normalizations and of the local navigation frame rotation.

4.1. LSTM Input Data Normalization

Network input data normalization is a preprocessing approach that rescales the input data to similar variation ranges while preserving the distribution and ratios of the original data. It prevents some input features from having a greater influence than others during training and improves gradient backpropagation convergence. In other words, input data with different ranges can lead to lower estimation performance: small input values have little influence during prediction, so the network weights are updated according to the large input values, which can lead to significant weight updates and therefore slower convergence or convergence to a local minimum.
According to Table 1, two kinds of normalization are used:
  • Min/Max normalization $MM(.)$: $x_{MM} = \frac{x - x_{min}}{x_{max} - x_{min}}$, with $x_{max}$ and $x_{min}$ the maximum and minimum of x.
  • Standard Deviation normalization $STD(.)$: $x_{STD} = \frac{x - \mu_x}{\sigma_x}$, with x the quantity to normalize, $\mu_x = \mu(x)$ its mean and $\sigma_x = \sigma(x)$ its standard deviation. Thus, $x_{STD}$ has zero mean and unit standard deviation.
In order to study the impact of input data normalization on estimation accuracy, versions V 2 and V 4 use normalization by features while versions V 3 , V 5 , V 7 and V 8 use normalization for all features. Moreover, the normalization factors x m a x , x m i n , μ x and σ x are computed before the training step on the training dataset, as in the following:
$$x_{max} = \frac{1}{N_{sim}} \sum_{i=1}^{N_{sim}} \max(\chi_i), \qquad x_{min} = \frac{1}{N_{sim}} \sum_{i=1}^{N_{sim}} \min(\chi_i)$$
$$\mu_x = \frac{1}{N_{sim}} \sum_{i=1}^{N_{sim}} \mu(\chi_i), \qquad \sigma_x = \frac{1}{N_{sim}} \sum_{i=1}^{N_{sim}} \sigma(\chi_i)$$
with $N_{sim}$ the number of simulations in the training dataset and $\chi_i$ the considered quantities of simulation i, i.e., $\chi_i = (M, P, T)$ for versions $V_3$, $V_5$, $V_7$ and $V_8$, and $\chi_i = M$, $\chi_i = P$ or $\chi_i = T$ for versions $V_2$ and $V_4$.
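The two normalizations and the training-set averaging of their factors can be sketched as follows; the synthetic Gaussian data stand in for the actual simulation quantities:

```python
import numpy as np

def minmax_norm(x, x_min, x_max):
    return (x - x_min) / (x_max - x_min)        # MM(.) normalization

def std_norm(x, mu, sigma):
    return (x - mu) / sigma                     # STD(.) normalization

# normalization factors averaged over the training simulations,
# as in the equations above (synthetic data as a stand-in)
rng = np.random.default_rng(0)
train = [rng.normal(5.0, 2.0, 1000) for _ in range(50)]
x_max = np.mean([chi.max() for chi in train])
x_min = np.mean([chi.min() for chi in train])
mu_x = np.mean([chi.mean() for chi in train])
sigma_x = np.mean([chi.std() for chi in train])

z = std_norm(np.concatenate(train), mu_x, sigma_x)
```

Because the factors are frozen before training, the same $x_{min}$, $x_{max}$, $\mu_x$ and $\sigma_x$ are reused at test time on unseen trajectories.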

4.2. Local Navigation Frame Rotation

The local navigation frame rotation rotates the local navigation frame n by a fixed angle $\gamma$ (giving the local rotated navigation frame $n_\gamma$), such that $x_\gamma = R_\gamma x$, with $x \in \mathbb{R}^3$ defined in n, $x_\gamma \in \mathbb{R}^3$ expressed in $n_\gamma$ and $R_\gamma \in SO(3)$ the transition matrix from the local navigation frame n to the local rotated navigation frame $n_\gamma$, as follows:
$$R_\gamma = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}.$$
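The composed rotation above can be sketched in NumPy; the angle below is an illustrative value, not the one actually tuned on the paper's dataset:

```python
import numpy as np

def R_gamma(gamma):
    """Transition matrix Rz(g)·Ry(g)·Rx(g) from the navigation frame n
    to the rotated frame n_gamma, with the same fixed angle on each axis."""
    c, s = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    return Rz @ Ry @ Rx

gamma = np.deg2rad(30.0)                 # illustrative angle
R = R_gamma(gamma)
x_rot = R @ np.array([1.0, 0.0, 0.0])    # a quantity expressed in n_gamma
```

Since $R_\gamma$ is a rotation, the transformation is lossless: norms are preserved and the estimate is mapped back to n with $R_\gamma^\top$.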
Rotating the navigation frame modifies the three components of a quantity $x \in \mathbb{R}^3$ expressed in n so that they span similar variation ranges along the three axes. This method shares the goals of normalization but without any information loss, and ensures that the LSTMs adequately estimate a quantity with small magnitudes along one axis even though the variations are considerably larger along the other two.
As illustrated in Figure 4, the variation range of the projectile position along the y-axis is significantly smaller than along the x- and z-axes: the position varies by several kilometers along the x- and z-axes but only by a few meters along the y-axis. Expressing the position in the local rotated navigation frame $n_\gamma$ provides similar amplitudes along the three axes.
All quantities expressed in the local navigation frame are rotated, i.e., the projectile position p, velocity v and Euler angles $\Psi$. The angle $\gamma$ is fixed for all trajectories in the dataset and is the same for the position, velocity and Euler angles; it is determined from the data used in this paper, in particular so that the three position components, like the velocity components, have similar magnitudes. During the training step, labels are expressed in the local rotated navigation frame $n_\gamma$ and the LSTMs predict trajectories in $n_\gamma$. During the test step, the LSTMs estimate projectile trajectories in $n_\gamma$, and the estimates are then moved back to the initial local navigation frame n.

5. Results and Analysis

This part of the paper reports the LSTM results applied to finned projectiles. A first section focuses on the influence of the normalization and the local navigation frame rotation on the estimation accuracy for short training runs. A second section validates the LSTMs on a large dataset by focusing on the impact of the local navigation frame rotation and the IMU model.
The LSTMs’ performance is evaluated against a classical navigation algorithm, i.e., Dead-Reckoning. This algorithm integrates the gyrometer $\omega$ and accelerometer a measurements to estimate, at each discrete time k:
$$R_k = R_{k-1} \exp([\omega_k \Delta t]_\times), \quad v_k = v_{k-1} + (R_{k-1} a_k + g)\Delta t, \quad p_k = p_{k-1} + v_{k-1}\Delta t + \tfrac{1}{2}(R_{k-1} a_k + g)\Delta t^2,$$
with $R_k \in SO(3)$ the rotation matrix from the sensor frame s to the local navigation frame n, $g \in \mathbb{R}^3$ the constant gravity vector, $p_k \in \mathbb{R}^3$ and $v_k \in \mathbb{R}^3$, respectively, the projectile position and velocity, and $[.]_\times$ the skew-symmetric matrix operator. This algorithm is generally used in the prediction step of Kalman filters for trajectory estimation, as presented in [2,3,4,8,24,40].
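A minimal sketch of this Dead-Reckoning integration, using the SO(3) exponential map (Rodrigues formula) for the attitude update; the z-down gravity convention g = [0, 0, 9.81] is an assumption for illustration:

```python
import numpy as np

def skew(w):
    """[w]_x such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula: matrix exponential of a rotation vector phi."""
    angle = np.linalg.norm(phi)
    S = skew(phi)
    if angle < 1e-12:
        return np.eye(3) + S
    return (np.eye(3) + np.sin(angle) / angle * S
            + (1.0 - np.cos(angle)) / angle**2 * (S @ S))

def dead_reckoning_step(R, v, p, omega, acc, dt, g):
    acc_n = R @ acc + g                        # specific force in n, plus gravity
    p_new = p + v * dt + 0.5 * acc_n * dt**2   # position update
    v_new = v + acc_n * dt                     # velocity update
    R_new = R @ so3_exp(omega * dt)            # attitude update
    return R_new, v_new, p_new

# sanity check: zero specific force and zero rotation rate -> free fall
g = np.array([0.0, 0.0, 9.81])                 # assumed z-down convention
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
dt = 1e-3                                      # IMU sampling period
for _ in range(1000):                          # 1 s of flight
    R, v, p = dead_reckoning_step(R, v, p, np.zeros(3), np.zeros(3), dt, g)
```

After one second of free fall the integrator recovers v = g·t and p = g·t²/2, while any sensor bias or noise would accumulate at the same rate, which is precisely the drift the LSTMs are meant to avoid.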

5.1. Impact of the Input Data Normalization and the Local Navigation Frame Rotation

This section reports the estimated trajectories of a finned projectile according to $LSTM_{ALL}$, $LSTM_{POS}$, $LSTM_{VEL}$ and $LSTM_{ANG}$, each in the 8 versions $V_1$–$V_8$ reported in Table 1. The training parameters are summarized in Table 2.
$Seq_{len}$ corresponds to 20 samples, representing a window of 0.02 s, since the sensor sampling period is $\Delta t = 1 \times 10^{-3}$ s. This parameter is adjusted according to the input data used.

5.1.1. Qualitative Validation: One Finned Projectile Fire Simulation

Figure 5 focuses on the estimated positions and orientation for one projectile shot. For readability reasons, three estimation methods are first compared: the Dead-Reckoning (9), $LSTM_{ALL,V_1}$ and $LSTM_{ALL,V_6}$ (local navigation frame rotation).
As shown in Figure 5, positions estimated by the LSTMs are significantly more accurate than the Dead-Reckoning. Nevertheless, LSTMs are only accurate in estimating the pitch and yaw angle. Errors in the roll angle estimation are due to the finned projectile rotation rate. The LSTMs fail to fully capture the roll angle dynamics. Moreover, the local navigation frame rotation improves projectile position estimation but slightly degrades pitch angle estimation.

5.1.2. Quantitative Evaluation: Analysis on the Whole Test Dataset

To validate the previous observations, $LSTM_{ALL,POS,VEL,ANG,V_{1-8}}$ are evaluated on the $N_{sim}$ simulations of the test dataset according to two criteria based on the Root Mean Square Error (RMSE), defined as $RMSE_x = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \left( x_{k,ref} - \hat{x}_k \right)^2}$, with $\hat{x}$ the estimate, $x_{ref}$ the reference and N the number of samples for one simulation.
The two evaluation criteria are as follows:
  • Success Rate $C_1$: the number of simulations where an LSTM RMSE is strictly smaller than the Dead-Reckoning RMSE.
  • Error Rate $C_2$: the percentage of LSTM error relative to the combined LSTM and Dead-Reckoning errors.
    $$C_1 = \sum_{k=1}^{N_{sim}} \left( RMSE_{LSTM} < RMSE_{DR} \right), \qquad C_2 = \frac{100}{N_{sim}} \sum_{k=1}^{N_{sim}} \frac{RMSE_{LSTM}}{RMSE_{LSTM} + RMSE_{DR}}$$
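The two criteria can be sketched as follows, with per-simulation RMSE values as inputs (the numbers below are illustrative):

```python
import numpy as np

def rmse(x_ref, x_hat):
    """RMSE over the N samples of one simulation."""
    return np.sqrt(np.mean((np.asarray(x_ref) - np.asarray(x_hat)) ** 2))

def success_and_error_rates(rmse_lstm, rmse_dr):
    """C1: count of simulations where the LSTM strictly beats
    Dead-Reckoning; C2: relative LSTM error in percent."""
    rmse_lstm = np.asarray(rmse_lstm, dtype=float)
    rmse_dr = np.asarray(rmse_dr, dtype=float)
    c1 = int(np.sum(rmse_lstm < rmse_dr))
    c2 = 100.0 / len(rmse_lstm) * np.sum(rmse_lstm / (rmse_lstm + rmse_dr))
    return c1, c2

c1, c2 = success_and_error_rates([1.0, 1.0], [3.0, 1.0])
```

A $C_2$ below 50% means the LSTM contributes less than half of the combined error, i.e., it is more accurate than Dead-Reckoning on average.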
The position, velocity and orientation success rates $C_1$ and error rates $C_2$ are presented in Figure 6, Figure 7 and Figure 8.
Position analysis results (see Figure 6): The LSTMs outperform Dead-Reckoning for position estimation, especially along the y-axis. The normalizations affect position estimates differently: $STD(T)$, $STD(M)$, $STD(P)$ ($V_4$) and $MM(T,M,P)$ ($V_3$) are not appropriate for this application. In addition, normalization reduces accuracy, as it implies a loss of information. Finally, rotating the local navigation frame improves the accuracy according to $C_2$ along the three axes.
Velocity analysis results (see Figure 7): As previously, the LSTMs clearly outperform Dead-Reckoning for velocity estimation. The specialized networks $LSTM_{VEL}$ are slightly better than $LSTM_{ALL}$. The STD normalization for all features ($V_5$) exhibits the best results among the normalization options investigated, especially for the velocity along the z-axis. Moreover, rotating the local navigation frame ($V_6$) significantly improves the projectile velocity estimation along all three axes.
Euler angles analysis results (see Figure 8): The LSTMs deteriorate the roll angle $\phi$ estimation compared to the Dead-Reckoning but accurately estimate the yaw angle $\psi$. As previously, the $STD(T,M,P)$ ($V_5$) normalization of $LSTM_{ALL}$ exhibits the best performance for the three Euler angle estimates, as does the local navigation frame rotation.
In summary, the LSTM end-to-end estimation is particularly appropriate for projectile position and velocity estimation. Moreover, from this evaluation study, it can be concluded that specialized networks do not significantly improve the estimation accuracy and S T D ( T , M , P ) ( V 5 ) normalization is more appropriate to estimate a projectile trajectory compared to other normalizations. Finally, rotating the local navigation frame is an efficient method to optimize projectile position and velocity estimation.

5.2. Impact of Inertial Measurement Type and Local Navigation Frame Rotation on Estimation Accuracy

The dataset presented in Section 3.1 contains two kinds of inertial readings: the IMU measurements, used so far, and the IMU DYN measurements, where the sensors are characterized by a second-order model.
This section focuses on the impact of the inertial data and the navigation frame rotation on LSTM estimation accuracy. To this end, four LSTMs are trained to estimate the positions, velocities and orientations of a finned projectile: $LSTM_{IMU,V_1}$ (no rotation), $LSTM_{IMU\,DYN,V_1}$ (no rotation), $LSTM_{IMU,V_6}$ (navigation frame rotation) and $LSTM_{IMU\,DYN,V_6}$ (navigation frame rotation). The LSTM specifications are given in Table 3.

5.2.1. Impact of the Local Navigation Frame Rotation and IMU Measurement

This section focuses on $LSTM_{IMU,V_1}$ (no rotation), $LSTM_{IMU,V_6}$ (rotation) and the Dead-Reckoning algorithm (9) for position, velocity and Euler angles estimation. The network characteristics are presented in Table 3.
Figure 9 presents the average error distributions e ¯ and the corresponding standard deviations σ evaluated for positions and Euler angles, with the three estimation methods considered, such as
\bar{e} = \frac{1}{N} \sum_{k=1}^{N} \left( x_{ref,k} - \hat{x}_k \right), (12)
\sigma = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \left( \left( x_{ref,k} - \hat{x}_k \right) - \bar{e} \right)^2 }, (13)
where x_{ref,k} is the k-th reference sample, \hat{x}_k the corresponding estimate and N the number of samples in the considered simulation.
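Numerically, the two statistics above amount to a per-axis mean error and the standard deviation of the instantaneous errors around it (`error_statistics` is a hypothetical helper name):

```python
import numpy as np

def error_statistics(x_ref, x_hat):
    """Per-axis mean error (12) and standard deviation (13) over the
    N samples of one simulation."""
    e = np.asarray(x_ref) - np.asarray(x_hat)   # instantaneous errors, (N, dims)
    e_bar = e.mean(axis=0)
    sigma = np.sqrt(((e - e_bar) ** 2).mean(axis=0))
    return e_bar, sigma
```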
According to Figure 9, the Dead-Reckoning mean error dispersion (red) is very large compared to those of the LSTMs (green and blue). The LSTMs are thus well suited to estimate finned projectile positions and velocities. Focusing on p z , v x and v z , the LSTM_IMU,V1 average errors are not centered on zero, contrary to those of LSTM_IMU,V6 (rotation); the local navigation frame rotation therefore improves these estimates. As previously observed, the LSTMs fail to estimate the projectile roll angle even though the finned projectile rotation rate is low. Furthermore, the centering and dispersion of the angle errors indicate that the LSTMs struggle to estimate the projectile orientation, despite the accurate yaw angle estimates.

5.2.2. Impact of the Local Navigation Frame Rotation and IMU DYN Measurement

Figure 10 presents the distributions of the average error ē (12) for the positions, velocities and Euler angles estimated by LSTM_IMUDYN,V1 (no rotation), LSTM_IMUDYN,V6 (rotation) and the Dead-Reckoning (9).
According to Figure 10, the LSTMs accurately estimate the projectile position and velocity over a large dataset, whereas the dynamic inertial data cause the Dead-Reckoning to diverge. Furthermore, the centering of the error distributions shows that the local navigation frame rotation improves the estimation of p y , v x and v y . As in the previous orientation estimation results, both LSTMs fail to estimate the projectile roll angle ϕ.

5.2.3. Evaluation Metric

The performances of LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMUDYN,V1, LSTM_IMUDYN,V6 and the Dead-Reckoning (9) are evaluated using two criteria, computed for each simulation in the test dataset:
  • Mean Absolute Error:
    MAE = \frac{1}{N} \sum_{k=1}^{N} \left| x_k - \hat{x}_k \right|
    with x_k the reference, \hat{x}_k the estimate and N the number of samples.
  • SCORE: Number of simulations in the test dataset where the considered method obtains the smallest RMSE.
The average of each criterion is evaluated over the test dataset as:
C_{\chi} = \frac{1}{N_{sim}} \sum_{k=1}^{N_{sim}} \chi_k
with \chi_k the selected evaluation criterion computed on simulation k and N_{sim} the number of simulations in the test dataset.
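The two criteria and their averaging can be sketched as follows; `score` returning a per-method count of RMSE wins mirrors the SCORE definition above (helper names are illustrative):

```python
import numpy as np

def mae(x_ref, x_hat):
    """Mean Absolute Error over the samples of one simulation."""
    return float(np.mean(np.abs(np.asarray(x_ref) - np.asarray(x_hat))))

def rmse(x_ref, x_hat):
    """Root Mean Square Error over the samples of one simulation."""
    return float(np.sqrt(np.mean((np.asarray(x_ref) - np.asarray(x_hat)) ** 2)))

def score(references, estimates_by_method):
    """SCORE criterion: per method, the number of test simulations where
    that method achieves the smallest RMSE."""
    counts = dict.fromkeys(estimates_by_method, 0)
    for i, ref in enumerate(references):
        best = min(estimates_by_method,
                   key=lambda m: rmse(ref, estimates_by_method[m][i]))
        counts[best] += 1
    return counts
```

Averaging `mae` over all test simulations then gives C_MAE, and `score` directly gives C_SCORE.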
C_MAE criterion analysis: Figure 11 presents the MAE average C_MAE, evaluated on the whole test dataset for LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMUDYN,V1, LSTM_IMUDYN,V6 and the Dead-Reckoning.
The LSTMs accurately estimate the position and velocity with both IMU and IMU DYN measurements, with position errors of around a few meters. This criterion confirms that the local navigation frame rotation improves the p z , v x and v z estimation for LSTMs trained with IMU measurements, and the p y , v x and v y estimation for LSTMs trained with IMU DYN measurements. As expected, the LSTMs are not adapted to estimate the projectile roll angle, in contrast to the pitch and yaw angles.
C_SCORE criterion analysis: Figure 12 presents the score C_SCORE, evaluated on the whole test dataset for LSTM_IMU,V1, LSTM_IMU,V6, LSTM_IMUDYN,V1, LSTM_IMUDYN,V6 and the Dead-Reckoning.
According to Figure 12, an LSTM is an accurate approach to estimate the projectile position and velocity in a GNSS-denied environment, but it is not the optimal method for the orientation estimation. Furthermore, an LSTM is able to generalize the learned features over a large projectile fire dataset, as well as to learn and predict trajectories from different sensor models. Finally, the C_MAE and C_SCORE analyses confirm that the local navigation frame rotation is an appropriate method to optimize the estimation of p z , v x and v z for LSTMs trained with IMU measurements, and of p y , v x and v y for LSTMs trained with IMU DYN measurements.

5.2.4. Errors at Impact Point

This section focuses on the errors at the impact point, i.e., the position errors ( p x , p y ) at the final time of a shot. Figure 13a shows the impact point errors of the LSTMs with IMU and IMU DYN measurements and of the Dead-Reckoning. Figure 13b shows how these impact point errors are distributed over different error zones.
Whatever the inertial sensor model, the Dead-Reckoning impact point errors are greater than 100 m, contrary to those of the LSTMs. Focusing on IMU measurements, the majority of the LSTM_IMU,V6 (rotation) impact point errors are less than 5 m, whereas the LSTM_IMU,V1 (no rotation) errors are below 20 m. The local navigation frame rotation therefore strongly reduces the errors at the impact point. Focusing on IMU DYN measurements, the local navigation frame rotation deteriorates the estimation accuracy: 266 simulations have impact point errors below 5 m with LSTM_IMUDYN,V1, against 180 simulations with LSTM_IMUDYN,V6.
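The impact point evaluation can be sketched as follows; the zone bounds (5 m, 20 m, 100 m) match the thresholds discussed above, while the bucketing itself is an illustrative assumption about Figure 13b:

```python
import numpy as np

def impact_point_error(p_ref_final, p_hat_final):
    """Horizontal (px, py) error norm at the final time of a shot."""
    d = np.asarray(p_ref_final[:2]) - np.asarray(p_hat_final[:2])
    return float(np.linalg.norm(d))

def error_zone(err, bounds=(5.0, 20.0, 100.0)):
    """Bucket an impact error into zones (bounds assumed from the text):
    zone 0: < 5 m, zone 1: < 20 m, zone 2: < 100 m, zone 3: beyond."""
    for i, b in enumerate(bounds):
        if err < b:
            return i
    return len(bounds)
```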
In summary, an LSTM is an accurate approach to estimate the projectile position in a GNSS-denied environment, as the errors are below ten meters. These results are comparable to those obtained by commercial GNSS-guided mortars. Moreover, the local navigation frame rotation is useful with IMU measurements and minimizes the position errors.

6. Conclusions

This paper presents a deep learning approach to estimate a gun-fired projectile trajectory in a GNSS-denied environment. Long Short-Term Memory networks (LSTMs) are trained on the embedded IMU data, the magnetic field reference, flight parameters specific to the projectile (the fin angle, the initial velocity, the barrel elevation angle) and a time vector. The impact of three pre-processing choices is analyzed: the input data normalization, the local navigation frame rotation and the inertial sensor model. According to the reported results, the LSTMs accurately estimate projectile positions and velocities compared to a conventional navigation algorithm, with errors around ten meters, similar to the accuracy of GNSS-guided projectiles. Nevertheless, the LSTMs struggle to estimate the projectile orientation, especially the highly dynamic roll angle. The input data normalization brings no significant improvement, whereas the local navigation frame rotation improves the position and velocity estimation. Moreover, the results show that the LSTMs generalize the learned features over large datasets independently of the inertial sensor model considered. Based on the results reported in this paper, the next step is to implement a deep Kalman filter by considering the LSTM predictions as observations.

Author Contributions

Methodology, A.R.; Validation, A.R.; Investigation, A.R.; Writing—original draft, A.R.; Supervision, S.C., J.W. and J.-P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Groves, P.D. Principles of GNSS, inertial, and multisensor integrated navigation systems. IEEE Aerosp. Electron. Syst. Mag. 2015, 30, 26–27.
  2. Zhao, H.; Li, Z. Ultra-tight GPS/IMU integration based long-range rocket projectile navigation. Def. Sci. J. 2016, 66, 64–70.
  3. Fairfax, L.D.; Fresconi, F.E. Loosely-coupled GPS/INS state estimation in precision projectiles. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 620–624.
  4. Wells, L.L. The projectile GRAM SAASM for ERGM and Excalibur. In Proceedings of the IEEE 2000 Position Location and Navigation Symposium (Cat. No. 00CH37062), San Diego, CA, USA, 13–16 March 2000; pp. 106–111.
  5. Duckworth, G.L.; Baranoski, E.J. Navigation in GNSS-denied environments: Signals of opportunity and beacons. In Proceedings of the NATO Research and Technology Organization (RTO) Sensors and Technology Panel (SET) Symposium, 2007; pp. 1–14.
  6. Ruegamer, A.; Kowalewski, D. Jamming and spoofing of GNSS signals—An underestimated risk?! Proc. Wisdom Ages Chall. Mod. World 2015, 3, 17–21.
  7. Schmidt, D.; Radke, K.; Camtepe, S.; Foo, E.; Ren, M. A survey and analysis of the GNSS spoofing threat and countermeasures. ACM Comput. Surv. (CSUR) 2016, 48, 1–31.
  8. Roux, A.; Changey, S.; Weber, J.; Lauffenburger, J.P. Projectile trajectory estimation: Performance analysis of an Extended Kalman Filter and an Imperfect Invariant Extended Kalman Filter. In Proceedings of the 2021 9th International Conference on Systems and Control (ICSC), Caen, France, 24–26 November 2021; pp. 274–281.
  9. Combettes, C.; Changey, S.; Adam, R.; Pecheur, E. Attitude and velocity estimation of a projectile using low cost magnetometers and accelerometers. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 650–657.
  10. Fiot, A.; Changey, S.; Petit, N. Attitude estimation for artillery shells using magnetometers and frequency detection of accelerometers. Control Eng. Pract. 2022, 122, 105080.
  11. Tarraf, D.C.; Shelton, W.; Parker, E.; Alkire, B.; Carew, D.G.; Grana, J.; Levedahl, A.; Léveillé, J.; Mondschein, J.; Ryseff, J.; et al. The Department of Defense Posture for Artificial Intelligence: Assessment and Recommendations; Technical Report; Rand National Defense Research Institute: Santa Monica, CA, USA, 2019.
  12. Zeldam, S. Automated Failure Diagnosis in Aviation Maintenance Using Explainable Artificial Intelligence (XAI). Master's Thesis, University of Twente, Enschede, The Netherlands, 2018.
  13. Ustun, V.; Kumar, R.; Reilly, A.; Sajjadi, S.; Miller, A. Adaptive synthetic characters for military training. arXiv 2021, arXiv:2101.02185.
  14. Svenmarck, P.; Luotsinen, L.; Nilsson, M.; Schubert, J. Possibilities and challenges for artificial intelligence in military applications. In Proceedings of the NATO Big Data and Artificial Intelligence for Military Decision Making Specialists' Meeting, Bordeaux, France, 30 June–1 July 2018; pp. 1–16.
  15. Ventre, D. Artificial Intelligence, Cybersecurity and Cyber Defence; John Wiley & Sons: Hoboken, NJ, USA, 2020.
  16. Shi, Z.; Xu, M.; Pan, Q.; Yan, B.; Zhang, H. LSTM-based flight trajectory prediction. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
  17. Gaiduchenko, N.E.; Gritsyk, P.A.; Malashko, Y.I. Multi-Step Ballistic Vehicle Trajectory Forecasting Using Deep Learning Models. In Proceedings of the 2020 International Conference Engineering and Telecommunication (En&T), Dolgoprudny, Russia, 25–26 November 2020; pp. 1–6.
  18. Al-Molegi, A.; Jabreel, M.; Ghaleb, B. STF-RNN: Space time features-based recurrent neural network for predicting people next location. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–7.
  19. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
  20. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270.
  21. Wey, P.; Corriveau, D.; Saitz, T.A.; de Ruijter, W.; Strömbäck, P. BALCO 6/7-DoF trajectory model. In Proceedings of the 29th International Symposium on Ballistics, Edinburgh, Scotland, 9–13 May 2016; Volume 1, pp. 151–162.
  22. Liu, F.; Su, Z.; Zhao, H.; Li, Q.; Li, C. Attitude measurement for high-spinning projectile with a hollow MEMS IMU consisting of multiple accelerometers and gyros. Sensors 2019, 19, 1799.
  23. Rogers, J.; Costello, M. Design of a roll-stabilized mortar projectile with reciprocating canards. J. Guid. Control Dyn. 2010, 33, 1026–1034.
  24. Jardak, N.; Adam, R.; Changey, S. A Gyroless Algorithm with Multi-Hypothesis Initialization for Projectile Navigation. Sensors 2021, 21, 7487.
  25. Radi, A.; Zahran, S.; El-Sheimy, N. GNSS Only Reduced Navigation System Performance Evaluation for High-Speed Smart Projectile Attitudes Estimation. In Proceedings of the 2021 International Telecommunications Conference (ITC-Egypt), Alexandria, Egypt, 13–15 July 2021; pp. 1–4.
  26. Aykenar, M.B.; Boz, I.C.; Soysal, G.; Efe, M. A Multiple Model Approach for Estimating Roll Rate of a Very Fast Spinning Artillery Rocket. In Proceedings of the 2020 IEEE 23rd International Conference on Information Fusion (FUSION), Rustenburg, South Africa, 6–9 July 2020; pp. 1–7.
  27. Changey, S.; Pecheur, E.; Bernard, L.; Sommer, E.; Wey, P.; Berner, C. Real time estimation of projectile roll angle using magnetometers: In-flight experimental validation. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 371–376.
  28. Zhao, H.; Su, Z. Real-time estimation of roll angle for trajectory correction projectile using radial magnetometers. IET Radar Sonar Navig. 2020, 14, 1559–1570.
  29. Changey, S.; Beauvois, D.; Fleck, V. A mixed extended-unscented filter for attitude estimation with magnetometer sensor. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; p. 6.
  30. Schmidt, G.T. Navigation sensors and systems in GNSS degraded and denied environments. Chin. J. Aeronaut. 2015, 28, 1–10.
  31. Fiot, A.; Changey, S.; Petit, N.C. Estimation of air velocity for a high velocity spinning projectile using transverse accelerometers. In Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 8–12 January 2018; p. 1349.
  32. Changey, S.; Fleck, V.; Beauvois, D. Projectile attitude and position determination using magnetometer sensor only. In Proceedings of the Intelligent Computing: Theory and Applications III; SPIE: Orlando, FL, USA, 2005; Volume 5803, pp. 49–58.
  33. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  34. Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. Gradient flow in recurrent nets: The difficulty of learning long-term dependencies. In A Field Guide to Dynamical Recurrent Neural Networks; IEEE Press: Piscataway, NJ, USA, 2001.
  35. Park, S.H.; Kim, B.; Kang, C.M.; Chung, C.C.; Choi, J.W. Sequence-to-sequence prediction of vehicle trajectory via LSTM encoder-decoder architecture. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1672–1678.
  36. Sørensen, K.A.; Heiselberg, P.; Heiselberg, H. Probabilistic maritime trajectory prediction in complex scenarios using deep learning. Sensors 2022, 22, 2058.
  37. Barsoum, E.; Kender, J.; Liu, Z. HP-GAN: Probabilistic 3D human motion prediction via GAN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1418–1427.
  38. Hou, L.H.; Liu, H.J. An end-to-end LSTM-MDN network for projectile trajectory prediction. In Proceedings of the Intelligence Science and Big Data Engineering, Big Data and Machine Learning: 9th International Conference, IScIDE 2019, Nanjing, China, 17–20 October 2019; pp. 114–125.
  39. Corriveau, D. Validation of the NATO Armaments Ballistic Kernel for use in small-arms fire control systems. Def. Technol. 2017, 13, 188–199.
  40. Roux, A.; Changey, S.; Weber, J.; Lauffenburger, J.P. CNN-based invariant extended Kalman filter for projectile trajectory estimation using IMU only. In Proceedings of the 2021 International Conference on Control, Automation and Diagnosis (ICCAD), Grenoble, France, 3–5 November 2021; pp. 1–6.
  41. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Figure 1. RNN layer overview, roll and unroll: many-to-many representation.
Figure 2. LSTM cell operating principle composed by three gates.
Figure 3. Navigation frames (black—local navigation frame n, red—body frame b, green—sensor frame s) and flight parameters for a finned projectile (fin angle δ f , initial velocity v 0 , barrel elevation angle α ).
Figure 4. (a) Local navigation frame n and local rotated navigation frame n γ . (b) Projectile position in the local navigation frame n (blue dashed line), projectile position in the local rotated navigation frame n γ (red solid line).
Figure 5. Estimated projectile position [m] and Euler angles [rad] obtained by the Dead-Reckoning (green), L S T M A L L , V 1 (blue), L S T M A L L , V 6 (yellow) and the reference (red).
Figure 6. Position analysis: (a) Success Rate C 1 , (b) Error Rate C 2 (in %) of L S T M A L L , V 1 V 8 and L S T M P O S , V 1 V 8 .
Figure 7. Velocity analysis: (a) Success Rate C 1 , (b) Error Rate C 2 (in %) of L S T M A L L , V 1 V 8 and L S T M V E L , V 1 V 8 .
Figure 8. Orientation analysis: (a) Success Rate C 1 , (b) Error Rate C 2 (in %) of L S T M A L L , V 1 V 8 and L S T M A N G , V 1 V 8 .
Figure 9. Average position, velocity and orientation error histogram obtained by L S T M I M U , V 1 (blue), L S T M I M U , V 6 (green) and Dead-Reckoning (red).
Figure 10. Average position, velocity and orientation error histogram obtained by L S T M I M U D Y N , V 1 (blue), L S T M I M U D Y N , V 6 (green) and Dead-Reckoning (red).
Figure 11. Position, velocity and orientation MAE average C M A E obtained by L S T M I M U , V 1 , L S T M I M U , V 6 , L S T M I M U D Y N , V 1 , L S T M I M U D Y N , V 6 and the Dead-Reckoning.
Figure 12. Position, velocity and orientation score obtained by L S T M I M U , V 1 , L S T M I M U , V 6 , L S T M I M U D Y N , V 1 , L S T M I M U D Y N , V 6 and the Dead-Reckoning.
Figure 13. (a) Errors at impact point obtained by L S T M I M U , V 1 and L S T M I M U D Y N , V 1 (blue cross), L S T M I M U , V 6 and L S T M I M U D Y N , V 6 (green cross) and the Dead-Reckoning (red dot). (b) Impact point error location.
Table 1. Version specifications: Influence of the normalization and the local navigation frame rotation.
Name | Normalization              | Rotation
V1   | No                         | No
V2   | MM(T), MM(M), MM(P)        | No
V3   | MM(T, M, P)                | No
V4   | STD(T), STD(M), STD(P)     | No
V5   | STD(T, M, P)               | No
V6   | No                         | Yes
V7   | MM(T, M, P)                | Yes
V8   | STD(T, M, P)               | Yes
Table 2. Training characteristics of L S T M A L L , P O S , V E L , A N G , V 1 8 .
Dataset    | Training dataset: 100 simulations (validation: 10 simulations)
           | Test dataset: 20 simulations
Input data | Batch size: 64 (Seq_len: 20 timesteps)
           | Input features: In_Features = (M, P, T) ∈ R^16 (with IMU measurements)
           | Cost function: Mean Squared Error (MSE)
Training   | Optimization algorithm: ADAM [41] (learning rate: 1 × 10^−4)
           | LSTM layers: 2 (hidden units: 64–128)
Table 3. Training characteristics of L S T M I M U , V 1 , L S T M I M U D Y N , V 1 , L S T M I M U , V 6 , L S T M I M U D Y N , V 6 .
Dataset    | Training dataset: 4000 simulations (validation: 400 simulations)
           | Test dataset: 400 simulations
LSTM name  | No normalization & no rotation: LSTM_IMU,V1 (with IMU measurements), LSTM_IMUDYN,V1 (with IMU DYN measurements)
           | No normalization & rotation: LSTM_IMU,V6 (with IMU measurements), LSTM_IMUDYN,V6 (with IMU DYN measurements)
Input data | Batch size: 64 (Seq_len: 20 timesteps)
           | Input features: In_Features = (M, P, T) ∈ R^16
           | Cost function: Mean Squared Error (MSE)
Training   | Optimization algorithm: ADAM [41] (learning rate: 1 × 10^−4)
           | LSTM layers: 2 (hidden units: 64–128)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
