LSTM-Based Projectile Trajectory Estimation in a GNSS-Denied Environment

This paper presents a deep learning approach to estimate a projectile trajectory in a GNSS-denied environment. For this purpose, Long Short-Term Memory networks (LSTMs) are trained on projectile fire simulations. The network inputs are the embedded Inertial Measurement Unit (IMU) data, the magnetic field reference, flight parameters specific to the projectile and a time vector. This paper focuses on the influence of LSTM input data pre-processing, i.e., normalization and navigation frame rotation, which rescale the 3D projectile data onto similar variation ranges. In addition, the effect of the sensor error model on the estimation accuracy is analyzed. The LSTM estimates are compared to a classical Dead-Reckoning algorithm, and the estimation accuracy is evaluated via multiple error criteria and the position errors at the impact point. The results, presented for a finned projectile, clearly show the contribution of Artificial Intelligence (AI), especially for the projectile position and velocity estimation. Indeed, the LSTM estimation errors are reduced compared to a classical navigation algorithm, and are comparable to those of GNSS-guided finned projectiles.


Introduction
Projectile navigation is mainly based on IMU (Inertial Measurement Unit) and GNSS (Global Navigation Satellite System) measurements due to the high dynamic constraints imposed on projectiles and the low-cost sensor requirements. Classically, IMU and GNSS measurements are combined with Kalman filters to estimate a trajectory: the IMU measurements are integrated to predict the trajectory, which is then corrected by the GNSS receiver measurements [1][2][3][4]. Nevertheless, GNSS signals are not always available due to the environment configuration and are vulnerable to jamming and spoofing [5][6][7]. Consequently, users increasingly aim to exclude these measurements from trajectory estimation [8][9][10].
Moreover, Artificial Intelligence (AI) is increasingly used for defense applications such as surveillance, reconnaissance, tracking or navigation [11][12][13][14][15]. Indeed, AI is an interesting approach to correct model approximations, to limit the influence of incomplete or incorrect measurements or to determine complex models from system data. Therefore, this paper presents an AI-based projectile trajectory estimation method in a GNSS-denied environment using only the embedded IMU and pre-flight parameters specific to the ammunition.
Considering that a trajectory is a time series, AI provides interesting approaches for its estimation. Indeed, Recurrent Neural Networks (RNNs) are perfectly adapted to time series prediction, as illustrated in [16][17][18]. RNNs contain feedback loops, i.e., they memorize past data through hidden states to predict future data. However, the simplest form of RNN exhibits convergence issues during the training step, such as vanishing or exploding gradient problems. Therefore, another form of recurrent network is considered: the Long Short-Term Memory (LSTM) [19,20]. An LSTM includes a memory cell in addition to hidden states, in order to capture both long-term and short-term time dependencies.
This paper presents an AI-based solution to estimate a projectile trajectory in a GNSS-denied environment. In summary, the main contributions of this work are:
• to detail an LSTM-based approach to estimate projectile positions, velocities and Euler angles from the embedded IMU, the magnetic field reference, pre-flight parameters and a time vector;
• to present BALCO (BALlistic COde) [21], used to generate the dataset. This simulator provides true-to-life trajectories of several projectiles according to the ammunition parameters;
• to investigate different normalizations of the LSTM input data in order to evaluate their contribution to the estimation accuracy. For this purpose, several LSTMs are trained with different input data normalizations;
• to study the impact of the local navigation frame rotation on the estimation accuracy. Rotating the local navigation frame during the training step yields similar variation ranges along the three axes, especially for the lateral position, which is extremely small compared to the other two axes. This method shares the same goal as normalization but without any information loss;
• to examine the influence of inertial sensor models on the estimation accuracy. For this purpose, two inertial sensor error models are studied in order to evaluate their influence on the LSTM predictions;
• to compare the LSTM accuracy to a Dead-Reckoning algorithm on finned mortar trajectories. The estimation methods are evaluated through error criteria based on the Root Mean Square Error and the impact point error.
The outline of the paper is as follows. Section 2 introduces projectile navigation, AI applications in the military field and the LSTM basics. Section 3 focuses on the projectile trajectory dataset and the LSTM specifications. Section 4 presents the data pre-processing: the input data normalization and the local navigation frame rotation during training. Finally, Section 5 presents the projectile trajectory estimation results by analyzing the influence of the local navigation frame rotation, the input data normalization and the sensor model on the estimation accuracy.

Related Work
This part presents the traditional projectile navigation methods and the sensors used, the applications of artificial intelligence for defense and the LSTM operating principle.

Model-Based Projectile Trajectory Estimation
Projectile navigation requires sensors able to resist extreme conditions (acceleration shocks around 50,000 g along the longitudinal axis, high rotation rates around 15,000 rpm for a 155 mm shell) while remaining relatively small and inexpensive due to the space and cost constraints imposed on projectiles [22][23][24]. For this purpose, projectile navigation mainly uses IMUs (Inertial Measurement Units), composed of accelerometers, gyrometers and magnetometers, as well as GNSS (Global Navigation Satellite System) data. On the one hand, the IMU measurement integration, performed at high frequency (∼1000 Hz), provides an accurate short-term projectile trajectory estimation, but drifts over the long term due to sensor errors [1]. On the other hand, GNSS receivers provide accurate long-term position information at a significantly lower frequency (∼10 Hz), but can be easily spoofed and jammed [5][6][7]. Due to their evident complementarity, IMU and GNSS measurements are classically fused by different types of Kalman filters for trajectory estimation, as in [2][3][4].
One challenge with high-speed spinning projectiles is to accurately estimate attitudes. Depending on the projectile specifications, different methods can be considered, as in [25,26] by exploiting GNSS signals or in [9,22] by using accelerometers, gyrometers or magnetometers. Nevertheless, gyrometers are commonly omitted because, under cost limits, many of them saturate and do not survive the launch phase. Therefore, magnetometers, which are less expensive and able to resist high accelerations, are often used [27][28][29] with different kinds of Kalman filters (Extended Kalman Filter, Adaptive Extended Kalman Filter, Mixed Extended-Unscented Filter).
As mentioned in the introduction (see Section 1), GNSS signals are easily spoofed and jammed. Therefore, more and more approaches are proposed for GNSS-denied environments, estimating a projectile trajectory using only inertial measurements, as in [8][30][31][32].

AI-Based Trajectory Estimation
AI methods are increasingly used in the military field, especially for:
• Surveillance and target recognition, where machine learning algorithms applied to computer vision detect, identify and track objects of interest. For example, the Maven project of the US Department of Defense (DoD) focuses on automatic target identification and localization from images collected by aerial vehicles [11].
• Predictive maintenance, to establish the optimal time to change a part of a system, as the US Army does on F-16 aircraft [12].
• Military training, where AI is used in training simulation software to improve efficiency through various scenarios, such as AIMS (Artificial Intelligence for Military Simulation) [13].
• Analysis and decision support, to extract and process relevant elements in an information flow, to analyze a field or to predict events. The Defense Advanced Research Projects Agency (DARPA) aims to equip US Army helicopter pilots with augmented reality helmets to support them in operations [14].
• Cybersecurity, as military systems are highly sensitive to cyberattacks leading to loss and theft of critical information. To this end, the DeepArmor program from SparkCognition uses AI to protect networks, computer programs and data from cyber threats [15].
Although widely integrated in military development programs, AI is rarely applied to projectile trajectory estimation, even though some network types, such as recurrent networks, are perfectly adapted to this task.

Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are a class of neural networks perfectly adapted to time series prediction [19,20], as they exploit feedback loops, i.e., an RNN cell's output at the previous time step is used as an input at the current time step.
As illustrated in Figure 1, an RNN exploits a time series x = [x_0, x_1, ..., x_τ] ∈ R^{τ×F} of length τ with F input features to predict an output y. Each prediction y_t is determined by one RNN cell from the current input x_t and the previous output h_{t−1}, also called the hidden state, which memorizes past information [19,20]. The Vanilla RNN, the simplest RNN structure, suffers from gradient vanishing and explosion during the training step [33,34]. In the vanishing gradient case, backpropagation from the last layer to the first leads to a shrinking gradient; the first-layer weights are then no longer updated during training and the Vanilla RNN does not learn any features. In the exploding gradient case, gradients become increasingly large, leading to huge weight updates and thus to Vanilla RNN divergence.
Moreover, to predict y_t at timestamp t, the Vanilla RNN uses only the input x_t at the current time and the hidden state h_{t−1} at the previous time, which contains short-term past features. For this reason, the Vanilla RNN is ineffective at memorizing long-term past events. To overcome these issues, memory cells are added to the Vanilla RNN, forming the Long Short-Term Memory (LSTM) [19].

Long Short-Term Memory Cell
Based on the recurrent network overview presented above, this paragraph focuses only on the LSTM cell operating principle. An LSTM is composed of several cells to deal with short- and long-term memory. As shown in Figure 2, to predict y_t at timestamp t, an LSTM uses the input data x_t at the current time, the hidden state h_{t−1} at the previous time to memorize short-term past events, and the memory cell state c_{t−1} at the previous time to memorize long-term past events. An LSTM cell is composed of three gates:
- The forget gate filters, through a Sigmoid function σ, the data contained in the concatenation of x_t and h_{t−1}. Data are forgotten for values close to 0 and memorized for values close to 1. The forget gate model is:

f_t = σ(W_f [h_{t−1}, x_t] + b_f) (1)

- The input gate extracts relevant information from [h_{t−1}, x_t] by applying a Sigmoid σ and a Tanh function:

i_t = σ(W_i [h_{t−1}, x_t] + b_i), C̃_t = tanh(W_C [h_{t−1}, x_t] + b_C) (2)

The memory cell c_t is updated from the forget gate f_t, the input gate i_t and the candidate C̃_t, to memorize pertinent data:

c_t = f_t ⊙ c_{t−1} + i_t ⊙ C̃_t (3)

- The output gate defines the next hidden state h_t, containing information about previous inputs. The hidden state h_t is updated with the memory cell c_t normalized by a Tanh function and [h_{t−1}, x_t] normalized by a Sigmoid function:

o_t = σ(W_o [h_{t−1}, x_t] + b_o), h_t = o_t ⊙ tanh(c_t) (4)

with W_(.) and b_(.) the different gate weights and biases. Currently, few works have applied recurrent networks in the military context. They are more commonly used for aircraft navigation [16,17], vehicle trajectory estimation [35], maritime route prediction [36] or human motion prediction [18,37]. It is, however, interesting to mention [38], which focused on projectile trajectory estimation based on LSTMs trained on incomplete and noisy radar measurements.
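The four gate equations above can be sketched as a single cell step in NumPy. This is a minimal illustrative sketch: the layout with the four gate blocks stacked in one weight matrix is a common implementation convention, not a detail given in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step. W maps [h_prev, x_t] to the four stacked
    gate pre-activations (forget, input, candidate, output) and b is
    the matching bias. Returns the new hidden and cell states."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    f = sigmoid(z[0:H])             # forget gate f_t
    i = sigmoid(z[H:2*H])           # input gate i_t
    c_tilde = np.tanh(z[2*H:3*H])   # candidate cell state
    o = sigmoid(z[3*H:4*H])         # output gate o_t
    c_t = f * c_prev + i * c_tilde  # memory cell update
    h_t = o * np.tanh(c_t)          # new hidden state
    return h_t, c_t
```

Iterating this step over the τ samples of an input window, with learned W and b, yields the sequence of hidden states from which the trajectory output is predicted.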

Problem Formulation
This part presents the projectile fire simulation dataset generated by BALCO (BALlistic COde) [21] and the LSTM input data used to estimate projectile trajectories.

The Projectile Trajectory Dataset BALCO (BALlistic COde)
Results reported in this paper exploit a projectile fire dataset generated by BALCO [21,39], a high-fidelity projectile trajectory simulator based on equations of motion with six to seven degrees of freedom, integrated by a seventh-order Runge-Kutta method. BALCO enables the consideration of different Earth models (flat, spherical, ellipsoidal), different atmospheric models (standard atmosphere or user-defined) and different aerodynamic models (axisymmetric or non-axisymmetric projectiles, aerodynamic coefficients described in lookup tables or polynomials). The BALCO accuracy has been validated against the reference program PRODAS (Projectile Rocket Ordnance Design and Analysis System) by considering different projectile types, various initial conditions and different meteorological conditions.
In order to estimate projectile trajectories, three reference frames are considered:
- The local navigation frame n (black frame in Figure 3), tangent to the Earth and assumed fixed during the projectile flight.
- The body frame b (red frame in Figure 3), an ideal hypothetical frame placed exactly at the projectile center of gravity, in which a perfect IMU would be placed, providing perfect inertial measurements.
- The sensor frame s (green frame in Figure 3), rigidly fixed to the projectile and misaligned with the projectile center of gravity, i.e., the frame in which the inertial measurements are actually performed.
Results reported in this paper concern the estimation of a finned mortar trajectory. The finned projectile dataset, generated by BALCO, includes 5000 fire simulations, each of which contains:
• the inertial measurements in the body frame b and in the sensor frame s, i.e., gyrometer ω ∈ R^3, accelerometer a ∈ R^3 and magnetometer h ∈ R^3 measurements. Three kinds of inertial measurements are available:
- The Perfect IMU measurements, performed in the body frame b (red frame in Figure 3), in the ideal case where the three inertial sensors are perfectly aligned with the projectile center of gravity and no sensor error model is taken into account, providing ideal inertial measurements, i.e., without any noise or bias. These measurements are not exploited in this work but are necessary to derive realistic inertial data.
- The IMU measurements, performed in the sensor frame s (green frame in Figure 3): derived from the Perfect IMU measurements by adding a sensor error model. This error model, specific to each sensor axis, includes a misalignment, a sensitivity factor, a bias and a noise (assumed to be zero-mean white Gaussian). These measurements thus accurately model an IMU embedded in a finned projectile.
- The IMU DYN measurements, performed in the sensor frame s (green frame in Figure 3): derived from the IMU measurements by applying a transfer function to each sensor. For each sensor, the IMU DYN measurements are modeled by:

y_{sensor, IMU DYN} = 1 / (1 + a s + b s^2) · y_{sensor, IMU} (5)

with y_{sensor, IMU} the IMU measurement of the considered sensor, y_{sensor, IMU DYN} the corresponding IMU DYN measurement, and a and b the coefficients of the sensor transfer function defined via BALCO. This sensor model captures the response of the three sensors over their operating range.
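The 2nd-order sensor model of Equation (5) can be applied to simulated samples by discretizing the continuous transfer function. A sketch using a bilinear transform follows; the discretization method and the coefficient values are assumptions, since the paper only defines the continuous model and states that a and b are set via BALCO.

```python
import numpy as np
from scipy import signal

def apply_sensor_dynamics(y_imu, a, b, fs=1000.0):
    """Filter ideal IMU samples through the 2nd-order sensor model
    H(s) = 1 / (1 + a*s + b*s^2), discretized with the bilinear
    transform at the IMU sampling rate fs (Hz). y_imu holds the
    samples of one sensor axis along axis 0."""
    # Continuous-time transfer function: numerator 1, denominator b*s^2 + a*s + 1
    bz, az = signal.bilinear([1.0], [b, a, 1.0], fs=fs)
    return signal.lfilter(bz, az, y_imu, axis=0)
```

Because the bilinear transform preserves the DC gain H(0) = 1, a constant input settles back to its original value; only the transient response is shaped by a and b.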
• the magnetic field reference h^n ∈ R^3 in the local navigation frame n, assumed constant during the projectile flight.
• the flight parameters, which are, for a finned projectile, the fin angle δ_f used to stabilize the projectile, the initial velocity v_0 at barrel exit and the barrel elevation angle α, particularly important to obtain ballistic trajectories with short ranges.
• a time vector kΔ_t, where Δ_t is the IMU sampling period.
• the reference trajectory, i.e., the projectile position p ∈ R^3, velocity v ∈ R^3 and Euler angles Ψ ∈ R^3 in the local navigation frame n at the IMU frequency. This trajectory is used to evaluate the LSTM accuracy and to compute errors.

Data Characteristics and LSTM Requirements
The LSTM predictions at time t are obtained from three-dimensional input data of size (Batch_size, Seq_len, In_Features), with Batch_size the number of sequences considered, Seq_len the number of time steps per sequence and In_Features the number of features describing each time step. The input features are In_Features = (M, P, T) ∈ R^16, with:
- M ∈ R^12 the inertial data, including the IMU or IMU DYN measurements in the sensor frame s and the reference magnetic field h^n ∈ R^3 in the local navigation frame n presented in Section 3.1,
- P ∈ R^3 the flight parameters; for a finned projectile, the fin angle δ_f, the initial velocity v_0 and the barrel elevation angle α,
- T ∈ R the time vector kΔ_t.
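The input tensor described above can be assembled as follows. This is an illustrative sketch: the window stride (non-overlapping here) and the feature ordering are assumptions not specified in the text.

```python
import numpy as np

def build_lstm_inputs(omega, accel, mag, h_ref, params, dt=1e-3, seq_len=20):
    """Assemble LSTM inputs of shape (batch, seq_len, 16) from one
    simulated flight: gyro/accel/mag samples of shape (N, 3) each,
    the constant magnetic-field reference h_ref (3,), the flight
    parameters params (3,) and the time vector k*dt."""
    N = omega.shape[0]
    t = (np.arange(N) * dt)[:, None]             # time feature T
    feats = np.hstack([omega, accel, mag,        # 9 inertial channels
                       np.tile(h_ref, (N, 1)),   # magnetic reference (M)
                       np.tile(params, (N, 1)),  # flight parameters (P)
                       t])                       # -> (N, 16)
    n_win = N // seq_len                         # non-overlapping windows
    return feats[:n_win * seq_len].reshape(n_win, seq_len, 16)
```

Each window of 20 samples then corresponds to 0.02 s of flight at the 1 kHz IMU rate, matching the Seq_len choice reported later in Table 2.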

LSTM Input Data Preprocessing
This section details two input data pre-processing methods in order to study their influence on the estimation accuracy. To manage the different projectile dynamics along the three navigation axes, two preprocessing methods are investigated: the LSTM input data normalization and the local navigation frame rotation, both of which rescale each component of a 3D quantity onto similar variation ranges. To this end, the LSTMs presented in Section 3.2 come in 8 versions, reported in Table 1, to study the influence of the Min/Max MM(.) and Standard Deviation STD(.) normalizations and of the local navigation frame rotation on the estimation accuracy. Table 1. Version specifications: influence of the normalization and the local navigation frame rotation.


LSTM Input Data Normalization
Network input data normalization is a preprocessing approach that rescales input data onto similar variation ranges while preserving the distribution and ratios of the original data. It is used to prevent some input features from having a greater influence than others during training and to improve the convergence of gradient backpropagation. In other words, input features with different ranges can degrade network performance: small-valued inputs have little influence during prediction, so the network weights are updated mainly according to the large-valued inputs, which can lead to large weight updates and therefore to slower convergence, or to convergence to a local minimum.
According to Table 1, two kinds of normalization are used:

x_MM = (x − x_min) / (x_max − x_min)

with x the quantity to normalize and x_min and x_max its minimum and maximum, so that x_MM is rescaled into [0, 1], and

x_STD = (x − μ_x) / σ_x

with μ_x = μ(x) its mean and σ_x = σ(x) its standard deviation. Thus, x_STD is a quantity with zero mean and unit standard deviation.
In order to study the impact of input data normalization on the estimation accuracy, versions V_2 and V_4 use normalization per feature group, while versions V_3, V_5, V_7 and V_8 use normalization over all features. Moreover, the normalization factors x_max, x_min, μ_x and σ_x are computed before the training step on the training dataset, over its N_sim simulations, with χ_i the considered quantities of simulation i: χ_i = [M, P, T] for versions V_3, V_5, V_7 and V_8, and χ_i = M, χ_i = P or χ_i = T for versions V_2 and V_4.
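A minimal sketch of how the normalization factors could be fitted on the training set and reused at test time (the closure-based interface is an implementation choice for illustration, not the paper's):

```python
import numpy as np

def fit_normalizers(train_features):
    """Compute Min/Max and standard-deviation normalization factors
    on stacked training features of shape (N, F), as done before the
    training step. Returns two functions applying MM(.) and STD(.)."""
    x_min = train_features.min(axis=0)
    x_max = train_features.max(axis=0)
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0)
    mm = lambda x: (x - x_min) / (x_max - x_min)   # rescaled into [0, 1]
    std = lambda x: (x - mu) / sigma               # zero mean, unit std
    return mm, std
```

The same fitted factors must be applied unchanged to the test data, otherwise the test inputs would leak their own statistics into the evaluation.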

Local Navigation Frame Rotation
The local navigation frame rotation rotates the local navigation frame n by a fixed angle γ into the rotated frame n_γ, such that x_γ = R_γ x, with x ∈ R^3 defined in n, x_γ ∈ R^3 expressed in n_γ and R_γ ∈ SO(3) the transition matrix from the local navigation frame n to the local rotated navigation frame n_γ. The rotation modifies the three components of a quantity x ∈ R^3 so that they have similar orders of magnitude, i.e., similar variation ranges along the three axes. This approach ensures that the LSTMs adequately estimate a quantity with small magnitude along one axis, even though the variations are considerably larger along the other two axes.
As illustrated in Figure 4, the variation range of the projectile position along the y-axis is significantly smaller than along the x- and z-axes: the position varies over several kilometers along the x- and z-axes, but only by a few meters along the y-axis. Expressing the position in the rotated navigation frame n_γ yields similar amplitudes along the three axes. All quantities expressed in the local navigation frame are rotated, i.e., the projectile position p, velocity v and Euler angles Ψ. The angle γ is fixed for all trajectories in the dataset and is the same for the position, velocity and Euler angles; it is determined from the data used in this paper, in particular so that the three position components, and likewise the velocity components, have similar magnitudes. During the training step, the labels are expressed in the rotated frame n_γ and the LSTMs predict trajectories in n_γ. During the test step, the LSTMs estimate the trajectories in n_γ, and the estimates are then moved back to the initial local navigation frame n.
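The rotation into n_γ and the inverse mapping back to n can be sketched as follows. The rotation axis is an assumption for illustration; the paper only states that γ is fixed for the whole dataset and chosen so that the three components have similar magnitudes.

```python
import numpy as np

def rotate_frame(x, gamma, axis="x"):
    """Express a 3D quantity x of shape (N, 3), given in the local
    navigation frame n, in the rotated frame n_gamma: x_gamma = R_gamma @ x.
    The axis choice here is illustrative, not taken from the paper."""
    c, s = np.cos(gamma), np.sin(gamma)
    if axis == "x":        # mixes the small lateral y with the large z
        R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    elif axis == "z":      # mixes x and y instead
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    else:
        raise ValueError(axis)
    return x @ R.T

def unrotate_frame(x_gamma, gamma, axis="x"):
    """Move predictions back from n_gamma to n (inverse rotation)."""
    return rotate_frame(x_gamma, -gamma, axis)
```

Since R_γ ∈ SO(3), the mapping is lossless: norms are preserved and the inverse rotation exactly recovers the original components, which is the stated advantage over normalization.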

Results and Analysis
This part of the paper reports the LSTM results applied to finned projectiles. A first section focuses on the influence of the normalization and the local navigation frame rotation on the estimation accuracy for short trainings. A second section validates the LSTMs on a large dataset by focusing on the impact of the local navigation frame rotation and of the IMU model. The LSTM performances are evaluated in comparison to a classical navigation algorithm, i.e., a Dead-Reckoning. This algorithm integrates the gyrometer ω and accelerometer a measurements to estimate, at each discrete time k:

R_{k+1} = R_k exp([ω_k Δ_t]_×)
v_{k+1} = v_k + (R_k a_k + g) Δ_t (9)
p_{k+1} = p_k + v_k Δ_t

with R_k ∈ SO(3) the rotation matrix from the sensor frame s to the local navigation frame n, g ∈ R^3 the constant gravity vector, p_k ∈ R^3 and v_k ∈ R^3, respectively, the projectile position and velocity, and [.]_× the skew matrix. This algorithm is generally used in the prediction step of Kalman filters to estimate the trajectory, as presented in [2][3][4][8][24][40].
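A minimal strapdown Dead-Reckoning sketch along these lines follows. The first-order discretization and the sign convention of g are assumptions; the paper's implementation may differ in these details.

```python
import numpy as np

def dead_reckoning(omega, accel, R0, p0, v0, dt=1e-3,
                   g=np.array([0.0, 0.0, 9.81])):
    """Integrate gyro rates omega (N, 3) into the sensor-to-navigation
    rotation and accelerometer readings accel (N, 3) into velocity and
    position, starting from R0, p0, v0. Returns the position history,
    the final velocity and the final rotation."""
    def skew(w):
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])
    R, p, v = R0.copy(), p0.copy(), v0.copy()
    ps = [p.copy()]
    for w, a in zip(omega, accel):
        p = p + v * dt                     # p_{k+1} = p_k + v_k * dt
        v = v + (R @ a + g) * dt           # v_{k+1} = v_k + (R_k a_k + g) dt
        # exp([w*dt]_x) via Rodrigues' formula
        th = np.linalg.norm(w * dt)
        S = skew(w * dt)
        if th > 1e-12:
            dR = (np.eye(3) + np.sin(th) / th * S
                  + (1.0 - np.cos(th)) / th**2 * (S @ S))
        else:
            dR = np.eye(3) + S
        R = R @ dR
        ps.append(p.copy())
    return np.array(ps), v, R
```

Any bias or noise in ω and a is integrated once into v and twice into p, which is why this open-loop scheme diverges over a full flight while remaining accurate over short horizons.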

Impact of the Input Data Normalization and the Local Navigation Frame Rotation
This section reports the estimated trajectories of a finned projectile according to LSTM_ALL, LSTM_POS, LSTM_VEL and LSTM_ANG, each trained in the 8 versions V_1-V_8 presented in Section 3.2. The training parameters are summarized in Table 2. Seq_len corresponds to 20 samples, representing a window of 0.02 s, as the sensor sampling period is Δ_t = 1 × 10^−3 s. This parameter is adjusted according to the input data used. As shown in Figure 5, the positions estimated by the LSTMs are significantly more accurate than those of the Dead-Reckoning. Nevertheless, the LSTMs are only accurate in estimating the pitch and yaw angles. Errors in the roll angle estimation are due to the finned projectile rotation rate: the LSTMs fail to fully capture the roll angle dynamics. Moreover, the local navigation frame rotation improves the projectile position estimation but slightly degrades the pitch angle estimation.

Quantitative Evaluation: Analysis on the Whole Test Dataset
To validate the previous observations, LSTM_{ALL, POS, VEL, ANG, V1−8} are evaluated on the N_sim simulations of the test dataset according to two criteria based on the Root Mean Square Error:

RMSE(x) = sqrt( (1/N) Σ_{k=1}^{N} (x_k − x_{ref,k})^2 )

with x the estimate, x_ref the reference and N the number of samples in one simulation.
The two evaluation criteria are as follows:
• Success Rate C_1: the number of simulations in which the LSTM RMSE is strictly smaller than the Dead-Reckoning RMSE.
• Error Rate C_2: the LSTM error as a percentage of the Dead-Reckoning error.
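The RMSE and the two criteria could be computed as sketched below; here C_1 is reported as a percentage of simulations rather than a raw count, an assumption made for readability.

```python
import numpy as np

def rmse(x_est, x_ref):
    """Root Mean Square Error over the N samples of one simulation."""
    return np.sqrt(np.mean((x_est - x_ref) ** 2, axis=0))

def success_and_error_rate(lstm_rmse, dr_rmse):
    """C1: share of simulations where the LSTM RMSE is strictly below
    the Dead-Reckoning RMSE; C2: LSTM error as a percentage of the
    Dead-Reckoning error. Both inputs hold one RMSE per simulation."""
    c1 = np.mean(lstm_rmse < dr_rmse) * 100.0
    c2 = np.mean(lstm_rmse / dr_rmse) * 100.0
    return c1, c2
```

A C_2 value well below 100% then reads directly as "the LSTM error is that fraction of the Dead-Reckoning error".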
The position, velocity and orientation success rates C_1 and error rates C_2 are presented in Figures 6-8. Figures 5-8 have been modified for better readability. Position analysis results (see Figure 6): the LSTMs outperform the Dead-Reckoning for position estimation, especially along the y-axis. The normalizations affect the position estimates differently: STD(T), STD(M), STD(P) (V_4) and MM(T, M, P) (V_3) are not appropriate for this application. In addition, normalization leads to lower accuracy, as it implies a loss of information. Finally, rotating the local navigation frame improves the accuracy according to C_2 along the three axes.
Velocity analysis results (see Figure 7): as previously, the LSTMs clearly outperform the Dead-Reckoning for velocity estimation. The specialized networks LSTM_VEL are slightly better than LSTM_ALL. The STD normalization over all features (V_5) exhibits the best results among the normalization options investigated, especially for the velocity along the z-axis. Moreover, rotating the local navigation frame (V_6) significantly improves the projectile velocity estimation along all three axes. Euler angles analysis results (see Figure 8): the LSTMs deteriorate the roll angle φ estimation compared to the Dead-Reckoning, but accurately estimate the yaw angle ψ. As previously, the STD(T, M, P) (V_5) normalization of LSTM_ALL exhibits the best performance for the three Euler angles, as does the local navigation frame rotation. In summary, the LSTM end-to-end estimation is particularly appropriate for projectile position and velocity estimation. Moreover, from this evaluation study, it can be concluded that specialized networks do not significantly improve the estimation accuracy and that the STD(T, M, P) (V_5) normalization is more appropriate for projectile trajectory estimation than the other normalizations. Finally, rotating the local navigation frame is an efficient method to optimize the projectile position and velocity estimation.

Impact of Inertial Measurement Type and Local Navigation Frame Rotation on Estimation Accuracy
The dataset presented in Section 3.1 contains two kinds of inertial readings: the IMU measurements, used so far, and the IMU DYN measurements, where the sensors are characterized by a 2nd-order model. This section focuses on the impact of the inertial data and of the navigation frame rotation on the LSTM estimation accuracy. To this end, four LSTMs are trained to estimate the positions, velocities and orientations of a finned projectile: LSTM_{IMU, V1} (no rotation), LSTM_{IMU DYN, V1} (no rotation), LSTM_{IMU, V6} (navigation frame rotation) and LSTM_{IMU DYN, V6} (navigation frame rotation). The LSTM specifications are given in Table 3.

Impact of the Local Navigation Frame Rotation and IMU Measurement
This section focuses on LSTM_{IMU, V1} (no rotation), LSTM_{IMU, V6} (rotation) and the Dead-Reckoning algorithm (9) for position, velocity and Euler angles estimation. The network characteristics are presented in Table 3. Figure 9 presents the average error distributions ē and the corresponding standard deviations σ, evaluated for positions and Euler angles with the three estimation methods considered:

ē = (1/N) Σ_{k=1}^{N} (x_{ref,k} − x_k)

where x_ref is the reference, x the estimate and N the number of samples in the considered simulation. According to Figure 9, the Dead-Reckoning mean error dispersion (red) is very large compared to those of the LSTMs (green and blue). Thus, the LSTMs are perfectly adapted to estimate finned projectile positions and velocities. Focusing on p_z, v_x and v_z, the LSTM_{IMU, V1} average errors are not centered on zero, contrary to those of LSTM_{IMU, V6} (rotation); thus, the local navigation frame rotation improves these estimates. As previously observed, the LSTMs fail to estimate the projectile roll angle even though the finned projectile rotation rate is low. Furthermore, the centering and dispersion of the angle errors indicate that the LSTMs struggle to estimate the projectile orientation, despite the accurate yaw angle. According to Figure 10, the LSTMs accurately estimate the projectile position and velocity over a large dataset, whereas the dynamic inertial data cause the Dead-Reckoning to diverge. Furthermore, the error distribution centering shows that the local navigation frame rotation improves the estimation of p_y, v_x and v_y. As for the previous orientation estimation results, both LSTMs fail to estimate the projectile roll angle φ.

Evaluation Metric
The performance of LSTM_{IMU, V1}, LSTM_{IMU, V6}, LSTM_{IMU DYN, V1}, LSTM_{IMU DYN, V6} and the Dead-Reckoning (9) is evaluated using two evaluation criteria computed for each simulation in the test dataset, including the Mean Absolute Error (MAE), i.e., the average of the absolute estimation error over the samples of a simulation. The average of each criterion is evaluated on the dataset as:

C_χ = (1/N_sim) Σ_{i=1}^{N_sim} χ_i

with χ the selected evaluation criterion and N_sim the number of simulations in the test dataset. C_MAE criterion analysis: Figure 11 presents the MAE average C_MAE, evaluated on the whole test dataset, for LSTM_{IMU, V1}, LSTM_{IMU, V6}, LSTM_{IMU DYN, V1}, LSTM_{IMU DYN, V6} and the Dead-Reckoning.
The LSTMs accurately estimate the position and velocity, both with IMU and IMU DYN measurements, with position errors of around a few meters. This criterion confirms that the local navigation frame rotation improves the p_z, v_x and v_z estimation for LSTMs trained with IMU measurements, and the p_y, v_x and v_y estimation for LSTMs trained with IMU DYN measurements. As expected, the LSTMs are not adapted to estimate the projectile roll angle, contrary to the pitch and yaw angles. According to Figure 12, an LSTM is an accurate approach to estimate the projectile position and velocity in a GNSS-denied environment. However, an LSTM is not the optimal method for the orientation estimation. Furthermore, an LSTM is able to generalize the learned features over a large projectile fire dataset, as well as to learn and predict trajectories from different sensor models. Finally, the C_MAE and C_SCORE analysis confirms that the local navigation frame rotation is an appropriate method to optimize p_z, v_x and v_z for LSTMs trained with IMU measurements, and p_y, v_x and v_y for LSTMs trained with IMU DYN measurements.

Errors at Impact Point
This section focuses on the errors at the impact point, i.e., the position errors (p_x, p_y) at the final time of a shot. Figure 13a shows the impact point errors of the LSTMs with IMU and IMU DYN measurements and of the Dead-Reckoning. Figure 13b presents how the impact point errors are distributed over different error zones.
Whatever the inertial sensor model, the Dead-Reckoning impact point errors are greater than 100 m, contrary to those of the LSTMs. Focusing on the IMU measurements, the majority of the LSTM_{IMU, V6} (rotation) impact point errors are below 5 m, while the LSTM_{IMU, V1} (no rotation) errors are below 20 m; thus, the local navigation frame rotation strongly reduces the errors at the impact point. Focusing on the IMU DYN measurements, the local navigation frame rotation deteriorates the estimation accuracy: 266 simulations have impact point errors below 5 m with LSTM_{IMU DYN, V1}, against 180 simulations with LSTM_{IMU DYN, V6}. In summary, an LSTM is an accurate approach to estimate the projectile position in a GNSS-denied environment, as the errors are below ten meters. These results are comparable to those obtained by commercial GNSS-guided mortars. Moreover, the local navigation frame rotation is useful with IMU measurements and minimizes the position errors.
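The impact point error and the error-zone counts could be computed as follows; the zone bounds echo the 5 m, 20 m and 100 m thresholds discussed above, while the exact zones used in Figure 13b are not specified.

```python
import numpy as np

def impact_point_error(p_est, p_ref):
    """Horizontal position error (p_x, p_y) at the final sample of a
    shot, i.e., at the impact point. Both inputs have shape (N, 3)."""
    d = p_est[-1, :2] - p_ref[-1, :2]
    return np.linalg.norm(d)

def error_zone_counts(errors, bounds=(5.0, 20.0, 100.0)):
    """Count simulations per impact-error zone: [0, 5), [5, 20),
    [20, 100) and [100, inf) meters with the default bounds."""
    edges = np.concatenate([[0.0], bounds, [np.inf]])
    return np.histogram(np.asarray(errors), bins=edges)[0]
```

Applied over the whole test dataset, the zone counts reproduce the kind of summary reported in the text, e.g., how many shots land within 5 m of the reference impact point.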

Conclusions
This paper presents a deep learning approach to estimate a gun-fired projectile trajectory in a GNSS-denied environment. Long Short-Term Memory networks (LSTMs) are trained from the embedded IMU, the magnetic field reference, flight parameters specific to the projectile (the fin angle, the initial velocity, the barrel elevation angle) and a time vector. The impact of three factors is analyzed: the input data normalization, the local navigation frame rotation and the inertial sensor model. According to the reported results, the LSTMs accurately estimate the projectile positions and velocities compared to a conventional navigation algorithm, as the errors are around ten meters, similar to the GNSS-guided projectile accuracy. Nevertheless, the LSTMs struggle to estimate the projectile orientation, especially the highly dynamic roll angle. The input data normalization brings no clear benefit, while the local navigation frame rotation optimizes the position and velocity estimation. Moreover, the results prove that the LSTMs generalize the learned features on large datasets independently of the inertial sensor model considered. Based on the results reported in this paper, the next step is to implement a deep Kalman filter by considering the LSTM predictions as observations.