Maneuvering Target Tracking Using Simultaneous Optimization and Feedback Learning Algorithm Based on Elman Neural Network

Tracking maneuvering targets is a challenging problem for sensors because of the unpredictability of the target’s motion. Unlike classical statistical modeling of target maneuvers, a simultaneous optimization and feedback learning algorithm for maneuvering target tracking based on the Elman neural network (ENN) is proposed in this paper. In the feedback strategy, a scale factor is learnt to adaptively tune the dynamic model’s error covariance matrix, and in the optimization strategy, a corrected component of the state vector is learnt to refine the final state estimation. These two strategies are integrated in an ENN-based unscented Kalman filter (UKF) model called ELM-UKF. This filter can be trained online by the filter residual, innovation and gain matrix of the UKF to simultaneously achieve maneuver feedback and an optimized estimation. Monte Carlo experiments on synthesized radar data showed that our algorithm had better performance on filtering precision compared with most maneuvering target tracking algorithms.


Introduction
Target tracking is a fundamental and critical task in many sensor-based practical applications including radar-based tracking [1], sonar-based tracking [2], wireless sensor networks [3], video surveillance [4], navigation [5], and mobile robotics [6]. Tracking maneuvering targets is a challenging task because sensor systems are inevitably inaccurate and they are unaware of the uncertain external forces that may be acting on targets, so the target's dynamic properties cannot be modeled exactly.
From the viewpoint of statistics, maneuvering targets are often modeled as jump Markov linear systems (JMLS) where the maneuver of the target is modeled as a finite-state Markov chain, and its continuously varying state evolves according to an underlying model that switches among a set of operating models controlled by a Markov chain at each sampling instance [7]. Usually, classical methods for maneuvering target tracking include two main components: (i) maneuver modeling, the stochastic process assumption for an unpredictable maneuver behavior; and (ii) maneuver compensation, the correction of target state estimates to allow for the maneuver.
For maneuvering target tracking, many algorithms have been developed, and these can be grouped into three types. The first is based on maneuver dynamics modeling, built on the concept of motion-origin uncertainty; most such methods rest on a stochastic-process assumption about an unknown acceleration component, such as the Singer model [8] and the Jerk model [8].
Based on the above, a radar system tracking model can be established [1,8,9]:

X_k = f(X_{k−1}) + Γ(X_{k−1}, t_k) + ω_k,  Z_k = h(X_k) + v_k,  (1)

where f(·) is the target dynamic model; Γ(·) is the dynamic model bias caused by the target maneuver, a time-varying, nonparametric and unknown component; ω_k is the dynamic model random error with the Gaussian white noise distribution p(ω_k) ∼ N(0, Q_k); h(·) is a non-linear measurement function; and v_k is the measurement random error with another Gaussian white noise distribution p(v_k) ∼ N(0, R_k). Q_k and R_k represent the dynamic model error covariance matrix and the measurement error covariance matrix, respectively. From Equation (1) we can conclude, in short, that the dynamic model bias Γ(X_{k−1}, t_k), also called the maneuver item, can be approximated arbitrarily well by any function-approximation scheme that meets the criteria of the Stone–Weierstrass theorem [11].
The conventional methods of dynamic model modeling for maneuvering target tracking mainly involve random-process assumptions, such as the Markov process and the semi-Markov jump process [8]. One of the most popular models is the Singer model, which defines the Γ(X_{k−1}, t_k) + ω_k component in Equation (1) as a Markov process and assumes that the target acceleration a(t) is a zero-mean first-order stationary Markov process with the autocorrelation function R_a(τ) = E[a(t)a(t + τ)] = σ² e^{−α|τ|}. Based on this, the dynamic equation of the Singer model is parameterized by a maneuver factor α [8]. One of its variants, the current statistical (CS) model, is essentially a Singer model with an adaptive mean. Another extension of the Singer model, the Jerk model, treats the derivative of acceleration as a zero-mean higher-order stationary Markov process. The Singer model and its variants are in essence a priori models, since they do not use online information about the target maneuver, so they cannot be expected to be very effective across the diverse acceleration patterns of actual target maneuvers.
The IMM algorithm has been widely used in maneuvering target tracking. It uses Markov transition probabilities to switch among multiple models, automatically adjusts the filter bandwidth, and in theory can track arbitrary target maneuvers [18]; the transitions between the models are expressed as a Markov chain. Another approach to unknown covariance matrices and transition models is to rely on multiple standard models and let the filter choose the most adequate one in each time frame, which effectively models the target as a jump Markov system [7,8]. Based on multi-model interaction theory, models such as constant acceleration (CA), constant velocity (CV) and constant turning (CT) are usually used as basic models in an IMM algorithm, and recursive filters such as the Kalman filter (KF), EKF and UKF are applied to filter each model iteratively; the transition probabilities are then used to calculate the current optimal estimate and the updated covariance matrix. However, the IMM has shortcomings: it needs more models and more computation, and its limited model set makes it difficult to guarantee optimal performance.
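The Singer acceleration assumption above can be sketched numerically. The following is a minimal illustration (function names are ours, not from the paper) of the theoretical autocorrelation R_a(τ) = σ² e^{−α|τ|} and of an equivalent discrete-time first-order Markov recursion a[k+1] = ρ a[k] + w[k], with ρ = e^{−α Δt} and Var(w) = σ²(1 − ρ²) chosen so that the stationary variance matches σ².

```python
import math
import random

def singer_autocorr(tau, sigma2, alpha):
    """Theoretical autocorrelation R_a(tau) = sigma^2 * exp(-alpha*|tau|)
    of the Singer model's zero-mean first-order Markov acceleration."""
    return sigma2 * math.exp(-alpha * abs(tau))

def simulate_singer_accel(n, dt, sigma2, alpha, seed=0):
    """Discretized Singer acceleration: a[k] = rho*a[k-1] + w[k-1],
    with rho = exp(-alpha*dt) and Var(w) = sigma2*(1 - rho^2),
    which preserves the stationary variance sigma2."""
    rng = random.Random(seed)
    rho = math.exp(-alpha * dt)
    q = math.sqrt(sigma2 * (1.0 - rho * rho))
    a = [0.0] * n
    for k in range(1, n):
        a[k] = rho * a[k - 1] + q * rng.gauss(0.0, 1.0)
    return a
```

Larger α (faster maneuver decorrelation) shrinks ρ and makes the simulated acceleration forget its past more quickly, which is exactly the "maneuver factor" role described above.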
Learning-based methods that approximate the target's unknown maneuver are currently popular. In mathematical theory, the universal approximation theorem states that an artificial neural network or support vector regression can approximate any continuous function. For example, a multilayer perceptron (MLP) can be tightly coupled with an EKF to form the NEKF filter [11], which learns a function estimate of the biases within the error covariance of the sensor error; the NEKF can then be used to calibrate sensors for registration and target tracking problems. Support vector regression (SVR), another technique for linear and nonlinear function approximation, is often used for time-series prediction [18], so an SVR-based Kalman filter for maneuvering target tracking [19,20] also works well. A standard Kalman filter with a self-constructing neural fuzzy inference network (SONFIN) algorithm [15] can be trained to detect when a maneuver occurred, the magnitude of the maneuver, and so on. The Elman neural network has also been introduced into IMM filtering to form the optimized IMM-ELM algorithm [14]. The Elman neural network (ENN) is a kind of recurrent neural network with local memory nodes, which can approximate any time-varying nonlinear function, such as Γ(X_{k−1}, t_k) in Equation (1). Also, the ENN does not need large training data sets for supervised learning, and its online supervised learning mode enables the network to meet the basic requirements of this problem.
The Elman neural network is a typical dynamic recurrent neural network (RNN) proposed by Elman in 1990 [25,26]. As shown in Figure 2, the ENN consists of an input layer, a hidden layer, an output layer and an undertake (context) layer. The undertake layer feeds the output of the hidden layer back, together with the signal provided by the input units, as the input of the hidden layer at the next moment. It can be regarded as a delay operator, so that the network has a dynamic memory. Its memory and feedback characteristics make it applicable to adaptive short-term forecasting systems [27–29]. In the ENN, w1 represents the weights from the context layer to the hidden layer, w2 the weights from the input layer to the hidden layer, and w3 the weights from the hidden layer to the output layer; u(k − 1) is the network input vector at the (k − 1)th iteration, and y(k) is the network output vector at the kth iteration. The undertake layer retains the hidden layer output vector from the previous iteration; that is, x_c(k) denotes the context layer output vector at the kth iteration, and its value equals the hidden layer output vector at the (k − 1)th iteration. The thresholds of the hidden layer and output layer units are θ_j and θ_k.
According to the above configuration, the mathematical model of the ENN [25,26] is defined as:

y(k) = g(w3 x(k)),
x(k) = λ(w1 x_c(k) + w2 u(k − 1)),
x_c(k) = x(k − 1) + α x_c(k − 1),

where 0 ≤ α < 1 is the self-connected feedback gain factor. λ(x) is usually the Sigmoid activation function:

λ(x) = 1 / (1 + e^{−x}),

while g(x) is a linear function. In this way, the ENN can approximate any time-varying function with a limited number of discontinuities at any precision, as long as there are enough neurons in the hidden layer [30]. Like most feed-forward neural networks, the ENN uses the back-propagation (BP) algorithm [31] to train the weights and thresholds. Let the actual output of the system be y_d(k) at step k; the cost function is defined as:

E(k) = ½ (y_d(k) − y(k))ᵀ (y_d(k) − y(k)).
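A minimal forward-pass sketch of these equations follows; the class name is ours, thresholds are omitted for brevity, and the default 9-18-4 sizing mirrors the network used later in Algorithm 2.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ElmanNN:
    """Forward pass of the ENN equations:
    x_c(k) = x(k-1) + alpha*x_c(k-1);  x(k) = sigmoid(w1 x_c + w2 u);
    y(k) = w3 x(k)  (linear output). Thresholds omitted for brevity."""
    def __init__(self, n_in=9, n_hid=18, n_out=4, alpha=0.5, seed=0):
        rng = random.Random(seed)
        mat = lambda r, c: [[rng.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]
        self.w1 = mat(n_hid, n_hid)   # context -> hidden
        self.w2 = mat(n_hid, n_in)    # input -> hidden
        self.w3 = mat(n_out, n_hid)   # hidden -> output
        self.alpha = alpha            # self-connected feedback gain, 0 <= alpha < 1
        self.xc = [0.0] * n_hid       # context layer state x_c
        self.x_prev = [0.0] * n_hid   # previous hidden output x(k-1)

    def forward(self, u):
        # x_c(k) = x(k-1) + alpha * x_c(k-1)
        self.xc = [xp + self.alpha * c for xp, c in zip(self.x_prev, self.xc)]
        # x(k) = sigmoid(w1 x_c(k) + w2 u(k-1))
        x = [sigmoid(sum(a * b for a, b in zip(r1, self.xc)) +
                     sum(a * b for a, b in zip(r2, u)))
             for r1, r2 in zip(self.w1, self.w2)]
        self.x_prev = x
        # y(k) = w3 x(k), linear output layer
        return [sum(a * b for a, b in zip(r3, x)) for r3 in self.w3]
```

Because the context state persists between calls, feeding the same input twice produces different outputs, which is exactly the dynamic-memory property the paper relies on.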

Simultaneous Optimization and Feedback Online Learning Scheme
In the classical UKF filter [32–34], the dynamic model error covariance matrix Q_k and the measurement error covariance matrix R_k are always assumed to be time-invariant, but this is not practical in many cases. Assume Ψ(·) is the unscented transform (UT) in matrix form [9]. Then, according to Equation (1), the UKF's prediction process can be defined as:

(X̂_k, P̂_k) = Ψ(f, X̃_{k−1}, P̃_{k−1}, Q_k; Θ),

and its update process can be defined as:

(Ẑ_k, P_zz, P_xz) = Ψ(h, X̂_k, P̂_k, R_k; Θ),
K_k = P_xz P_zz⁻¹,
X̃_k = X̂_k + K_k (Z_k − Ẑ_k),
P̃_k = P̂_k − K_k P_zz K_kᵀ,

where P_{k−1} is the covariance matrix of state X_{k−1}; X̂_k and P̂_k are the predicted state and its corresponding covariance matrix; X̃_k and P̃_k are the updated state and its corresponding covariance matrix after a new measurement Z_k arrives; K_k is the gain matrix; and Θ is the parameter set of the unscented transform.
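To make the Ψ(·) notation concrete, here is a compact numerical sketch of a basic unscented transform and one UKF prediction/update step. The function names, the scalar κ parameterization, and the simple weight scheme are our assumptions; this illustrates the standard UKF rather than reproducing the paper's exact implementation.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """2n+1 sigma points (rows) and weights for the basic UT with scalar kappa."""
    n = x.size
    L = np.linalg.cholesky((n + kappa) * P)   # lower-triangular factor
    pts = np.vstack([x, x + L.T, x - L.T])    # x, x +/- columns of L
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def ut(fn, pts, w, noise_cov):
    """Unscented transform Psi: weighted mean/covariance of fn over sigma points."""
    ys = np.array([fn(p) for p in pts])
    y = w @ ys
    d = ys - y
    Py = d.T @ (w[:, None] * d) + noise_cov
    return y, Py, d

def ukf_step(x, P, z, f, h, Q, R, kappa=1.0):
    # prediction: propagate sigma points through f, add process noise Q
    pts, w = sigma_points(x, P, kappa)
    x_pred, P_pred, _ = ut(f, pts, w, Q)
    # update: redraw sigma points around the prediction, propagate through h
    pts2, w2 = sigma_points(x_pred, P_pred, kappa)
    z_pred, Pzz, dz = ut(h, pts2, w2, R)
    Pxz = (pts2 - x_pred).T @ (w2[:, None] * dz)
    K = Pxz @ np.linalg.inv(Pzz)              # gain matrix K_k
    x_upd = x_pred + K @ (z - z_pred)         # correct with the innovation
    P_upd = P_pred - K @ Pzz @ K.T
    return x_upd, P_upd, x_pred, z_pred, K
```

The residual X̂_{k|k−1} − X̃_{k|k}, the innovation Z_k − Ẑ_k, and the gain K_k that the proposed scheme feeds to the ENN all fall out of `ukf_step` directly.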
In many references, the recursive filter's residual [10], innovation [35] and gain matrix [36] are used to model target maneuvers, and optimized estimation [23,37,38] is also used to track maneuvering targets. According to the statistical model [8] of maneuvering target tracking, the scale parameter of Q_k is a practical handle for modeling the maneuver. In our tracking practice, we find that a smaller scale of the covariance matrix Q_k gives more accurate filtering, but when the target maneuvers, the scale of Q_k must be large enough to keep the filter from diverging. So, in our update process in Equation (9), information from the filter's residual X_{k|k−1} − X_{k|k}, innovation Z_k − h(X_{k|k−1}) and gain matrix K_k is integrated in one neural network to approximate the dynamic model bias, enhance the filter's ability to adjust the scalar parameter of the covariance matrix Q_k, and correct the estimated state X̃_k. Based on the above analysis, a simultaneous optimization and feedback online learning scheme on a UKF filter is proposed to learn the dynamic model bias. Specifically, two factors are tightly coupled in this filter and act simultaneously: the feedback strategy, where a scale factor q_k adaptively tunes the dynamic model error covariance matrix Q, and the optimization strategy, where a correction component ΔX_k adaptively refines the final state estimation.
where q_k is the scalar parameter of covariance matrix Q and ΔX_k is the correction vector of the estimated state X̃_k. Specifically, an ENN-based learning scheme for the UKF filter [39] was designed in this paper, as shown in Figure 3. In the framework, the feedback strategy for the dynamic process error covariance matrix is defined as Q_{k+1} = q_k · Q_k, and the optimization strategy for the estimate is defined as X_{k|k} = X̃_{k|k} + ΔX_{k|k}, where q_k is the scale factor of Q_k and ΔX_{k|k} is the correction component of X̃_{k|k}. Ideally, ΔX_{k|k} = X_k^true − X̃_{k|k}, where X_k^true is the real position of the target at time k. In practical applications, however, the real location of an uncooperative target is unknown, and the actual dynamic process error covariance Q_k is unknown as well. We therefore designed an approximation: the feedback parameter and the corrected state are learnt online with a sliding-window scheme. The training of the network is an online supervised process, and the training samples are collected by a sliding window of length N (for example, N = 20). In the initial phase, the ENN is trained on an approximate ground truth generated as ΔX_{k|k} = Z_k − X̃_{k|k} and q_k = k/N. In the subsequent recursive learning and filtering, ΔX_{k|k} and q_k are continuously updated by the network. The learning scheme of the ENN-based recursive filter is shown in Figure 3.
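The two learned quantities then enter the filter as one small post-update step. The sketch below (helper names are ours, hypothetical) shows how a corrected state and a rescaled Q would be applied, and how the initial-phase training targets ΔX_{k|k} = Z_k − X̃_{k|k}, q_k = k/N could be generated; the clipping of q_k is our added safeguard, not from the paper.

```python
def apply_enn_outputs(x_upd, Q, delta_x, q_k, q_min=1e-3, q_max=1e3):
    """Hypothetical post-update step combining the two learned strategies:
    optimization: X_k    <- X_k + dX_k   (corrected state estimate)
    feedback:     Q_{k+1} <- q_k * Q_k   (rescaled process-noise covariance).
    q_k is clipped so that Q stays positive definite and bounded."""
    q_k = min(max(q_k, q_min), q_max)
    x_corr = [xi + di for xi, di in zip(x_upd, delta_x)]
    Q_next = [[q_k * qij for qij in row] for row in Q]
    return x_corr, Q_next

def bootstrap_targets(z_k, x_upd, k, N):
    """Initial-phase training targets, per the paper's approximation:
    dX_{k|k} = Z_k - X_{k|k} (on the measured components) and q_k = k / N."""
    return [zi - xi for zi, xi in zip(z_k, x_upd)], k / N
```

Once the sliding window holds N such samples, the ENN replaces the bootstrap targets with its own predictions, closing the recursive learning-and-filtering loop.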

An Elman Neural Network Embedded UKF Filter

Specifically, for a radar tracking a maneuverable target, the 9-dimensional state vector X_{k|k} = [x, y, z, ẋ, ẏ, ż, ẍ, ÿ, z̈]ᵀ is defined, and the measurement vector Z_k is the 3-dimensional radar measurement of range, azimuth and elevation. In the UKF filter, the CA motion model is used to define the dynamic equation of the target, so the dynamic equation f(·) is the CA transition function, while the observation function h(·) is the Cartesian-to-spherical coordinate transformation. The state covariance matrix P and the process noise covariance matrix Q are initialized accordingly. The data flow of the detailed filtering algorithm of the proposed Elman Neural Network based UKF Filter (ELM-UKF) is shown in Figure 4. Besides the prediction and update processes of the classical UKF, there is an ENN-based augmented process for feedback parameter and optimization estimation learning. Figure 4. ELM-UKF filtering algorithm flow.
According to Figure 4, the ELM-UKF filtering algorithm can be described in detail as Algorithm 1:

Algorithm 1: ELM-UKF Filtering Algorithm
1. Generate the Sigma point set from the current state and covariance by the UT transformation.
2. Predict the state for each Sigma point.
3. Calculate the state prediction mean and covariance matrix from the weighted Sigma points.
4. Obtain a new Sigma point set by performing the UT transformation on the state prediction.
5. Propagate the new Sigma points through the observation function.
6. Calculate the observation prediction, observation covariance and innovation covariance matrix from the weighted predicted values.
7. Calculate the gain matrix.
8. Update the current state and state covariance matrix.
9. Calculate the residual, innovation and gain matrix, and use them as the input vector. When training, the neural function ENN(·) is trained and supervised by Algorithm 2, and the network parameter set Π is learnt when Algorithm 2 has converged. When filtering, the corrected estimate and the scale factor of the Q matrix are obtained by the forward pass of the ENN.
10. Update the process noise covariance matrix and obtain the corrected estimate.
Before training, historical test data from a standard UKF filter are constructed as training samples: the input vectors u(k) = (X_{k|k−1} − X̃_{k|k}, Z_k − h(X_{k|k−1}), K_k) and the output vectors ŷ(k) = (ΔX_{k|k}, q_k) are collected. Then, the main part of the Elman neural network training algorithm is described as Algorithm 2:

Algorithm 2: ENN Training Algorithm
1. Network parameter initialization: create an Elman neural network with 9 input units, 18 hidden units and 4 output units. Initialize the weights w1, w2, w3 and the learning rate l.
2. Get the input vector u(k) = (X_{k|k−1} − X̃_{k|k}, Z_k − h(X_{k|k−1}), K_k) and the output vector y(k) = (ΔX_{k|k}, q_k) for training.
3. Weight and sum the input layer data and the undertake layer data as the input of the hidden layer: x̄(k) = w1 x_c(k) + w2 u(k).
4. Apply the hidden layer's Sigmoid activation function to obtain the hidden layer output, which feeds both the output layer and the undertake layer.
5. Compute the output value as the weighted linear combination of the hidden layer output.
6. Calculate the error with the loss function E(k); if the loss is less than a threshold ε (ε > 0), end the training.
7. Otherwise, calculate the weight updates according to the BP algorithm and return to step 3.
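Putting Algorithm 2 together, a toy self-contained Elman network trained by plain BP might look as follows. The context layer is treated as a fixed extra input at each step (the standard Elman training simplification); the sizes, seed and learning rate are illustrative, not the paper's 9-18-4 configuration.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

class TinyElman:
    """Illustrative Elman network trained by plain BP; the context layer is
    treated as a frozen extra input at each step, as in standard Elman BP."""
    def __init__(self, n_in, n_hid, n_out, lr=0.2, seed=1):
        r = random.Random(seed)
        mat = lambda a, b: [[r.uniform(-0.5, 0.5) for _ in range(b)] for _ in range(a)]
        self.w1 = mat(n_hid, n_hid)  # context -> hidden
        self.w2 = mat(n_hid, n_in)   # input -> hidden
        self.w3 = mat(n_out, n_hid)  # hidden -> output (linear)
        self.lr = lr
        self.xc = [0.0] * n_hid      # context (previous hidden output)

    def step(self, u, yd=None):
        # forward pass: hidden = sigmoid(w1 xc + w2 u), output = w3 hidden
        net = [sum(a * b for a, b in zip(self.w1[j], self.xc)) +
               sum(a * b for a, b in zip(self.w2[j], u))
               for j in range(len(self.w1))]
        x = [sigmoid(v) for v in net]
        y = [sum(a * b for a, b in zip(row, x)) for row in self.w3]
        if yd is not None:
            e = [yi - di for yi, di in zip(y, yd)]  # dE/dy for E = 0.5*||y - yd||^2
            # hidden deltas use the pre-update output weights
            dh = [sum(e[k] * self.w3[k][j] for k in range(len(e))) * x[j] * (1 - x[j])
                  for j in range(len(x))]
            for k in range(len(self.w3)):
                for j in range(len(x)):
                    self.w3[k][j] -= self.lr * e[k] * x[j]
            for j in range(len(self.w1)):
                for i in range(len(u)):
                    self.w2[j][i] -= self.lr * dh[j] * u[i]
                for c in range(len(self.xc)):
                    self.w1[j][c] -= self.lr * dh[j] * self.xc[c]
        self.xc = x  # context retains this step's hidden output
        return y
```

Repeatedly calling `step` with the same (u, y_d) pair drives the output error toward the stopping threshold ε of Algorithm 2.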

Experiments and Discussion
Simulation experiments were designed for a variety of maneuvering forms, such as double S-shaped, single S-shaped and helical trajectories, with different random errors superimposed on each form. In our experiments, the simulated target flight trajectories are built by combining five basic motion primitives: line, parabola, serpentine, hyperbola, and a joint between two consecutive segments; at each trajectory segment or joint, the acceleration is varied or given a different configuration. We used a trajectory editing toolkit to generate different flight trajectories with or without maneuvers. By the design of uncooperative target tracking, there is an inevitable, unknown and time-varying dynamic model bias between the real trajectory and the filter's motion equations, which is the maneuver item.
After generating the trajectories, the positions of the target were transformed into a radar-centered reference system and superimposed with random errors of a specified distribution to obtain the measurement series. In most physical radar systems, the random component of measurement error after systematic calibration can be approximated by a Gaussian distribution.
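As a sketch of this measurement generation, the conversion below maps a radar-centered Cartesian position to (range, azimuth, elevation) and superimposes zero-mean Gaussian noise on each component. The azimuth convention (atan2(y, x)) and the function names are our assumptions for illustration.

```python
import math
import random

def cartesian_to_rae(x, y, z):
    """Radar-centered Cartesian position -> (range, azimuth, elevation)."""
    r = math.sqrt(x * x + y * y + z * z)
    a = math.atan2(y, x)                 # azimuth convention assumed
    e = math.asin(z / r) if r > 0 else 0.0
    return r, a, e

def make_measurement(pos, sig_r, sig_a, sig_e, rng):
    """Superimpose zero-mean Gaussian noise on each spherical component,
    mimicking the calibrated radar's residual random measurement error."""
    r, a, e = cartesian_to_rae(*pos)
    return (r + rng.gauss(0.0, sig_r),
            a + rng.gauss(0.0, sig_a),
            e + rng.gauss(0.0, sig_e))
```

With (sig_r, sig_a, sig_e) set to the scenario values below (e.g., 300 m, 0.5°, 0.5° converted to radians), this yields the measurement series used by the filters.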
One thousand Monte Carlo experiments were carried out to compare the performance of classical and state-of-the-art models, namely the Singer model [8], a simplified IMM-KF combining CA and CV models [9], IMM-ELM [14] and SVR-UKF [19,20], against our proposed ELM-UKF. In each group of comparison experiments, the motion equations and the initial parameters of the covariance matrices P, Q and R were set identically, and the CA model was selected in these methods. Qualitative and quantitative analyses of the convergence and filtering precision of the above five models show that the online learning model based on the Elman neural network is superior to the state-of-the-art methods.

Experiments on Algorithm Convergence
In the experimental scenario, the target flies at an initial speed of v = 200 m/s with a constant acceleration of a = 2 m/s² in a single S-shaped trajectory, and the maximum maneuvering acceleration is a_max = 64 m/s², which is a general maneuvering scenario. Gaussian white measurement noise with range component ΔR = 300 m, azimuth component ΔA = 0.5° and elevation component ΔE = 0.5° is added to the trajectory, as in Figure 5a. Figure 5a also shows the superimposed trajectories and their error distributions for the different filtering models, namely the Singer, IMM-KF, IMM-ELM and SVR-UKF models and our proposed ELM-UKF model, in a maneuver scenario. The filtering estimation error distributions for the different methods are shown in Figure 5b–d for range, azimuth and elevation, respectively.
Additionally, the statistical results of classical and the state-of-the-art models are listed in Table 1.
In Table 1, the metric is calculated as Mean_X = (1/n) Σᵢ (X̂ᵢ − Xᵢ), where X̂ᵢ is the filter estimate, Xᵢ is the ground truth, X can be range (R), azimuth (A) or elevation (E), and n is the total number of measurements in this scenario. RMSE_X = [Σᵢ (X̂ᵢ − Xᵢ − Mean_X)² / (n − 1)]^{1/2} is the root mean square error (RMSE) of range, azimuth and elevation. Based on qualitative and quantitative analysis of the maneuvering target tracking results, the four classical filtering models achieve good convergence, generally within 20~50 measurements from the start of filtering. Moreover, judging from the range error distribution, the online learning method based on the Elman neural network has the fastest convergence among the five models, converging within 10 measurements.
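The Mean_X and RMSE_X definitions translate directly into code; note the (n − 1) divisor in the RMSE, which is taken about the mean error rather than about zero.

```python
import math

def mean_rmse(estimates, truths):
    """Mean_X = (1/n) * sum_i (Xhat_i - X_i), and the RMSE about that mean
    with the (n - 1) divisor used in the paper's definition."""
    n = len(estimates)
    errs = [e - t for e, t in zip(estimates, truths)]
    mean = sum(errs) / n
    rmse = math.sqrt(sum((d - mean) ** 2 for d in errs) / (n - 1))
    return mean, rmse
```

Applying this per component (range, azimuth, elevation) over the n measurements of a scenario reproduces the Table 1 entries.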

Experiments on Different Maneuvering Forms
In scenario 1, the target flies at a constant speed of v = 200 m/s in a double S-shaped trajectory, and the maximum maneuvering acceleration is a_max = 16 m/s², a weak maneuvering form. In scenario 2, the target flies at an initial speed of v = 200 m/s with a constant acceleration of a = 2 m/s² in a single S-shaped trajectory, and the maximum maneuvering acceleration is a_max = 64 m/s², a general maneuvering form. In scenario 3, the target flies at a constant speed of v = 300 m/s in a helical trajectory, and the maximum maneuvering acceleration is a_max = 90 m/s², a strong maneuvering form. Gaussian white measurement noise with ΔR = 300 m, ΔA = 0.5°, ΔE = 0.5° is added to the trajectories in all three scenarios. Figure 6a–c show the three maneuver scenarios overlaid with the trajectories filtered by the Singer, IMM-KF, IMM-ELM and SVR-UKF models and our proposed model (Figure 6: (a) superimposed trajectory of the weak maneuvering form, scenario 1; (b) the general maneuvering form, scenario 2; (c) the strong maneuvering form, scenario 3). The distributions of the mean and RMSE over 1000 Monte Carlo simulations for the three maneuver scenarios are shown in Figure 7a–c, and the statistical results of filtering precision are listed in Table 2.
In Figure 7 and Table 2, the position estimate p̂ is the target's estimated distance in a Cartesian coordinate system, defined as p̂ = (x̂² + ŷ² + ẑ²)^{1/2}; the position error is the difference between the position estimate p̂ and the ground-truth position, and the error mean and RMSE are the corresponding statistical mean and root mean square error. From Figure 7, all five methods, including our proposed algorithm, can effectively adapt to the above three maneuvering scenarios, and it can be concluded that when the target maneuvers, the filtering precision of the traditional Singer model decreases. Compared with the Singer model, the IMM-KF, SVR-UKF and IMM-ELM models have stronger immunity to the target's maneuverability, and their filtering precision is somewhat improved. Judging from either the statistical mean or the RMSE of the Monte Carlo experiments, our proposed ELM-UKF filter has the best filtering precision among the five models. We used the ratio of the statistical mean of the filtered error to the measurement error to evaluate performance quantitatively. From Table 2, our proposed model further reduces this ratio by 16.35% on average and up to 24.39%, and reduces the RMSE ratio by 20.26% on average and up to 27.67%, compared with the other four models.
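The position estimate p̂ and position error used in Figure 7 and Table 2 can be computed as below; reading the paper's definition literally, the error is the difference of the two Euclidean norms.

```python
import math

def position_error(est, truth):
    """p_hat = sqrt(x^2 + y^2 + z^2) for the estimate; the position error is
    the difference between p_hat and the ground-truth distance (the paper's
    definition read literally as a difference of Euclidean norms)."""
    p_est = math.sqrt(sum(c * c for c in est[:3]))
    p_true = math.sqrt(sum(c * c for c in truth[:3]))
    return p_est - p_true
```

The error mean and RMSE in Table 2 are then the `mean_rmse`-style statistics of this quantity over the Monte Carlo runs.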


Experiments on Different Levels of Measurement Error
In order to assess the influence of different levels of sensor measurement error on filtering when the target moves in a maneuvering form, three levels of Gaussian white measurement noise were superimposed on the real trajectory: the largest measurement error was set as ΔR = 500 m, ΔA = 1.0°, ΔE = 1.0°; the middle level as ΔR = 300 m, ΔA = 0.5°, ΔE = 0.5°; and the smallest as ΔR = 50 m, ΔA = 0.2°, ΔE = 0.2°. After adding the different levels of measurement error, the filtered trajectories of the Singer, IMM-KF, IMM-ELM and SVR-UKF models and our proposed ELM-UKF model were superimposed as shown in Figure 8a–c, with the statistical results listed in Table 3. From the Monte Carlo experiments, it was found that our proposed model can effectively adapt to different levels of measurement error. It can also be concluded that the filtering precision of our method is superior to the other four methods, including the Singer, IMM-KF, SVR-UKF and IMM-ELM models, under all levels of measurement error, with a precision improvement ratio of more than 20%.
Moreover, compared with the other four models, the ratio of the statistical mean of our algorithm's filtered error was further reduced by 18.19% on average and up to 24.83%, and the corresponding statistical RMSE of the filtered error was further reduced by 20.16% on average and up to 27.91% at the middle noise level. Even at the lower noise level, the ratios of the statistical mean and statistical RMSE of our proposed filter were further reduced by an average of 12.05% and 12.64%, respectively.

Conclusions
In this paper, a novel method for maneuvering target tracking learnt by the Elman neural network is proposed. It is based on a simultaneous optimization and feedback learning scheme that optimizes the filtering performance. The ENN is embedded in a classical UKF and learns to track maneuvering targets: trained online with the UKF's residual, innovation and gain matrix as inputs, it simultaneously learns to tune a scale factor of the dynamic model error covariance matrix and to correct the final state estimation, even when the sensor is tracking an uncooperative maneuvering target. After numerous Monte Carlo experiments, it can be concluded that our proposed method is superior to state-of-the-art methods such as the Singer, IMM-KF, SVR-UKF and IMM-ELM models across different maneuvering forms and measurement error levels.