# Track Prediction for HF Radar Vessels Submerged in Strong Clutter Based on MSCNN Fusion with GRU-AM and AR Model


## Abstract


## 1. Introduction

- We build a dual-input, multi-scale feature-extraction network. This network processes trajectory information into a three-dimensional time series and then divides it into two sub-series with parallel inputs. Multiple convolution kernels of different sizes are designed for the input data to extract features at different scales. In this way, the obtained features are more diversified than those from a single convolutional kernel.
- We jointly design a stacked GRU algorithm, which is more computationally efficient than comparable recurrent units, and an attention mechanism (AM), which helps the model focus on key information, for the track prediction of HF radar targets.
- We combine MSCNN and GRU-AM in parallel with an AR linear model to pair nonlinear and linear modeling. The two outputs are summed to improve trajectory prediction accuracy.
- For broken trajectories, both pre- and post-break trajectory data are used as training data in order to make full use of historical information. The pre-break track yields a forward prediction, and the post-break track yields a reverse prediction. The entropy method assigns corresponding weights to these bidirectional predictions, and their weighted combination gives the final predicted trajectory, further improving accuracy over one-way prediction.
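The entropy-based weighting in the last point can be sketched as follows. This is a minimal NumPy sketch of the standard entropy-weight method; using the two prediction-error series as the entropy indicators, and the normalization details, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def entropy_weights(indicators):
    """Entropy-weight method over the columns of `indicators`
    (e.g., forward / reverse prediction-error series).
    Columns with lower entropy (more informative) receive larger weight."""
    x = np.asarray(indicators, dtype=float)
    # min-max normalize each column, with small offsets to keep log() finite
    x = (x - x.min(axis=0)) / (np.ptp(x, axis=0) + 1e-12) + 1e-12
    p = x / x.sum(axis=0)                                # sample proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(len(x))    # per-column entropy
    d = 1.0 - e                                          # degree of diversification
    return d / d.sum()                                   # weights sum to 1

def fuse_tracks(forward_pred, reverse_pred, weights):
    """Weighted sum of the bidirectional predictions."""
    return weights[0] * forward_pred + weights[1] * reverse_pred
```

In this sketch the reverse prediction is assumed to already be re-inverted into forward time order before fusion, as the paper's pipeline requires.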

## 2. Methodology

#### 2.1. Multi-Scale Feature Fusion
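The multi-scale extraction illustrated in Figure 2 slides 1-D kernels of several window lengths over the track and fuses the resulting feature maps. A minimal NumPy sketch of that idea, with random kernels standing in for the learned filters:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid 1-D convolution of a (T, d) series with a (k, d) kernel,
    producing one feature value per window position."""
    k = len(kernel)
    return np.array([(x[i:i + k] * kernel).sum() for i in range(len(x) - k + 1)])

def multi_scale_features(track, kernels):
    """Slide kernels of different lengths over the track and concatenate
    the resulting feature maps (one map per kernel scale)."""
    maps = [conv1d_valid(track, k) for k in kernels]
    return np.concatenate(maps)
```

A kernel of length 3 over a 20-point track gives an 18-point map and a length-5 kernel a 16-point map, so the fused feature vector is more diversified than either map alone, which is the intuition behind the dual-scale design.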

#### 2.2. GRU Model
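The GRU cell propagates a hidden state through update and reset gates. A sketch of one step, following the common formulation of Cho et al.; the dict-based parameter layout and shapes are illustrative, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, W, U, b):
    """One GRU step. W, U, b hold the input, recurrent, and bias parameters
    for the update ('z'), reset ('r'), and candidate ('n') transforms."""
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])        # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])        # reset gate
    n = np.tanh(W['n'] @ x + U['n'] @ (r * h) + b['n'])  # candidate state
    return (1 - z) * h + z * n                           # new hidden state
```

Stacking simply feeds each layer's hidden-state sequence as the next layer's input sequence.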

#### 2.3. Attention Mechanism (AM)
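The attention mechanism scores the GRU hidden states and forms a weighted sum, so more informative time steps contribute more. A minimal sketch, assuming a single learned scoring vector `w` as a stand-in for the paper's attention parameters:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())  # shift for numerical stability
    return e / e.sum()

def attention_pool(H, w):
    """Attention over T hidden states H of shape (T, D): score each state
    with the vector w (D,), softmax the scores into weights, and return
    the weighted context vector together with the weights."""
    scores = H @ w            # (T,) relevance score per time step
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ H, alpha   # context (D,) and weights (T,)
```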

#### 2.4. AR Model

#### 2.5. Entropy Method

#### 2.6. Overall Model

## 3. Experiments and Analysis

#### 3.1. Data Processing

#### 3.2. Parameters Setting

The time step ($t_{step}$), i.e., the number of past moments whose data are used to forecast the data for the present moment, directly affects the accuracy of the estimation. To choose an appropriate value, the resulting errors are compared across different time steps ($t_{step}$ = 3, 4, 5, 6, 7, 8, 9, 10). As Figure 9a shows, the error gradually decreases as the time step grows, but starts to rise after a time step of 8. The reason is that as the time step increases, more information can be used, leading to a more accurate prediction model. Beyond a certain length, however, the overly long series has less correlation with the current time, so errors accumulate and grow. The mean absolute error (MAE) and root mean square error (RMSE) are at a minimum when $t_{step} = 8$, so the time step is set to 8.
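The sliding-window setup described above can be sketched as follows, with the selected $t_{step} = 8$; the 3-D series layout (time, feature) is as described in the contributions:

```python
import numpy as np

def make_windows(series, t_step=8):
    """Slice an (N, d) trajectory series into supervised pairs:
    each sample uses the previous t_step points to predict the next point."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + t_step] for i in range(len(series) - t_step)])
    y = series[t_step:]  # target is the point right after each window
    return X, y
```

For a 100-point, 3-feature track this yields 92 training samples of shape (8, 3), each paired with the next point.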

#### 3.3. Implementation Details

#### 3.4. Experiment Results Analysis

#### 3.4.1. Experiment Results

#### 3.4.2. Comparison Experiments

- EKF: The system noise variance matrix $\mathit{Q}=\mathrm{diag}({10}^{-4},{10}^{-6},{10}^{-4},{10}^{-6})$ and the observation noise variance $\mathit{R}=\mathrm{diag}({10}^{4},1,{10}^{4},1)$, where $\mathrm{diag}$ is a diagonal matrix.
- Seq2Seq: The encoder has three layers of GRU, and the decoder has one layer of GRU. Each GRU layer has 128 neurons.
- LSTM: Three stacked LSTM layers are used with 128 neurons in each layer, and ReLU is used as the activation function.
- Bi-LSTM: Three stacked Bi-LSTM layers are used with 64 neurons in each layer.
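The EKF baseline's noise settings can be written down directly from the values quoted above. The state ordering (position and rate for each coordinate) and the constant-velocity transition in this sketch are assumptions for illustration, not stated in the source:

```python
import numpy as np

# Noise covariances quoted for the EKF baseline.
Q = np.diag([1e-4, 1e-6, 1e-4, 1e-6])  # system (process) noise covariance
R = np.diag([1e4, 1.0, 1e4, 1.0])      # observation noise covariance

def ekf_step(x, P, z, F, H):
    """One predict/update cycle; with a linear transition F and linear
    observation H, this reduces to the standard Kalman filter recursion."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)       # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```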

#### 3.4.3. Ablation Experiments

- The complete model has the least RMSE and MAE, indicating that all components contribute to the effectiveness and robustness of the overall model.
- The performance of the model without AR and AM drops significantly, indicating that the AR and AM components play a crucial role. The main reason is that the AM can enhance the results, and the AR network can effectively predict the linear information in the trajectory. In addition, network performance decreases the most after the AR model is removed, showing that this component has a significant impact on the results. The reason is that the time series signal can be modeled by AR [30], and AR is generally robust to scale changes in the data [31].
- There is also a performance loss when multiscale convolution is removed from the model, but the loss is less than when the AM and AR models are removed. This is due to the strong learning ability of stacked GRUs, together with the screening ability of attention and the joint effect of AR. Even if the multi-scale convolution module is removed, causing a performance loss, some of the missing features can still be obtained by other components of the model.
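The AR component's role in the ablation can be illustrated with a simple least-squares AR fit. The order p = 8 merely mirrors the time step chosen above, and least squares stands in for whichever estimator the paper actually uses; both are assumptions:

```python
import numpy as np

def ar_fit(series, p=8):
    """Least-squares fit of an AR(p) model: x_t ≈ sum_k a_k * x_{t-k}."""
    x = np.asarray(series, dtype=float)
    A = np.stack([x[i:i + p] for i in range(len(x) - p)])  # lagged rows
    b = x[p:]                                              # one-step targets
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def ar_predict(series, coeffs):
    """One-step-ahead forecast from the last p observations."""
    p = len(coeffs)
    return float(np.dot(series[-p:], coeffs))
```

On a purely linear track segment the AR fit extrapolates the trend almost exactly, which is the linear information the nonlinear branch would otherwise have to relearn.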

#### 3.5. Comparison with RD Spectrum

## 4. Discussion

#### 4.1. Course Maneuvers

#### 4.2. Speed Maneuvers

## 5. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Ponsford, A.M.; Wang, J. A review of High Frequency Surface Wave Radar for detection and tracking of ships. Turk. J. Electr. Eng. Comput. Sci. 2010, 18, 409–428.
2. Sun, W.; Ji, M.; Huang, W.; Ji, Y.; Dai, Y. Vessel tracking using bistatic compact HFSWR. Remote Sens. 2020, 12, 1266.
3. Zhang, L.; Mao, D.; Niu, J.; Jonathan Wu, Q.M.; Ji, Y. Continuous tracking of targets for stereoscopic HFSWR based on IMM filtering combined with ELM. Remote Sens. 2020, 12, 272.
4. Rigatos, G.G. Sensor fusion-based dynamic positioning of ships using Extended Kalman and Particle Filtering. Robotica 2013, 31, 389–403.
5. Perera, L.P.; Oliveira, P.; Soares, C.G. Maritime traffic monitoring based on vessel detection, tracking, state estimation, and trajectory prediction. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1188–1200.
6. Vivone, G.; Braca, P.; Horstmann, J. Knowledge-Based Multitarget Ship Tracking for HF Surface Wave Radar Systems. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3931–3949.
7. Osborne, R.W.; Blair, W.D. Update to the hybrid conditional averaging performance prediction of the IMM algorithm. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 2967–2974.
8. Zhou, H.; Chen, Y.; Zhang, S. Ship Trajectory Prediction Based on BP Neural Network. J. Artif. Intell. 2019, 1, 29–36.
9. Liu, J.; Shi, G.; Zhu, K. Vessel trajectory prediction model based on AIS sensor data and adaptive chaos differential evolution support vector regression (ACDE-SVR). Appl. Sci. 2019, 9, 2983.
10. De Masi, G.; Gaggiotti, F.; Bruschi, R.; Venturi, M. Ship motion prediction by radial basis neural networks. In Proceedings of the 2011 IEEE Workshop on Hybrid Intelligent Models and Applications, Paris, France, 11–15 April 2011; pp. 28–32.
11. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013), Atlanta, GA, USA, 16–21 June 2013; pp. 2347–2355.
12. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
13. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar, 25–29 October 2014; pp. 1724–1734.
14. Yang, S.; Yu, X.; Zhou, Y. LSTM and GRU Neural Network Performance Comparison Study: Taking Yelp Review Dataset as an Example. In Proceedings of the 2020 International Workshop on Electronic Communication and Artificial Intelligence (IWECAI 2020), Qingdao, China, 12–14 June 2020; pp. 98–101.
15. del Águila Ferrandis, J.; Triantafyllou, M.; Chryssostomidis, C.; Karniadakis, G. Learning functionals via LSTM neural networks for predicting vessel dynamics in extreme sea states. arXiv 2019, arXiv:1912.13382.
16. Yuan, Z.; Liu, J.; Liu, Y.; Zhang, Q.; Liu, R.W. A multi-task analysis and modelling paradigm using LSTM for multi-source monitoring data of inland vessels. Ocean Eng. 2020, 213, 107604.
17. Gao, M.; Shi, G.; Li, S. Online prediction of ship behavior with automatic identification system sensor data using bidirectional long short-term memory recurrent neural network. Sensors 2018, 18, 4211.
18. Zhang, G.; Tan, F.; Wu, Y. Ship Motion Attitude Prediction Based on an Adaptive Dynamic Particle Swarm Optimization Algorithm and Bidirectional LSTM Neural Network. IEEE Access 2020, 8, 90087–90098.
19. Agarap, A.F.M. A neural network architecture combining gated recurrent unit (GRU) and support vector machine (SVM) for intrusion detection in network traffic data. ACM Int. Conf. Proceeding Ser. 2018, 26–30.
20. Suo, Y.; Chen, W.; Claramunt, C.; Yang, S. A ship trajectory prediction framework based on a recurrent neural network. Sensors 2020, 20, 5133.
21. Forti, N.; Millefiori, L.M.; Braca, P.; Willett, P. Prediction of Vessel Trajectories from AIS Data Via Sequence-To-Sequence Recurrent Neural Networks. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 8936–8940.
22. Luong, M.T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), Lisbon, Portugal, 17–21 September 2015; pp. 1412–1421.
23. Cheng, X.; Li, G.; Ellefsen, A.L.; Chen, S.; Hildre, H.P.; Zhang, H. A Novel Densely Connected Convolutional Neural Network for Sea-State Estimation Using Ship Motion Data. IEEE Trans. Instrum. Meas. 2020, 69, 5984–5993.
24. Raghu, J.; Srihari, P.; Tharmarasa, R.; Kirubarajan, T. Comprehensive Track Segment Association for Improved Track Continuity. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2463–2480.
25. Meliboev, A.; Alikhanov, J.; Kim, W. 1D CNN based network intrusion detection with normalization on imbalanced data. arXiv 2020, arXiv:2003.00476.
26. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
27. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
28. Abedinia, O.; Amjady, N.; Zareipour, H. A New Feature Selection Technique for Load and Price Forecast of Electrical Power Systems. IEEE Trans. Power Syst. 2017, 32, 62–74.
29. Javeri, I.Y.; Toutiaee, M.; Arpinar, I.B.; Miller, T.W.; Miller, J.A. Improving Neural Networks for Time Series Forecasting using Data Augmentation and AutoML. arXiv 2021, arXiv:2103.01992.
30. Moon, J.; Hossain, M.B.; Chon, K.H. AR and ARMA model order selection for time-series modeling with ImageNet classification. Signal Process. 2021, 183, 108026.
31. Lai, G.; Chang, W.C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2018), Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
32. Yang, K.; Zhang, L.; Niu, J.; Ji, Y.; Jonathan Wu, Q.M. Analysis and Estimation of Shipborne HFSWR Target Parameters Under the Influence of Platform Motion. IEEE Trans. Geosci. Remote Sens. 2020, 1–14.
33. Redoutey, M.; Scotti, E.; Jensen, C.; Ray, C.; Claramunt, C. Efficient vessel tracking with accuracy guarantees. In Proceedings of the International Symposium on Web and Wireless Geographical Information Systems, Shanghai, China, 11–12 December 2008; pp. 140–151.

**Figure 1.** Convolution kernel sliding over time. The red and blue convolutions represent two different window lengths of the 1D CNN, and the feature maps are obtained by sliding over the vessel trajectory.

**Figure 2.** Multi-scale feature extraction and fusion of tracks. The vessel tracks are pre-processed and input to CNNs with different window lengths to obtain multi-scale feature maps, which are subsequently fused along the feature dimension.

**Figure 3.** Schematic diagram of the GRU neural network structure, showing the details of the information-transfer process.

**Figure 4.** Schematic diagram of the attention mechanism structure. The attention mechanism assigns different weights to the D-dimensional feature vectors.

**Figure 6.** Track fusion of the proposed model. The forward prediction is obtained from the pre-break trajectory and the reverse prediction from the post-break trajectory; the final predicted trajectory is obtained by inverting the reverse prediction and weighting the two results by the entropy method.

**Figure 7.** Broken tracks of three vessels detected by HF radar. The red dot represents the shore-based HF radar, and the black sectors indicate its detection area. The blue lines indicate the moving trajectories of the three vessels, labeled 1, 2 and 3. The black oval marks the fracture.

**Figure 8.** Schematic diagram of track training over time, with blue representing the input training data and green representing the output corresponding to each batch of input. As the training window advances, an epoch finishes when the whole training set has been used.

**Figure 9.** (**a**) Error variation with time step. (**b**) Error variation with the number of hidden-layer neurons. The blue line represents RMSE variation, and the purple line represents MAE variation.

**Figure 10.** Loss variation curve with the number of iterations. (**a**) represents forward training and (**b**) represents reverse training. The blue line is the training set and the yellow line is the validation set.

**Figure 11.** Training process of the proposed network. The original ship trajectory data are input to the prediction model after data segmentation and normalization. The model is trained through iterations to output the prediction results, and the results are back-normalized to obtain the predicted trajectories.

**Figure 12.** (**a**–**c**) represent the three broken tracks, respectively. The diagrams on the left show the predicted results of the three fractured trajectories. The blue, cyan, and green lines represent, respectively, the trace before the break, the trace after the break, and the predicted track at the fracture. The graphs on the right compare the predicted and real values at the fracture.

**Figure 13.** The effect of the compared methods. The prediction results of the proposed method and other mainstream algorithms, including EKF, Seq2Seq, LSTM and Bi-LSTM, for the three trajectories are shown in (**a**–**c**), respectively.

**Figure 14.** Ablation experiments with RMSE in (**a**) and MAE in (**b**). The values of line 1 and line 3 are scaled by $10^{-3}$, and line 2 by $10^{-2}$.

**Figure 15.** The black ellipses are clutter (sea clutter on both sides and ground clutter in the middle), the blue circles are the vessels detected by AIS, and the black box is the target vessel. (**a**) The target vessel is sailing toward the sea clutter zone. (**b**) The target vessel is covered by the clutter region. (**c**) The target vessel is leaving the sea clutter zone. (**d**) Schematic diagram of the target vessel entering and leaving the sea clutter.

**Figure 16.** (**a**) The real target trajectory verification. The blue, cyan, and green lines represent, respectively, the track before the break, the track after the break, and the predicted track at the break. (**b**) Comparison of the forecast and the real AIS data. (**c**) Comparison of the broken track prediction with the other mainstream algorithms, including EKF, Seq2Seq, LSTM and Bi-LSTM.

**Figure 17.** The part of the trajectory broken during the course maneuvers. The blue and brown lines show the real and predicted tracks, respectively. The box in the lower left compares the real and predicted tracks at the point where the break occurred.

**Figure 18.** The track breaking during a speed maneuver. The blue and brown lines show the real and predicted tracks, respectively. The figure in the upper left compares the real and predicted tracks at the point where the break occurred during the maneuver.

| Track | Forward Prediction Track Weight | Reverse Prediction Track Weight |
|---|---|---|
| Line 1 | 0.445962 | 0.554038 |
| Line 2 | 0.633892 | 0.366108 |
| Line 3 | 0.499259 | 0.500741 |

| Track | Forward Prediction (Before Fusion) RMSE (°) | Forward Prediction (Before Fusion) MAE (°) | Reverse Prediction (Before Fusion) RMSE (°) | Reverse Prediction (Before Fusion) MAE (°) | Final Track (After Fusion) RMSE (°) | Final Track (After Fusion) MAE (°) |
|---|---|---|---|---|---|---|
| Line 1 | 0.000635 | 0.000435 | 0.000860 | 0.000826 | 0.000401 | 0.000374 |
| Line 2 | 0.003054 | 0.001412 | 0.007917 | 0.007024 | 0.002614 | 0.002222 |
| Line 3 | 0.000648 | 0.000538 | 0.001013 | 0.000842 | 0.000521 | 0.000399 |

| Vessel Track | Metric | EKF | Seq2Seq | LSTM | Bi-LSTM | Ours |
|---|---|---|---|---|---|---|
| Line 1 | RMSE (°) | 0.002316 | 0.000568 | 0.001737 | 0.001697 | 0.000401 |
| Line 1 | MAE (°) | 0.002251 | 0.000464 | 0.001534 | 0.001400 | 0.000374 |
| Line 1 | MRE (km) | 0.223358 | 0.078246 | 0.194443 | 0.160217 | 0.064508 |
| Line 2 | RMSE (°) | 0.005326 | 0.005486 | 0.006793 | 0.006516 | 0.002614 |
| Line 2 | MAE (°) | 0.004336 | 0.004574 | 0.005177 | 0.005497 | 0.002222 |
| Line 2 | MRE (km) | 0.263674 | 0.271927 | 0.330217 | 0.310274 | 0.140148 |
| Line 3 | RMSE (°) | 0.001657 | 0.001238 | 0.000991 | 0.002391 | 0.000521 |
| Line 3 | MAE (°) | 0.001540 | 0.011051 | 0.000776 | 0.002151 | 0.000399 |
| Line 3 | MRE (km) | 0.176541 | 0.135673 | 0.094973 | 0.241717 | 0.077558 |

**Table 4.** Comparison of the proposed method with the other four baseline methods in terms of RMSE, MAE and MRE.

| Metric | EKF | Seq2Seq | LSTM | Bi-LSTM | Ours |
|---|---|---|---|---|---|
| RMSE (°) | 0.004486 | 0.004156 | 0.007242 | 0.004225 | 0.002728 |
| MAE (°) | 0.003533 | 0.003454 | 0.006201 | 0.004225 | 0.002504 |
| MRE (km) | 0.253712 | 0.219997 | 0.353712 | 0.235016 | 0.156753 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, L.; Zhang, J.; Niu, J.; Wu, Q.M.J.; Li, G.
Track Prediction for HF Radar Vessels Submerged in Strong Clutter Based on MSCNN Fusion with GRU-AM and AR Model. *Remote Sens.* **2021**, *13*, 2164.
https://doi.org/10.3390/rs13112164

**AMA Style**

Zhang L, Zhang J, Niu J, Wu QMJ, Li G.
Track Prediction for HF Radar Vessels Submerged in Strong Clutter Based on MSCNN Fusion with GRU-AM and AR Model. *Remote Sensing*. 2021; 13(11):2164.
https://doi.org/10.3390/rs13112164

**Chicago/Turabian Style**

Zhang, Ling, Jingzhi Zhang, Jiong Niu, Q. M. Jonathan Wu, and Gangsheng Li.
2021. "Track Prediction for HF Radar Vessels Submerged in Strong Clutter Based on MSCNN Fusion with GRU-AM and AR Model" *Remote Sensing* 13, no. 11: 2164.
https://doi.org/10.3390/rs13112164