3.3.1. Ablation Experiments
To evaluate the contribution of each major network component designed in this study for TC track prediction, we conducted ablation experiments with different module combinations. The evaluation metrics include MAE and RMSE.
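For clarity, the two metrics can be computed as in the following minimal sketch (the function and variable names and the sample values are illustrative, not taken from our implementation):

```python
import math

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of the prediction error
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root Mean Squared Error: penalizes large deviations more heavily
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Example: predicted vs. observed latitudes (degrees)
lat_true = [20.1, 20.6, 21.2]
lat_pred = [20.0, 20.8, 21.1]
print(round(mae(lat_true, lat_pred), 3))   # → 0.133
print(round(rmse(lat_true, lat_pred), 3))  # → 0.141
```

Both metrics are evaluated separately for the longitude and latitude components of the predicted position.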
Table 3 presents the results of the ablation study. Taking the 6-h TC prediction experiment as an example, the prediction performance improves markedly as more modules are added to the model configuration. When the LSTM module is used alone, it can handle short-term dependencies in TC tracks; however, the MAE and RMSE values for both longitude and latitude predictions remain relatively high, indicating considerable errors in capturing sudden turns in TC tracks. In contrast, the Informer model shows clear advantages in the 6-h TC track prediction task owing to its strong ability to capture global dependencies. Notably, its RMSE improves most clearly, indicating that the Informer is more effective at capturing the long-term trends of TC tracks. Combining the Informer and LSTM modules further improves prediction accuracy, with significant reductions in both MAE and RMSE, demonstrating that the two modules compensate for each other’s limitations in capturing short-term and long-term dependencies. When the LSTM module is combined with the FECAM block, the model’s ability to handle the periodic variation of meteorological factors along TC tracks is further enhanced, yielding lower RMSE values and highlighting FECAM’s role in improving the model’s sensitivity to cyclical fluctuations of the meteorological environment. Ultimately, the combination of Informer, LSTM, and FECAM achieves the best performance across all evaluation metrics.
In summary, with the continuous optimization of the model architecture and the enhancement of its components, the accuracy of TC track prediction has been significantly improved. Our proposed model (Informer + LSTM + FECAM) achieves the best prediction performance across all evaluation metrics. The results presented in Table 3 validate the effectiveness of each component in the LFInformer model. As more modules are integrated, the model performance improves consistently, demonstrating the synergistic effects among the components and providing strong support for further enhancing prediction accuracy. In addition, the visual comparison in Figure 7 highlights the differences in predicted paths across module combinations. An inset around the turning point shows that the other module combinations exhibit larger deviations, whereas the proposed model aligns best with the actual track, visually confirming its superior performance.
3.3.2. Comparison Experiment
This section presents a performance comparison between the LFInformer model and traditional deep learning methods (such as RNN, GRU, LSTM, and Transformer) in TC track prediction. To comprehensively evaluate the predictive capabilities of each model, we selected multiple forecasting time steps (6 h, 12 h, 24 h, and 48 h) and calculated the Absolute Position Error (APE) for each method. The main objective of this experiment is to verify whether the LFInformer model can achieve more accurate TC track predictions than traditional methods across different time scales, particularly in longer-term forecasts.
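A common way to define the APE is the great-circle distance between the predicted and observed cyclone centers; the sketch below illustrates this definition using the haversine formula (the function name and the Earth radius value are illustrative assumptions, not taken from our implementation):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed value)

def absolute_position_error(lat_true, lon_true, lat_pred, lon_pred):
    """Great-circle distance (km) between observed and predicted TC centers,
    computed with the haversine formula."""
    phi1, phi2 = math.radians(lat_true), math.radians(lat_pred)
    dphi = math.radians(lat_pred - lat_true)
    dlmb = math.radians(lon_pred - lon_true)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# A 1-degree latitude offset corresponds to roughly 111 km
print(absolute_position_error(20.0, 130.0, 21.0, 130.0))
```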
As shown in Table 4, the LFInformer model achieved APE values of 72.39 km and 117.72 km for the 6-h and 12-h forecasts, respectively, significantly lower than those of the other models. For instance, the APE of the RNN model was 106.76 km for the 6-h prediction, while GRU, LSTM, and Transformer recorded 94.81 km, 89.68 km, and 86.73 km, respectively. These results indicate that LFInformer performs strongly in short-term prediction, effectively capturing local variations in TC tracks. As the prediction horizon increases, although all models experience a rise in prediction error, LFInformer maintains relatively stable performance. In the 48-h forecast, LFInformer achieves an APE of 168.64 km, slightly lower than that of LSTM and Transformer, showcasing its advantage in long-term track prediction. To further illustrate the model’s prediction performance, Figure 8 presents the APE of LFInformer at each forecast step, showing its stable accuracy across time and its ability to maintain low errors during sudden track changes.
Furthermore, we analyze and compare the errors at multiple lead times. Table 5 reports the APE at 12, 24, and 48 h for the proposed model and the traditional baselines. At short and medium lead times, our errors remain higher than those of recent operational forecasts, reflecting the advantage operational systems derive from real-time data assimilation and multi-source predictors. At 48 h, however, the proposed model attains 168.64 km, already close to operational skill and lower than all learning-based baselines, indicating clear potential for longer-range track prediction.
In summary, LFInformer demonstrates high prediction accuracy across all forecast horizons for TC track prediction, with particularly superior performance in short-term forecasts.
Table 6 presents a comparison of the objective evaluation metrics (MAE, RMSE, and MAPE) across the RNN, GRU, LSTM, Transformer, and LFInformer models, taking the 6-h forecast of tropical cyclone positions as an example; the best results are highlighted in bold. As shown in Table 6, the LFInformer model consistently outperforms the other models across all three metrics. Specifically, LFInformer achieves an MAE of 0.555, an RMSE of 0.623, and a MAPE of 0.585, significantly lower than those of RNN and GRU. Compared with LSTM and Transformer, LFInformer also shows notable advantages in MAE and RMSE, indicating its superior ability to capture the dynamic variations in TC tracks and reduce prediction errors.
Additionally, LFInformer excels in the MAPE metric: its value of 0.585 is significantly lower than those of LSTM and Transformer, demonstrating a strong ability to limit relative errors in TC track prediction. LFInformer not only performs excellently in short-term prediction but also consistently maintains lower error values in medium- and long-term predictions, with a particularly notable improvement in MAPE.
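For reference, MAPE can be computed as in this minimal sketch (names and values are illustrative; whether scores are reported in percent or as fractions is a convention choice, and the sketch uses percent):

```python
def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, in percent
    return 100.0 / len(y_true) * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))

# Example: two observed vs. predicted coordinate values
print(round(mape([20.0, 40.0], [19.5, 41.0]), 3))  # → 2.5
```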
Based on the analysis of these experimental results, LFInformer outperforms traditional deep learning methods across multiple evaluation metrics, demonstrating its superiority in the TC track prediction task. This confirms that by combining the long-sequence modeling capability of Informer, the short-term dependency modeling of LSTM, and the modeling of periodic change through FECAM, the model can effectively improve the accuracy and stability of capturing TC track variations.
Taking the prediction of the TC’s position for the next 48 h as an example, Table 7 compares the RNN, GRU, LSTM, Transformer, and LFInformer models on the objective evaluation metrics MAE, RMSE, and MAPE, analyzing their predictive performance in both the longitude and latitude directions. The results indicate that while traditional models perform well in short-term prediction, their accuracy decreases significantly over longer horizons such as the 48-h TC track prediction. Particularly in the latitude direction, the MAPE values of Transformer and LSTM are 3.221 and 3.132, respectively, while LFInformer achieves a lower value of 2.276, demonstrating a significant improvement in prediction accuracy.
Specifically, LFInformer exhibits lower MAE, RMSE, and MAPE values in both the longitude and latitude directions than the other models. In terms of MAPE in particular, LFInformer achieves the lowest values of 2.557 and 2.276 in the longitude and latitude directions, respectively, indicating its outstanding performance in reducing relative errors. This further demonstrates that LFInformer, by integrating the advantages of LSTM and Informer and introducing the FECAM, can effectively improve the accuracy of TC track prediction, especially in long-term prediction tasks.
Through comparative analysis, LFInformer demonstrated stronger resistance to interference and better performance in the 48-h prediction task. Compared to deep learning models such as LSTM and Transformer, it provides more accurate predictions. Experimental results show that LFInformer not only effectively improves prediction accuracy but also maintains reasonable control over computational complexity, making it suitable for long-term TC track prediction tasks in practical applications.
Beyond prediction accuracy, we further evaluated the model’s efficiency by measuring the average inference time per sample. As shown in Table 8, LFInformer achieves an inference latency of 5.89 ms. While this is slightly slower than the baseline Informer (4.85 ms), a cost attributed to the integration of the FECAM and LSTM components, it remains substantially faster than the standard Transformer (12.56 ms). This validates the design choice of an Informer backbone, which preserves high efficiency without compromising the enhanced predictive capability.
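Per-sample latency figures of this kind can be obtained with a simple timing harness such as the sketch below (illustrative, not our exact benchmarking code; `model_fn` is a stand-in for a trained model’s forward pass):

```python
import time

def average_latency_ms(model_fn, samples, warmup=10):
    """Average per-sample inference time in milliseconds.

    model_fn: any callable applied to one sample (stand-in for a model).
    A few warm-up calls are excluded so one-time costs do not skew the timing.
    """
    for s in samples[:warmup]:
        model_fn(s)
    start = time.perf_counter()
    for s in samples:
        model_fn(s)
    return (time.perf_counter() - start) * 1000.0 / len(samples)

# Example with a trivial stand-in "model"
print(average_latency_ms(lambda s: s * s, list(range(100))))
```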
3.3.3. Visualization Experiment
To further demonstrate the performance of the proposed model in TC track prediction, particularly at abrupt track turning points, we used visualization to compare the predictions of LSTM, GRU, LFInformer, and the observed cyclone track for 6-h, 12-h, and 48-h forecasts.
Figure 9, Figure 10 and Figure 11 show the predicted tracks for TCs “Shanshan” (2018) and “Cimaron” (2018).
In the 6-h prediction results, all models broadly reproduced the synoptic-scale evolution of the TC trajectories; in particular, for TCs “Shanshan” and “Cimaron”, the predicted tracks closely matched the observed paths, and LFInformer achieved the lowest overall displacement error, indicating more accurate short-term TC track prediction and a better ability to capture sudden changes in the TC track.
In the 12-h prediction results, although all models showed some deviation in the predicted tracks, LFInformer still captured the trend of the TC track more effectively. For TC “Cimaron”, LFInformer stood out with the smallest deviation from the actual track, and for TC “Shanshan”, it demonstrated superior performance in medium-range forecasts, capturing localized dynamic turns in the TC track more accurately than the competing models.
For the 48-h prediction results, all models started to show significant deviations in their predicted tracks, especially for TC “Shanshan”, where the predictions from the other models deviated notably from the actual track. However, LFInformer still performed relatively stably, particularly in the case of TC “Cimaron”, where LFInformer’s predicted track differed least from the observed track. This demonstrates LFInformer’s strong stability in long-term time-series prediction, with errors that saturate rather than diverge.
3.3.4. Research on Critical Turning Points
The above results validate that the LFInformer model, through the synergistic design of the FECAM and the ProbSparse Self-Attention mechanism, significantly improves its ability to predict sudden turns in TC tracks. To further demonstrate the model’s attention to track turning points, three representative cases were selected, all of which historically exhibited pronounced track deviations: TC “Hagibis” (2019) [37], TC “Lekima” (2019) [38], and TC “Maria” (2018) [39]. Among them, the turning point of “Lekima” occurred over the ocean, that of “Hagibis” occurred just before landfall, and that of “Maria” occurred after landfall. As shown in Figure 12, the proposed model exhibits notable advantages in predicting track turning points, which are marked by red pentagrams.
For example, in the northwestward turning event of TC “Lekima” shown in Figure 12a, the unexpected breakdown of the subtropical high led to a sudden change in the TC’s direction of movement. Through the collaborative mechanism of FECAM and the ProbSparse Self-Attention, the LFInformer model detects subtle shifts in the ridge line of the subtropical high in advance. As a result, the predicted track closely follows the actual track near the turning point, demonstrating strong robustness. In contrast, LSTM and GRU fail to reflect the shift in the TC’s motion, as their predictions continue along a northward inertial track because they neglect the periodic change of the atmospheric circulation. This case highlights that the ProbSparse Self-Attention mechanism in the proposed model effectively selects the most relevant time steps from long historical sequences through a sparsity strategy, thereby enhancing prediction accuracy. In the case of TC “Hagibis” shown in Figure 12b, the TC’s track exhibited a significant northeastward turn during the forecast period due to the sudden intensification of the westerly jet stream [40]. The predicted track of the proposed LFInformer model (blue) almost completely overlaps with the actual track (red) at this turning point, demonstrating accurate detection of the sudden track turn. This performance is attributed to the model’s internal frequency-domain enhancement module, which amplifies the periodic features of the westerly trough, and the ProbSparse Self-Attention mechanism, which emphasizes critical turning points. In contrast, LSTM and GRU, limited by their short-term memory characteristics, failed to respond effectively to the long-term evolution of the circulation system, resulting in continued northwestward deviation after the turning point and significant departure from the actual track. In the case of TC “Maria” shown in Figure 12c, the TC experienced a turning event after landfall due to the influence of inland convection, which affected pressure, wind speed, and wind direction. The proposed model still managed to accurately track and predict this turning point.
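The sparsity strategy can be illustrated with a toy version of the ProbSparse query-sparsity measure, M(q, K) = max_j(q·k_j/sqrt(d)) - mean_j(q·k_j/sqrt(d)): queries with a high M have concentrated (“active”) attention and are kept, while the rest receive only a trivial output. The sketch below omits the key-sampling step of the full algorithm, and all names are illustrative:

```python
import math

def sparsity_scores(queries, keys):
    """M(q, K) = max over keys of the scaled dot product minus its mean,
    computed for each query; high values indicate concentrated attention."""
    d = len(keys[0])
    scores = []
    for q in queries:
        s = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        scores.append(max(s) - sum(s) / len(s))
    return scores

def top_u_queries(queries, keys, u):
    # Indices of the u most "active" queries (those given full attention)
    m = sparsity_scores(queries, keys)
    order = sorted(range(len(queries)), key=lambda i: m[i], reverse=True)
    return sorted(order[:u])

# A spiky query (index 0) scores higher than a near-uniform one (index 1)
queries = [[10.0, 0.0], [1.0, 1.0]]
keys = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
print(top_u_queries(queries, keys, 1))  # → [0]
```

Restricting full attention to the top-u queries is what reduces the quadratic cost of standard self-attention in the Informer backbone.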
Figure 13 compares the predicted tracks of LSTM, GRU, and the proposed LFInformer with the observed track for TC “Lionrock”. As a canonical abrupt re-curvature event, “Lionrock” provides a stringent test of a model’s sensitivity to turning cues along the track. The cyclone exhibited an abrupt northward re-curvature over the northern South China Sea. While all baselines roughly follow the pre-turn southwesterly leg, they fail to respond promptly to the regime shift at the second turning point. Following that point, the LSTM and GRU models tend to retain a west–northwest heading for several steps, which manifests as a left-sided cross-track bias with respect to the observed track. By contrast, LFInformer changes heading earlier and keeps closer to the observed segment after the turn, yielding a visibly smaller cross-track error. The inset around the second turning point highlights that LFInformer begins its northward adjustment within one to two time steps of the observed turn and then remains slightly to the right of the observed track, avoiding the overshoot seen in the baselines. This behavior is consistent with our design: the FECAM block enhances frequency-domain cues associated with the evolving steering flow, and the ProbSparse self-attention selectively focuses on the most informative historical steps, enabling earlier and more robust detection of the turning signal.
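As a schematic of how frequency-domain channel attention can emphasize channels carrying strong periodic cues, the sketch below weights each channel by the energy of its DCT spectrum. This is a simplified stand-in for the FECAM idea, not the exact module used in LFInformer, and all names are illustrative:

```python
import math

def dct2(x):
    """Type-II DCT of a 1-D sequence (per-channel frequency decomposition)."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n)) for t in range(n))
            for k in range(n)]

def channel_weights(channels):
    """Weight each channel by its DCT spectral energy, squashed to (0, 1)
    with a sigmoid (a simplified frequency-enhanced channel attention)."""
    energies = [sum(c ** 2 for c in dct2(ch)) / len(ch) for ch in channels]
    mean_e = sum(energies) / len(energies)
    return [1.0 / (1.0 + math.exp(-(e - mean_e))) for e in energies]

# The high-amplitude channel receives a larger weight than the flat one
w = channel_weights([[1.0, 2.0, 3.0, 4.0], [0.1, 0.2, 0.1, 0.2]])
print(w[0] > w[1])  # → True
```

In a full model, weights of this kind would rescale the feature channels before they enter the attention layers, amplifying channels with pronounced periodic structure.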
The above experiments demonstrate that the proposed LFInformer performs excellently in predicting complex TC movement tracks. The model’s predicted tracks closely match the actual tracks at turning points, further validating the effectiveness of the FECAM and ProbSparse Self-Attention mechanisms in joint modeling across the frequency and time domains. The synergistic effect of the Frequency-domain Enhanced Channel Attention Mechanism and the ProbSparse Self-Attention mechanism provides a reliable theoretical foundation and technical support for accurately forecasting sudden turns in TC tracks.