Article

DSformer for Ship Motion Prediction: A Statistics-Driven Framework with Environment-Adaptive Hyperparameter Tuning

1 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2 Beijing Institute of Mechanical Engineering, Beijing 100854, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
J. Mar. Sci. Eng. 2026, 14(3), 244; https://doi.org/10.3390/jmse14030244
Submission received: 18 December 2025 / Revised: 15 January 2026 / Accepted: 22 January 2026 / Published: 23 January 2026
(This article belongs to the Section Ocean Engineering)

Abstract

Given the central importance of maritime logistics to global trade, accurate and efficient vessel motion forecasting is essential for strengthening supply chain resilience and improving operational efficiency. However, traditional physical and statistical models often fail to effectively capture the multivariate, noisy, and strongly coupled nature of maritime dynamics. In this manuscript, we adapt the DSformer architecture for ship motion forecasting, leveraging its dual sampling and dual-attention design to address the multi-scale and cross-variable dependencies inherent in maritime data. Across three real-world datasets, the adapted DSformer reduces prediction error by 23% and training time by 70% compared with 13 state-of-the-art (SOTA) baselines. Moreover, we identify a consistent relationship between sampling strategies and sea states: dense sampling performs best under stable conditions, whereas moderately sparse sampling with multi-head attention improves robustness under turbulent environments. These results translate the algorithm's capabilities into practical value for the daily management of maritime logistics. By adapting the architecture to real-world operational settings and optimizing its key parameters, the approach enables efficient, real-time vessel forecasting and decision support across global supply chains.

1. Introduction

With the continued acceleration of economic globalization, cross-border trade has become increasingly interconnected, time-sensitive, and complex. Modern supply chains span continents, linking production facilities, distribution hubs, and consumer markets into vast networks whose stability depends on the seamless flow of logistics. Maritime transport underpins the global trading system, with international shipping accounting for more than 80% of world trade by volume [1], making the stability of international commerce and the broader global economy critically dependent on maritime logistics. From containerized consumer goods to essential bulk commodities such as energy resources and agricultural products, maritime operations play a decisive role in shaping the resilience and competitiveness of global supply chains.
At the same time, this central role exposes maritime transport to increased vulnerability under dynamic ocean conditions. Excessive rolling or pitching can destabilize cargo, forcing vessels to reduce speed or alter routes, thereby disrupting shipping schedules [2]. Severe vessel motions, such as excessive roll, pitch, and heave induced by waves and wind, may lead to container displacement, higher insurance claims, and increased operating costs. Here, roll refers to the rotational motion about the longitudinal axis that can cause lateral cargo instability, pitch denotes the rotational motion about the transverse axis affecting longitudinal load balance, and heave represents the vertical oscillatory motion that increases dynamic loading on cargo securing systems. Persistent oscillations further elevate fuel consumption and complicate berthing operations, triggering port congestion and demurrage. For perishable or high-value cargoes, including food, pharmaceuticals, and energy products, such uncertainties result in significant financial losses and may even pose humanitarian risks. Consequently, effective global logistics management requires not only addressing the technical challenges of vessel motion prediction but also recognizing its strategic importance in ensuring resilient, efficient, and reliable supply chains.
Despite its importance, ship motion forecasting remains a formidable challenge. Accurately modeling complex maritime environments while satisfying the high precision requirements of modern logistics continues to be difficult for existing approaches.
Existing studies on ship motion forecasting can be broadly categorized into physics-based, statistical, and data-driven methods. Physics-based models provide physically interpretable insights into vessel motion and maneuvering behaviors. For instance, Perera and Soares integrated an extended Kalman filter with a physics-based ship maneuvering model to estimate and predict vessel motion states, demonstrating the effectiveness of combining physical dynamics with statistical filtering for capturing nonlinear ship behavior [3]. However, such physics-based or physics-informed approaches often rely on computationally intensive simulations and detailed environmental inputs, which limits their scalability and practicality for real-time deployment in complex maritime environments. Statistical and traditional machine learning methods offer computational efficiency and interpretability. For example, Ma et al. investigated the application of ARIMA models for ship trajectory prediction, demonstrating the effectiveness of classical statistical approaches in capturing short-term trends and local dynamic patterns in vessel motion [4]. In addition, Zheng et al. proposed a support vector machine (SVM)-based collision risk assessment method for maritime traffic, illustrating the practicality of conventional machine learning techniques in safety-related decision-making tasks [5]. However, these methods typically rely on linear assumptions or shallow model structures, which restricts their ability to capture complex nonlinear dynamics and cross-variable interactions in highly coupled maritime environments. More recently, deep learning approaches, particularly recurrent architectures such as RNNs, LSTMs, and GRUs, have improved temporal modeling performance and become common baselines for ship motion prediction. 
For example, Murray and Perera proposed a data-driven framework that leverages historical AIS data to predict future vessel behavior, demonstrating the capability of recurrent neural networks to model sequential dependencies in maritime traffic patterns [6]. Similarly, Wang et al. developed single-input single-output (SISO) and multi-input single-output (MISO) deep learning models for ship roll motion prediction, highlighting the effectiveness of recurrent architectures in learning temporal dynamics from multivariate ship motion signals [7]. Nevertheless, these recurrent models are prone to error accumulation and gradient vanishing when applied to long prediction horizons, which undermines their effectiveness in capturing long-term temporal dependencies.
To address these limitations, we adapt DSformer [8], a non-recurrent, data-driven framework, for ship motion prediction. By integrating a dual-sampling mechanism that captures both global trends and local fluctuations with a time-varying variable attention module, DSformer explicitly models long-range temporal dependencies and cross-variable interactions in multivariate ship motion time series.
Specifically, DSformer represents the first ship motion prediction framework that completely abandons recurrent architectures, thereby avoiding error accumulation and vanishing gradient issues commonly observed in existing deep learning models. In addition, its data-driven dual-attention mechanism is designed to jointly capture temporal autocorrelation and cross-variable dependencies. Furthermore, the proposed framework is validated using real-world datasets encompassing real sea states rather than relying solely on simulated environments, demonstrating robust predictive performance under realistic operating conditions.
The proposed framework is designed to support ship motion forecasting across different temporal horizons, thereby enabling a wide range of practical maritime applications.
For short-term forecasting tasks that demand high precision, such as those observed in electricity load balancing, traffic flow monitoring, and short-horizon weather prediction, accurate modeling requires sensitivity to rapid temporal fluctuations and abrupt dynamic changes. This requirement is addressed through its local sampling mechanism and time-focused attention, which emphasize short-term temporal dependencies and capture high-frequency variations present in these benchmark datasets [8].
For long-term forecasting tasks in domains such as taxation analysis, energy demand forecasting, and disease monitoring, reliable prediction depends on capturing global trends, periodic patterns, and accumulated temporal effects over extended horizons. The global sampling strategy and channel-focused attention of the proposed model enable the model to learn long-range temporal dependencies and cross-variable interactions, providing stable and consistent forecasts that are essential for long-term planning and policy-oriented decision making [8].
By jointly modeling short-term dynamics and long-term temporal structures within a unified non-recurrent framework, the proposed approach offers a flexible and scalable solution that bridges operational safety requirements and strategic logistics optimization in maritime transportation.
Compared with existing machine learning models, the proposed model offers a more reliable and scalable solution for maritime applications by enabling stable long-horizon prediction under real sea conditions, which is critical for operational decision-making in shipping and logistics.
In summary, while this study primarily addresses the technical challenges of ship motion forecasting through advanced machine intelligence, it offers a vital perspective on operational efficiency and safety. By providing high-precision predictions, our model establishes a technical foundation for critical maritime scenarios. Specifically, it has the potential to identify safe operational windows for helicopter landings and offshore wind farm maintenance, assist harbor pilots during complex berthing maneuvers, and mitigate risks of cargo damage due to parametric rolling. This work illustrates that accurate vessel motion prediction is not merely a theoretical pursuit but a key enabler for smarter, safer, and more efficient maritime logistics.

2. Materials and Methods

2.1. DSformer

Accurate ship motion prediction demands models that can simultaneously capture both long-term trajectory trends and fine-scale short-term dynamics, while leveraging correlations across multi-sensor streams under uncertain marine environments. To address this challenge, we propose DSformer, a dual-structured architecture that integrates a Dual Sampling Module and a Dual-Focus Module for hierarchical feature extraction and fusion.
As shown in Figure 1, the Dual Sampling Module simultaneously encodes global and local motion patterns through down-sampling and segment sampling. The former emphasizes low-frequency signals to capture global voyage trends, while the latter preserves high-frequency signatures linked to short-term maneuvers. Within the Dual-Focus Module, temporal attention integrates these complementary representations to uncover latent temporal dependencies, while variable attention captures cross-sensor correlations (e.g., between speed, heading, and wind). This dual-attention mechanism enables robust state representation across varying sea conditions.
The fused features are first aggregated through normalization and residual operations, and subsequently decoded by a multilayer perceptron (MLP) to produce final predictions. The hierarchical and parallel architecture not only enhances predictive accuracy and stability but also improves scalability and facilitates optimization on real-world ship datasets.

2.2. Dual Sampling Module

The primary function of this module is to transform the original multivariate time-series data

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{V1} & x_{V2} & \cdots & x_{VD} \end{bmatrix} \in \mathbb{R}^{V \times D},$$

where $V$ represents the number of variables and $D$ denotes the number of time steps, into two three-dimensional feature tensors $X_{rs} \in \mathbb{R}^{V \times N \times D/N}$ and $X_{ps} \in \mathbb{R}^{V \times N \times D/N}$ (here, the subscripts "rs" and "ps" denote Reduction Sampling and Partitioned Sampling, respectively), each with a distinct emphasis, where $N$ is the number of transformed subsequences. In ship motion prediction, $X_{rs}$ is more suitable for capturing the macro trends of overall ship movement, while $X_{ps}$ highlights local maneuver details. Figure 2 illustrates the two sampling methods.
Reduction Sampling: As shown in Figure 2a, for the $i$-th variable, $i \in \{1, 2, \ldots, V\}$, the raw sequence with an initial length of $D$ is processed using Reduction Sampling to obtain $N$ one-dimensional subsequences, each of length $D/N$. This operation yields a set of subsequences that span extended temporal intervals. To mitigate the information loss introduced by Reduction Sampling, these $N$ subsequences are concatenated to form a feature matrix $X_{rs}^{i} \in \mathbb{R}^{N \times D/N}$. Finally, stacking the feature matrices from the $V$ variables yields the three-dimensional feature tensor $X_{rs} \in \mathbb{R}^{V \times N \times D/N}$. These subsequences are evenly distributed over time, emphasizing long-term trends and mitigating the interference of local noise. Each subsequence characterizes the ship's motion pattern over an extended temporal scale. For the $j$-th subsequence, its main composition is defined as follows:
$$X_{rs}^{ij} = \left[ x_{ij},\; x_{i(j+C)},\; x_{i(j+2C)},\; \ldots,\; x_{i(j+(D/N-1)C)} \right],$$

where $C$ denotes the sampling stride; with $N$ evenly interleaved subsequences of length $D/N$, the stride is $C = N$.
Partitioned Sampling: As shown in Figure 2b, Partitioned Sampling divides the original sequence into $N$ segments based on temporal continuity, with each segment containing $D/N$ consecutive data points. For the $i$-th variable, $i \in \{1, 2, \ldots, V\}$, these $N$ segments are concatenated to form a feature matrix $X_{ps}^{i} \in \mathbb{R}^{N \times D/N}$, avoiding potential information loss. This process is critical for capturing the fine-grained maneuvers and short-term environmental disturbances of the ship. By combining the feature matrices $X_{ps}^{i}$ from the $V$ variables, the final tensor is denoted as $X_{ps} \in \mathbb{R}^{V \times N \times D/N}$. During this process, each segment preserves its local continuity, enabling a comprehensive representation of local features. For the $j$-th subsequence, its main composition is summarized as follows:
$$X_{ps}^{ij} = \left[ x_{i(1+(j-1)D/N)},\; x_{i(2+(j-1)D/N)},\; x_{i(3+(j-1)D/N)},\; \ldots,\; x_{i(jD/N)} \right]$$
In summary, the Dual Sampling Module effectively extracts global trend information while preserving local detailed features, thereby providing rich and complementary input for the subsequent exploration by the Dual-Focus Module.
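Both sampling operations reduce to tensor reshapes. The following is a minimal NumPy sketch of the idea (the function name and toy sizes are ours, not from the paper, and NumPy stands in for the framework's tensor operations):

```python
import numpy as np

def dual_sampling(X, N):
    """Split each variable's length-D series into N subsequences of length D//N.

    Reduction sampling takes every N-th point (stride C = N), emphasizing
    global trends; partitioned sampling takes N contiguous segments,
    preserving local detail. Both results have shape (V, N, D//N).
    """
    V, D = X.shape
    assert D % N == 0, "D must be divisible by N"
    L = D // N
    # Reduction sampling: reshape to (V, L, N), then swap axes so that
    # subsequence j collects indices j, j+N, j+2N, ...
    X_rs = X.reshape(V, L, N).transpose(0, 2, 1)
    # Partitioned sampling: N contiguous segments of length L
    X_ps = X.reshape(V, N, L)
    return X_rs, X_ps

# Toy example: one variable, D = 12 time steps, N = 3 subsequences
X = np.arange(12).reshape(1, 12)
X_rs, X_ps = dual_sampling(X, N=3)
# X_rs[0, 0] is [0, 3, 6, 9] (evenly spread); X_ps[0, 1] is [4, 5, 6, 7] (contiguous)
```

The contrast is visible in the toy output: the reduction-sampled subsequence spans the whole window at coarse resolution, while the partitioned segment covers one local stretch at full resolution.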

2.3. Dual-Focus Module

Ship motion data exhibit pronounced temporal autocorrelation and cross-variable dependencies, offering inherent structural priors that can be used in model design. For example, the sustained positive autocorrelation over hundreds of time steps in Figure 3 motivates modeling long-range temporal dependencies, while the strong positive and negative correlations among motion and velocity variables in Figure 4 motivate explicit cross-variable interaction modeling.
Figure 3 illustrates the autocorrelation structure across key ship motion variables, revealing clear periodicity and long-range dependencies over time. This indicates that temporal dynamics are not random fluctuations but rather coherent patterns driven by vessel motion and environmental forces.
Figure 4 further shows that the cross-variable correlation matrix highlights strong couplings among different physical quantities (e.g., yaw, pitch, and roll angles and velocities). Such correlations reflect the intrinsic coupling among different degrees of freedom in ship dynamics and the environmental conditions that govern them. Since the dataset contains long continuous time series, even weak correlations tend to be statistically significant. Therefore, we focus on the magnitude and sign of the correlation coefficients to highlight practically meaningful inter-variable dependencies rather than reporting p-values.
Building on these data-driven observations, the Dual-Focus Module incorporates two complementary attention mechanisms: the Time-Focused Path, which learns long-range dependencies along the time axis, and the Channel-Focused Path, which captures cross-variable interactions.
This dual-attention design enables the model to effectively leverage the intrinsic structure of maritime time-series data, capturing both the dynamic evolution of vessel motion over time and the inter-variable coupling that arises under complex sea conditions.
The Dual-Focus Module comprises two parallel components, the Time-Focused Path and the Channel-Focused Path, which work together to model temporal dynamics and inter-variable interactions. The Time-Focused Path captures contextual dependencies along the temporal axis, revealing periodic patterns and long-term trends in navigation trajectories. In parallel, the Channel-Focused Path models intrinsic relationships among variables and extracts key interaction signals that are essential for accurate ship state estimation from heterogeneous sensor streams. The outputs of both paths are subsequently fused and normalized before being projected into a two-dimensional tensor for MLP-based decoding.
Time-Focused Path: The tensors $X_{rs}, X_{ps} \in \mathbb{R}^{V \times N \times D/N}$ generated by the Dual Sampling Module are used as inputs. For clarity, the computation is illustrated using $X_{rs}$; the same procedure is applied to $X_{ps}$ to obtain $X_{rs}^{TA}$ and $X_{ps}^{TA}$. In the Time-Focused Path, the three-dimensional input is linearly transformed to generate the query, key, and value matrices. A multi-head temporal attention mechanism then computes the similarity between the query and key matrices and applies weighted aggregation over the value matrix to produce an initial temporal representation:
$$Q = \mathrm{FC}(X_{rs}), \quad K = \mathrm{FC}(X_{rs}), \quad V = \mathrm{FC}(X_{rs})$$

$$\mathrm{Attention}_{\mathrm{Time}} = \mathrm{softmax}\!\left( \frac{QK^{\top}}{\sqrt{d_k}} \right) V$$
Here, $\mathrm{FC}(\cdot)$ represents a fully connected layer, and $\mathrm{softmax}(\cdot)$ denotes the normalized exponential function.
Residual connections and layer normalization are applied to further refine this representation:
$$X_{rs}^{TA} = \mathrm{LayerNorm}\left( X_{rs} + \mathrm{Attention}_{\mathrm{Time}}(Q, K, V) \right) \in \mathbb{R}^{V \times N \times D/N}$$
This helps maintain stable gradients and consistent feature distributions across samples. By capturing long-range temporal dependencies, the Time-Focused Path encodes navigation periodicity, environmental fluctuations, and dynamic ship responses.
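The attention, residual, and normalization steps above can be sketched for a single head in NumPy as follows (a simplified illustration under our own toy shapes; the actual model uses multi-head attention over three-dimensional tensors):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over the subsequence axis.

    X: (N, d) -- N subsequences of one variable, each embedded in d dims.
    The FC projections are modeled as plain weight matrices Wq, Wk, Wv.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))   # (N, N) attention weights
    out = A @ V                           # weighted aggregation over values
    # Residual connection followed by per-row layer normalization
    Y = X + out
    Y = (Y - Y.mean(-1, keepdims=True)) / (Y.std(-1, keepdims=True) + 1e-6)
    return Y, A

rng = np.random.default_rng(0)
N, d = 4, 8                               # toy sizes, not the paper's
X = rng.standard_normal((N, d))
W = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
Y, A = temporal_attention(X, *W)          # Y: (4, 8); each row of A sums to 1
```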
Channel-Focused Path: The tensors $X_{rs}, X_{ps} \in \mathbb{R}^{V \times N \times D/N}$ from the Dual Sampling Module serve as inputs. For clarity, the computation is illustrated using $X_{rs}$; the same procedure is applied to $X_{ps}$, yielding $X_{rs}^{VA}$ and $X_{ps}^{VA}$. The Channel-Focused Path extracts inter-variable dependencies across sensor streams by treating the variable dimension as a sequence for attention computation. Since raw sensor data are stored in a fixed order that is unsuitable for attention operations, we first reorder the variable dimension, enabling each sensor representation to participate independently in the attention process. This transformation ensures accurate similarity estimation among variables and prevents information entanglement due to improper data arrangement.
To make the reordering operation mathematically explicit, the reduction-sampled tensor $X_{rs} \in \mathbb{R}^{V \times N \times D/N}$ is permuted by moving the variable dimension to the last axis. After this permutation, $\tilde{X}_{rs}$ is a tensor of shape $D/N \times N \times V$. In this representation, each variable corresponds to one attention token, while the subsequence and temporal dimensions jointly form the feature embedding associated with each token.
Given the reordered input $\tilde{X}_{rs} \in \mathbb{R}^{D/N \times N \times V}$, we apply linear projections to produce
$$Q = \mathrm{FC}(\tilde{X}_{rs}), \quad K = \mathrm{FC}(\tilde{X}_{rs}), \quad V = \mathrm{FC}(\tilde{X}_{rs})$$
and compute multi-head attention along the variable dimension:
$$\mathrm{Attention}_{\mathrm{Variable}} = \mathrm{softmax}\!\left( \frac{QK^{\top}}{\sqrt{d_k}} \right) V$$
Unlike the Time-Focused Path, this component employs only the attention layer to reduce computational complexity while capturing latent correlations among sensor channels. The resulting representation emphasizes inter-variable interactions, such as those between roll and pitch, that are crucial for accurate ship state estimation.
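The reordering step can be made concrete with a short NumPy sketch (toy shapes are ours; NumPy's `transpose` stands in for the framework's permutation op). After the permutation, each of the $V$ variables is one attention token, and the $QK^{\top}$ core of variable attention becomes a $V \times V$ similarity matrix (shown here with identity projections for brevity):

```python
import numpy as np

# X_rs has shape (V, N, D//N); moving the variable axis last yields
# (D//N, N, V), so each variable becomes one attention token whose
# embedding mixes its subsequence and temporal features.
V, N, L = 9, 4, 6                          # toy sizes, not the paper's
X_rs = np.random.default_rng(1).standard_normal((V, N, L))
X_tilde = np.transpose(X_rs, (2, 1, 0))    # shape (D//N, N, V)

# Treat variables as tokens: one row per variable, features flattened.
tokens = X_tilde.reshape(L * N, V).T       # shape (V, L*N)

# Unnormalized token similarity (the Q K^T core of variable attention,
# with identity projections instead of learned FC layers)
S = tokens @ tokens.T / np.sqrt(tokens.shape[1])   # shape (V, V)
```

In the full model, learned projections replace the identity maps and a softmax over each row of the similarity matrix produces the attention weights.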
Based on $X_{rs}^{TA}, X_{rs}^{VA}, X_{ps}^{TA}, X_{ps}^{VA} \in \mathbb{R}^{V \times N \times D/N}$, the fused features are then obtained using the following formulas, respectively:
$$X_{F}^{rs} = \mathrm{FC}\left( \mathrm{LayerNorm}\left( X_{rs}^{TA} + X_{rs}^{VA} \right) \right)$$

$$X_{F}^{ps} = \mathrm{FC}\left( \mathrm{LayerNorm}\left( X_{ps}^{VA} + X_{ps}^{TA} \right) \right)$$
Information Flow: The outputs $X_{F}^{rs}, X_{F}^{ps} \in \mathbb{R}^{V \times D/N}$ from the Dual-Focus Module are first fused through layer normalization, producing a two-dimensional tensor $X_F$ that encodes both global and local temporal structures, as well as cross-variable relationships:
$$X_F = \mathrm{LayerNorm}\left( X_{F}^{rs} + X_{F}^{ps} \right)$$
This integration allows the subsequent prediction to directly utilize the refined high-level representations for accurate forecasting. Specifically, an MLP decoder maps the refined high-level features to the future ship state:
$$\hat{Y} = \mathrm{MLP}(X_F)$$
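The fusion and decoding steps can be sketched end-to-end in NumPy (a minimal illustration under our own toy shapes and a two-layer MLP; the paper does not specify the decoder's depth or activation):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize over the last axis."""
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def fuse_and_decode(XF_rs, XF_ps, W1, b1, W2, b2):
    """Fuse the two branch outputs and decode with a two-layer MLP.

    XF_rs, XF_ps: (V, L) branch features; returns (V, H) forecasts,
    where H is the prediction horizon. Weight shapes are illustrative.
    """
    XF = layer_norm(XF_rs + XF_ps)        # X_F = LayerNorm(X_F^rs + X_F^ps)
    h = np.maximum(XF @ W1 + b1, 0.0)     # hidden layer with ReLU
    return h @ W2 + b2                    # Y_hat = MLP(X_F)

rng = np.random.default_rng(0)
V, L, Hdim, H = 9, 6, 16, 12              # toy sizes, not the paper's
Y_hat = fuse_and_decode(
    rng.standard_normal((V, L)), rng.standard_normal((V, L)),
    rng.standard_normal((L, Hdim)), np.zeros(Hdim),
    rng.standard_normal((Hdim, H)), np.zeros(H),
)                                          # Y_hat: one length-H forecast per variable
```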

2.4. Loss Function

To improve convergence and robustness, a hybrid loss combining $L_1$ and $L_2$ objectives is used during training. The $L_1$ term emphasizes absolute deviations, which increases resistance to outliers and allows the model to capture abrupt state transitions in ship dynamics [9]. In contrast, the $L_2$ term minimizes the mean squared error, encouraging smooth and stable long-horizon forecasts [10]. The joint loss function is defined as:
$$\mathcal{L} = \lambda L_1 + (1 - \lambda) L_2$$
where λ controls the trade-off between local robustness and global stability.
In this study, the weighting factor λ is treated as a fixed hyperparameter that balances robustness and smoothness. Unless otherwise specified, λ is set to 0.5 for all experiments, assigning equal importance to the L 1 and L 2 components. This choice follows common practice in hybrid loss design for time-series forecasting and robust regression, where equal weighting provides a stable and reliable trade-off between sensitivity to transient disturbances and overall prediction smoothness.
We further verified that the proposed model is not highly sensitive to moderate variations of λ . Preliminary experiments with λ values ranging from 0.3 to 0.7 exhibited similar convergence behavior and prediction accuracy across all three datasets. As a result, a fixed λ is adopted for all datasets to ensure experimental consistency and to avoid dataset-specific tuning, which could otherwise obscure the evaluation of the proposed model.
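The joint objective is straightforward to implement; the sketch below uses MAE for the $L_1$ term and MSE for the $L_2$ term, with the paper's default $\lambda = 0.5$:

```python
import numpy as np

def hybrid_loss(y, y_hat, lam=0.5):
    """L = lambda * L1 + (1 - lambda) * L2, the paper's joint objective.

    L1 is the mean absolute error (robust to outliers); L2 is the mean
    squared error (encourages smooth forecasts). lam = 0.5 weights both
    equally, matching the default used in all experiments.
    """
    l1 = np.mean(np.abs(y - y_hat))    # L1 term
    l2 = np.mean((y - y_hat) ** 2)     # L2 term
    return lam * l1 + (1 - lam) * l2
```

For example, a single prediction that is off by 2 contributes $0.5 \cdot 2 + 0.5 \cdot 4 = 3$ to the loss at $\lambda = 0.5$.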

3. Results

All experiments were implemented using the open-source time-series forecasting framework BasicTS (https://github.com/GestaltCogTeam/BasicTS (accessed on 15 November 2025)), with customized model configurations and extensions to support multivariate ship motion data. The efficacy of the proposed method has been verified under testing conditions distributionally similar to the training phase, ensuring reliable performance within the intended operational domain.

3.1. Experimental Setup

Dataset. To rigorously evaluate the performance of DSformer, we conducted comparative experiments on three real-world ship motion datasets, denoted as Env01, Env02, and Env03 (see Table 1). All datasets were collected from the same vessel to ensure consistent platform characteristics while exposing the model to diverse environmental and operational conditions.
The data were acquired from a 20-ton class vessel with a length of 17 m and a beam of 3 m. A medium-precision fiber-optic inertial navigation system (INS) was installed near the vessel’s center of mass to measure six-degree-of-freedom (6-DOF) motion responses. The INS was a commercially available, factory-calibrated system, and all sensor outputs were recorded using a dedicated onboard data acquisition computer at a sampling frequency of 100 Hz.
All motion quantities were expressed in a body-fixed coordinate frame, where the origin is located at the vessel’s center of mass. The X-axis points forward along the vessel’s longitudinal direction, the Y-axis points to starboard along the transverse direction, and the Z-axis points upward along the vertical direction. The measured motion variables include: (1) roll, pitch, and yaw angles; (2) roll, pitch, and yaw angular rates; (3) surge, sway, and heave velocities. These nine variables jointly characterize the vessel’s translational and rotational motion dynamics.
The three datasets correspond to different environmental conditions and sea states. A summary of the environmental parameters, including wind speed, wind direction, water speed, temperature, humidity and sea conditions, is provided in Table 2. Each dataset was segmented into training, validation, and test subsets using an 8:1:1 ratio.
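Because the data are time series, the 8:1:1 split is applied chronologically rather than by random shuffling, which would leak future information into training. A minimal sketch (the function name is ours):

```python
def chronological_split(series, ratios=(0.8, 0.1, 0.1)):
    """Split a time series chronologically into train/validation/test
    subsets using the paper's 8:1:1 ratio (no shuffling, so the test
    period strictly follows the training period)."""
    n = len(series)
    i = int(n * ratios[0])
    j = i + int(n * ratios[1])
    return series[:i], series[i:j], series[j:]

train, val, test = chronological_split(list(range(100)))
# train covers the first 80 steps, val the next 10, test the final 10
```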
Table 2 summarizes the average environmental conditions of the three datasets. Compared to Env01 and Env02, Env03 experiences substantially higher wind speeds, elevated humidity levels, and minimal current velocities. Such conditions amplify nonlinearly coupled and high-intensity vessel responses [11], increasing the complexity of modeling and prediction. In contrast, Env01 and Env02 were collected under more moderate and balanced conditions, resulting in smoother temporal dynamics [12].
Beyond average environmental levels, dataset complexity in this study is primarily defined in terms of the dynamic variability and non-stationarity of ship motion responses, which directly determine the difficulty of motion forecasting. To quantitatively assess this aspect, we analyze the short-term variation intensity of key motion variables by examining the distributions of the absolute first-order differences in pitch rate and roll rate.
As illustrated in Figure 5, Env03 exhibits significantly broader distributions and heavier tails in both | Δ Pitch Rate| and | Δ Roll Rate| compared with Env01 and Env02, indicating more abrupt temporal changes and stronger non-stationary dynamics. Such behavior reflects more irregular and volatile motion responses induced by complex environmental forcing.
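The variation-intensity statistic underlying Figure 5 is simply the absolute first-order difference of each rate signal, which can be computed as:

```python
import numpy as np

def abs_first_diff(x):
    """|x_{t+1} - x_t|: per-step variation intensity of a motion signal,
    used to compare short-term dynamics across datasets (broader, heavier-
    tailed distributions indicate more abrupt, non-stationary motion)."""
    return np.abs(np.diff(np.asarray(x, dtype=float)))

d = abs_first_diff([0.1, 0.3, 0.2, 0.9])   # -> [0.2, 0.1, 0.7]
```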
Taken together, the harsher environmental conditions summarized in Table 2 and the elevated dynamic variability observed in Figure 5 jointly demonstrate that Env03 represents the most dynamically complex dataset in this study. Consequently, Env03 is treated as a more challenging benchmark for evaluating the robustness and effectiveness of ship motion forecasting models.
Summary statistics, including the mean and standard deviation (Table 3), further highlight these contrasts. The pronounced variability in both environmental and motion characteristics makes this dataset suite particularly suitable for evaluating the robustness and generalization of ship motion forecasting models.
Baselines. To demonstrate the efficiency of DSformer, we selected 13 SOTA models, recognized for their strong performance in time series forecasting, as benchmarks. These include PatchTST [13], LightTS [14], Nlinear [15], Dlinear [15], CycleNet [16], Crossformer [17], iTransformer [18], SOFTS [19], Autoformer [20], Informer [21], Fredformer [22], ETSformer [23], and Pyraformer [24].
Hyperparameters. The primary hyperparameter values for DSformer are shown in Table 4. All models were implemented in PyTorch 2.0.1 and trained for a fixed number of epochs (e.g., 100), with the best-performing checkpoint selected based on validation loss. The optimizer and remaining model hyperparameters retained their default values as defined in the PyTorch and BasicTS frameworks.
Evaluation. Evaluation metrics are vital for assessing model performance. To provide a more comprehensive validation of DSformer, we adopted five metrics in our experiments: MAE, MSE, MAPE, RMSE, and WAPE.
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$

$$\mathrm{WAPE} = \frac{\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|}{\sum_{i=1}^{n} \left| y_i \right|}$$
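The five metrics can be computed together in a few lines of NumPy (the function name is ours; note that MAPE is undefined when any true value $y_i$ is zero):

```python
import numpy as np

def forecast_metrics(y, y_hat):
    """Compute MAE, MSE, RMSE, MAPE (%), and WAPE for a forecast.

    y, y_hat: arrays of true and predicted values. MAPE assumes y has
    no zero entries; WAPE normalizes total absolute error by total
    absolute magnitude, making it robust to near-zero individual values.
    """
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / y))
    wape = np.abs(err).sum() / np.abs(y).sum()
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "WAPE": wape}

m = forecast_metrics([2.0, 2.0], [1.0, 3.0])
# each prediction is off by 1: MAE = 1, MSE = 1, RMSE = 1, MAPE = 50%, WAPE = 0.5
```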

3.2. Prediction Performance

Table 5, Table 6 and Table 7 summarize the predictive performance of DSformer and the 13 baseline models on the Env01, Env02, and Env03 datasets, with the best scores highlighted in red. Figure 6 visualizes the results via a logarithmic radar chart. Several key observations can be made:
Consistent superiority: DSformer achieves the lowest error across all metrics on both Env01 and Env03, demonstrating its superior accuracy advantage.
Global–local balance: Models such as PatchTST and LightTS perform competitively in capturing global trends but still fall short of DSformer in overall predictive accuracy.
Architectural limitations: CycleNet, Crossformer, and iTransformer show higher errors, likely resulting from insufficient integration of local and global information.
Dual-path strength: DSformer’s dual sampling and dual-focus design enables effective extraction of multi-scale temporal and cross-variable dependencies, highlighting its superior performance in long-horizon forecasting.
Robust stability: On Env02, although Crossformer approaches DSformer on certain metrics, DSformer maintains more balanced and stable performance.
Resilience under complexity: On Env03, Fredformer slightly outperforms DSformer in MAPE; however, DSformer matches or exceeds its performance on the other metrics, demonstrating greater robustness under complex conditions.
Overall advantage: Despite close competition on certain metrics, DSformer consistently delivers the most balanced and effective feature integration, highlighting its strength in complex multivariate long-sequence prediction tasks.

3.3. Sampling Interval

The sampling interval is a key hyperparameter in DSformer, controlling the trade-off between capturing dynamic information and suppressing noise. By setting the temporal resolution of the input data, it directly influences the model’s sensitivity to high-frequency details and random disturbances. We analyze its impact on prediction performance, measured by MAPE, across different environmental conditions.
Mild environmental conditions (Env01 & Env02). Under stable conditions with low wind speeds (2.9–3.0 m/s), vessel motion exhibits regular, low-noise dynamics. Dense sampling (interval = 3) effectively captures fine-scale fluctuations, improving predictive accuracy without introducing disruptive noise. As reported in Kwon [25], benign weather conditions minimize external disturbances, making high-frequency sampling advantageous for preserving subtle vessel behaviors. Both Env01 and Env02 exhibit reduced prediction errors at dense sampling intervals, with only marginal fluctuations at longer intervals (5–6).
Complex environmental conditions (Env03). In contrast, the Env03 dataset, characterized by strong winds (8.2 m/s) and high humidity, exhibits substantial high-frequency random fluctuations. While dense sampling preserves fine-grained temporal variations, it also increases the model’s sensitivity to irrelevant perturbations, which can obscure underlying motion patterns. Previous studies [26,27] have shown that harsh environmental conditions significantly increase system uncertainty and measurement noise. A moderately sparse sampling interval (interval = 5) effectively reduces sensitivity to high-frequency disturbances while retaining dominant motion trends, thereby enhancing model robustness and improving prediction accuracy. However, excessively sparse sampling (e.g., interval = 6) begins to remove informative short-term variations, resulting in degraded predictive performance.
To assess the influence of the sampling interval more systematically, we conduct an extended ablation study with intervals ranging from 3 to 8 under diverse environmental conditions (see Figure 7).
The experimental results exhibit a clear unimodal performance pattern. When the sampling interval is small (e.g., 3 or 4), the model operates at high temporal resolution but remains highly sensitive to high-frequency fluctuations and measurement noise, which adversely affects long-horizon prediction stability. As the sampling interval increases, the impact of stochastic disturbances is progressively reduced, leading to improved forecasting accuracy and reaching an optimum at interval = 5.
Beyond this point, further enlarging the sampling interval (interval ≥ 6) causes a noticeable decline in performance. Excessively large intervals overly coarsen the temporal representation, resulting in the loss of informative short-term dynamics that are essential for accurately modeling ship motion responses under rapidly changing sea states. This loss of temporal detail becomes particularly pronounced in complex environments, where abrupt maneuvers and environmental disturbances play a critical role.
Overall, interval = 5 achieves a favorable balance between reducing sensitivity to high-frequency disturbances and preserving meaningful temporal information. This behavior reflects an empirically observed regularization effect induced by temporal sparsification and explains the consistently superior performance achieved at this interval across different datasets.
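The noise-suppression side of this trade-off can be illustrated on synthetic data. The sketch below is purely illustrative (it uses a hypothetical signal, not the Env01–Env03 datasets, and average pooling as a stand-in for interval-based reduction sampling): as the interval grows, the mean absolute first difference (a roughness proxy, echoing the |Δx| statistic of Figure 5) falls, while correlation with the underlying trend is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a ship-motion channel: a slow oscillation
# (the "trend" a forecaster must track) plus white measurement noise.
t = np.arange(10_000)
trend = np.sin(2 * np.pi * t / 500)
signal = trend + rng.normal(0.0, 0.5, t.size)

def reduction_sample(x: np.ndarray, interval: int) -> np.ndarray:
    """Average-pool non-overlapping windows of length `interval`."""
    n = (x.size // interval) * interval
    return x[:n].reshape(-1, interval).mean(axis=1)

for s in (1, 3, 5, 8):
    pooled = reduction_sample(signal, s)
    clean = reduction_sample(trend, s)
    roughness = np.abs(np.diff(pooled)).mean()    # sensitivity to high-frequency noise
    fidelity = np.corrcoef(pooled, clean)[0, 1]   # preservation of the slow trend
    print(f"interval={s}: roughness={roughness:.3f}, trend corr={fidelity:.3f}")
```

On this toy signal, roughness decreases monotonically with the interval while trend correlation improves; the degradation at excessively sparse intervals reported above only appears once the interval becomes comparable to the time scale of the motion itself, which this short example does not reach.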

3.4. Efficiency

This section evaluates the operational efficiency of DSformer relative to a set of representative baseline models including PatchTST, Crossformer, Autoformer, Informer, Fredformer, ETSformer, and Pyraformer. All experiments were performed under identical conditions on an NVIDIA RTX 4090D (24 GB) GPU. We measured the average training time per epoch across the three datasets (Table 8) and integrated multiple error metrics (MAE, MSE, MAPE, and WAPE) to assess overall performance (Figure 8).
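For reference, the error metrics aggregated in this evaluation can be computed with their standard definitions, as in the sketch below; the paper's exact normalization (e.g., whether MAPE is reported as a ratio or a percentage) may differ, and the sample arrays are hypothetical.

```python
import numpy as np

def metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard point-forecast error metrics: MAE, MSE, RMSE, MAPE, WAPE."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    # MAPE is undefined at zero targets; mask them to keep it finite.
    nz = y_true != 0
    mape = np.mean(np.abs(err[nz] / y_true[nz]))
    # WAPE normalizes total absolute error by total absolute actuals,
    # which stays stable even when individual targets are near zero.
    wape = np.sum(np.abs(err)) / np.sum(np.abs(y_true))
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "WAPE": wape}

y = np.array([1.0, 2.0, 4.0, -2.0])   # illustrative targets
p = np.array([1.5, 1.5, 5.0, -1.0])   # illustrative predictions
print(metrics(y, p))
```

Because WAPE pools the normalization across the whole series, it is less volatile than MAPE on motion channels that oscillate around zero (e.g., roll and pitch rates), which is why both are reported.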
The results show that DSformer consistently achieves lower prediction errors while exhibiting the shortest training time per epoch among the compared models. However, training time alone does not fully characterize computational efficiency without considering model complexity. Therefore, we further examine the number of trainable parameters reported in Table 8.
Despite having a comparable or even larger parameter count than several baseline models, DSformer maintains the fastest training speed per epoch. This indicates that the efficiency advantage of DSformer does not arise from aggressive parameter reduction, but rather from a more efficient architectural design that enables effective utilization of model capacity.
The high efficiency primarily stems from the Dual Sampling Module. Unlike Transformer variants that rely on complex embedding structures, which increase computational cost, DSformer reduces the effective input sequence length through two complementary strategies:
Reduction Sampling: enlarges the sampling interval to extract essential global trends while substantially reducing sequence length and computational load.
Partitioned Sampling: divides the sequence into consecutive subsequences to preserve fine-grained local dynamics while improving computational efficiency through parallel processing.
This dual module reduces redundant information, preserves critical multi-scale features, and allows downstream attention modules to operate on more compact and information-rich sequences, thereby improving computational speed and memory efficiency.
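The two strategies above can be sketched as simple tensor reshapes on a univariate series; this is a minimal illustration (in DSformer both operate on multivariate inputs with learned layers downstream, and the function names here are our own):

```python
import numpy as np

def reduction_sampling(x: np.ndarray, k: int) -> np.ndarray:
    """Interval-k down-sampling: k interleaved subsequences x[i::k],
    each a coarse, trend-preserving view of length n // k."""
    n = (x.shape[0] // k) * k
    return x[:n].reshape(-1, k).T

def partitioned_sampling(x: np.ndarray, k: int) -> np.ndarray:
    """k contiguous segments of length n // k, preserving local dynamics
    and enabling parallel processing of the segments."""
    n = (x.shape[0] // k) * k
    return x[:n].reshape(k, -1)

x = np.arange(12)
print(reduction_sampling(x, 3))    # rows: [0 3 6 9], [1 4 7 10], [2 5 8 11]
print(partitioned_sampling(x, 3))  # rows: [0 1 2 3], [4 5 6 7], [8 9 10 11]
```

Either way, each downstream attention module sees sequences of length n/k rather than n, which is where the reduced per-epoch cost in Table 8 comes from.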
Beyond raw speed, DSformer offers practical advantages for maritime deployment:
Low latency and real-time adaptability: Rapid inference capability enables low-latency forecasting under dynamically changing sea states, supporting timely operational decision-making in practical maritime scenarios.
Low resource demand and onboard feasibility: Reduced computational overhead makes DSformer suitable for deployment on compact and resource-constrained maritime systems, which are typically required to operate under harsh, humid, and saline environmental conditions.

3.5. Qualitative Time-Series Prediction Analysis

While aggregated metrics such as MAE and RMSE provide an overall measure of prediction accuracy, they do not fully reflect a model’s ability to capture temporal dynamics and physically meaningful motion patterns. To further assess the practical reliability of the proposed method, we present a time-domain comparison between measured ship motion signals and model predictions in Figure 9.
As shown in Figure 9, DSformer produces predictions that closely follow the measured trajectories in both amplitude and phase, particularly during rapid oscillations and transition regions. In contrast, several baseline models exhibit noticeable phase lag, amplitude attenuation, or over-smoothing effects. These discrepancies, although sometimes masked in averaged error metrics, may accumulate over time and adversely affect real-world applications such as motion compensation, route planning, and control systems.
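The phase lag visible in such time-domain comparisons can be quantified with a lag-scan cross-correlation. The sketch below is not part of the paper's evaluation protocol; it is an illustrative diagnostic on a synthetic measured/predicted pair.

```python
import numpy as np

def phase_lag(measured: np.ndarray, predicted: np.ndarray, max_lag: int = 50) -> int:
    """Lag (in samples) at which the prediction best aligns with the
    measurement; positive means the prediction trails the measured signal."""
    m = measured - measured.mean()
    p = predicted - predicted.mean()
    n = m.size
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(m[max_lag:n - max_lag],
                         p[max_lag + k:n - max_lag + k])[0, 1] for k in lags]
    return int(lags[int(np.argmax(corrs))])

t = np.arange(1000)
measured = np.sin(2 * np.pi * t / 100)
predicted = np.sin(2 * np.pi * (t - 5) / 100)  # a forecaster with a 5-step phase lag
print(phase_lag(measured, predicted))          # → 5
```

A model with near-zero estimated lag and comparable amplitude reproduces the motion in a physically meaningful way, whereas a positive lag of even a few samples can be consequential for closed-loop uses such as motion compensation.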

4. Conclusions

Maritime transportation remains an essential component of global logistics, yet existing physical and statistical approaches continue to struggle with capturing the complexity of multivariate, long-horizon time series under real-world sea conditions. These limitations hinder precise real-time monitoring and adaptive decision-making, both of which are critical for supply chain resilience and operational efficiency. To address this, we introduce DSformer, a ship motion forecasting architecture that combines dual sampling of global and local dynamics with temporal–variable attention to achieve deep multi-source information fusion. By effectively suppressing noise and capturing complex inter-variable couplings, this architecture contributes to advancing intelligent maritime logistics systems.
Extensive experiments conducted on real-world datasets confirm DSformer’s superior performance in prediction accuracy, noise suppression, and trend modeling across both moderate and complex sea states. Sensitivity analysis indicates that dense sampling enhances accuracy in low-noise conditions, while moderate sampling improves robustness in harsh environments, providing practical guidance for operational deployment in logistics.
Beyond predictive accuracy, DSformer demonstrates outstanding runtime efficiency, enabling near-real-time predictions and early warnings. With its low computational requirements, DSformer can be integrated into onboard systems, making it well suited for time-sensitive logistics operations, such as dynamic route adjustments, schedule optimization, and accident prevention under volatile sea states.
In future work, applying DSformer across different vessel types, routes, and environmental conditions will further increase its utility for smart shipping and global supply chain management. A systematic exploration of hyperparameter-performance interactions will inform more precise and adaptive deployment strategies, enabling low-latency, high-efficiency forecasting at scale.
In essence, DSformer not only advances multivariate long-sequence forecasting but also applies its algorithmic innovation to maritime logistics management, offering a scalable solution to enhance supply chain resilience, safety, and efficiency in the context of globalized trade.

Author Contributions

Conceptualization, Y.M. and H.G.; methodology, Y.L. and H.G.; validation, Y.M., J.L. and Y.L.; formal analysis, Y.L.; investigation, Y.M.; resources, P.B., X.P. and Z.C.; writing—original draft preparation, H.G.; writing—review and editing, Y.M.; visualization, H.G.; supervision, Y.L., P.B., X.P. and Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request, subject to any restrictions imposed by confidentiality agreements and institutional policies.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. United Nations Conference on Trade and Development (UNCTAD). Review of Maritime Transport 2022; United Nations: Geneva, Switzerland, 2022.
2. France, W.N.; Levadou, M.; Treakle, T.W.; Paulling, J.R.; Michel, R.K.; Moore, C. An investigation of head-sea parametric rolling and its influence on container lashing systems. Mar. Technol. SNAME News 2003, 40, 1–19.
3. Perera, L.P.; Soares, C.G. Ocean vessel trajectory estimation and prediction based on extended Kalman filter. In The Second International Conference on Adaptive and Self-Adaptive Systems and Applications; Citeseer: University Park, PA, USA, 2010; pp. 14–20.
4. Ma, W.; Pang, Y.; Zou, J.; Zhu, K. The application of ARIMA in short-timely forecasting for motion of planning boat. In Proceedings of the 2011 International Conference on Computer Science and Service System (CSSS), Nanjing, China, 27–29 June 2011; IEEE: New York, NY, USA, 2011; pp. 3651–3654.
5. Zheng, K.; Chen, Y.; Jiang, Y.; Qiao, S. A SVM based ship collision risk assessment algorithm. Ocean Eng. 2020, 202, 107062.
6. Murray, B.; Perera, L.P. Ship behavior prediction via trajectory extraction-based clustering for maritime situation awareness. J. Ocean Eng. Sci. 2022, 7, 1–13.
7. Wang, Y.; Wang, H.; Zhou, B.; Fu, H. Multi-dimensional prediction method based on Bi-LSTMC for ship roll. Ocean Eng. 2021, 242, 110106.
8. Yu, C.; Wang, F.; Shao, Z.; Sun, T.; Wu, L.; Xu, Y. DSformer: A double sampling transformer for multivariate time series long-term prediction. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 3062–3072.
9. Huber, P.J. Robust estimation of a location parameter. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992; pp. 492–518.
10. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Volume 4, p. 738.
11. Chai, W.; Naess, A.; Leira, B.J. Stochastic nonlinear ship rolling in random beam seas by the path integration method. Probabilistic Eng. Mech. 2016, 44, 43–52.
12. Fossen, T.I. Handbook of Marine Craft Hydrodynamics and Motion Control; John Wiley & Sons: Hoboken, NJ, USA, 2011.
13. Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730.
14. Campos, D.; Zhang, M.; Yang, B.; Kieu, T.; Guo, C.; Jensen, C.S. LightTS: Lightweight time series classification with adaptive ensemble distillation. Proc. ACM Manag. Data 2023, 1, 1–27.
15. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11121–11128.
16. Lin, S.; Lin, W.; Hu, X.; Wu, W.; Mo, R.; Zhong, H. CycleNet: Enhancing time series forecasting through modeling periodic patterns. Adv. Neural Inf. Process. Syst. 2025, 37, 106315–106345.
17. Zhang, Y.; Yan, J. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In Proceedings of the Eleventh International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023.
18. Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; Long, M. iTransformer: Inverted transformers are effective for time series forecasting. arXiv 2023, arXiv:2310.06625.
19. Han, L.; Chen, X.Y.; Ye, H.J.; Zhan, D.C. SOFTS: Efficient multivariate time series forecasting with series-core fusion. arXiv 2024, arXiv:2404.14197.
20. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
21. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115.
22. Piao, X.; Chen, Z.; Dong, Y.; Matsubara, Y.; Sakurai, Y. FredNormer: Frequency domain normalization for non-stationary time series forecasting. arXiv 2024, arXiv:2410.01860.
23. Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. ETSformer: Exponential smoothing transformers for time-series forecasting. arXiv 2022, arXiv:2202.01381.
24. Liu, S.; Yu, H.; Liao, C.; Li, J.; Lin, W.; Liu, A.X.; Dustdar, S. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In Proceedings of the Tenth International Conference on Learning Representations (ICLR 2022), Virtual, 25–29 April 2022.
25. Kwon, Y.J. The Effect of Weather, Particularly Short Sea Waves, on Ship Speed Performance. Doctoral Dissertation, Newcastle University, Newcastle upon Tyne, UK, 1981.
26. Novaselic, M.; Mohovic, R.; Baric, M.; Grbić, L. Wind influence on ship manoeuvrability—A turning circle analysis. TransNav Int. J. Mar. Navig. Saf. Sea Transp. 2021, 15, 47–51.
27. Bitner-Gregersen, E.M.; Soares, C.G.; Vantorre, M. Adverse weather conditions for ship manoeuvrability. Transp. Res. Procedia 2016, 14, 1631–1640.
Figure 1. DSformer Framework.
Figure 2. Dual Sampling Module: (a) Reduction Sampling aggregates information across the temporal dimension, resulting in a smoother representation of the input sequence. (b) Partitioned Sampling processes segmented subsequences independently, preserving local temporal variations. The bottom panels show example time-series snippets obtained from the same original signal after applying Reduction Sampling and Partitioned Sampling, respectively, illustrating their different effects on trend preservation and high-frequency variation retention.
Figure 3. Autocorrelation Coefficient for Variables: All variables exhibit strong positive autocorrelation at small lags, indicating pronounced temporal dependence. Significant correlations persist up to several hundred time steps, followed by negative correlations at intermediate lags (approximately 200–300), which reflects oscillatory motion patterns. The re-emergence of positive correlation at larger lags suggests quasi-periodic dynamics commonly observed in ship motion responses.
Figure 4. Cross-Variable Dependencies: The color scale represents the Pearson correlation coefficient, ranging from −1 (strong negative correlation) to +1 (strong positive correlation). The numerical values in each cell indicate the corresponding correlation coefficients. The correlations are computed over the entire dataset to illustrate global inter-variable relationships.
Figure 5. Dynamic complexity comparison across datasets: Distributions of the absolute first-order differences in pitch rate (left) and roll rate (right) for Env01, Env02, and Env03. The magnitude of | Δ x | reflects short-term variation intensity and non-stationarity in motion responses. Env03 exhibits noticeably broader distributions and heavier tails, indicating stronger temporal variability and more abrupt motion changes. These characteristics suggest higher dynamic complexity and increased forecasting difficulty compared with Env01 and Env02.
Figure 6. Distribution of Pitch Rate Data.
Figure 7. Impact of Sampling Interval.
Figure 8. Comparison of Runtime for Different Models.
Figure 9. Time-domain comparison between measured ship motion signals (black dashed line) and predictions produced by different baseline models and DSformer over a representative test segment. The horizontal axis denotes time steps (sampling interval: 0.01 s), while the vertical axis represents the corresponding motion variable. Compared with other models, DSformer achieves closer phase alignment and amplitude consistency with the measured signal, particularly during rapid motion transitions and oscillatory regimes. This qualitative comparison complements the aggregated error metrics by illustrating the temporal fidelity and physical plausibility of the predicted motion trajectories.
Table 1. Statistics of the three datasets.

Datasets | Variates | Length | Granularity
Env01 | 9 | 394,933 | 0.01 s
Env02 | 9 | 632,765 | 0.01 s
Env03 | 9 | 609,805 | 0.01 s
Table 2. Environmental parameters of datasets.

Datasets | Wind Speed | Wind Direction | Water Speed | Temperature | Humidity | Sea Conditions
Env01 | 2.9 m/s | 190° | 0.56 m/s | 18 °C | 55% | 3
Env02 | 3.0 m/s | 59° | 0.16 m/s | 17.2 °C | 51% | 1
Env03 | 8.2 m/s | 20° | 0.015 m/s | 15.8 °C | 91.7% | 3
Table 3. The statistical characteristics of the three datasets.

Variates | Mean (Env01) | Mean (Env02) | Mean (Env03) | Std. Dev. (Env01) | Std. Dev. (Env02) | Std. Dev. (Env03)
Yaw Angle | 165.223 | 184.683 | 160.783 | 95.843 | 90.148 | 90.311
Pitch Angle | 1.240 | 2.301 | 0.997 | 2.102 | 2.297 | 2.106
Roll Angle | 0.41 | 0.498 | 0.968 | 2.359 | 1.014 | 2.355
Yaw Rate | −0.051 | 0.058 | 0.001 | 2.379 | 1.075 | 2.293
Pitch Rate | 0.004 | 0.001 | 0.004 | 3.477 | 0.953 | 3.464
Roll Rate | −0.006 | −0.001 | −0.009 | 4.402 | 1.960 | 4.087
Heave Velocity | −0.111 | −0.287 | −0.056 | 0.444 | 0.411 | 0.347
Sway Velocity | 0.13 | 0.082 | 0.117 | 0.828 | 0.214 | 0.412
Surge Velocity | 3.445 | 4.196 | 2.423 | 2.545 | 3.243 | 1.953
Table 4. Hyperparameters.

Configuration | Values
Optimizer | Adam
Learning rate | 0.001, 0.002, 0.003
Weight decay | 0.0001
Batch size | 64
Learning rate schedule | MultiStepLR
Sampling interval | 3, 5, 6
Prediction length | 1000
Historical length | 3000
Table 5. Prediction performance on Env01.

Env01 | MAE | MSE | MAPE | RMSE | WAPE
DSformer | 0.5712 | 0.8751 | 1.5178 | 0.9355 | 0.6573
PatchTST | 0.5937 | 0.9335 | 1.7214 | 0.9662 | 0.6872
LightTS | 0.6024 | 0.9353 | 1.5632 | 0.9671 | 0.6969
Nlinear | 0.6083 | 0.9502 | 1.6175 | 0.9748 | 0.6961
Dlinear | 0.6162 | 0.955 | 1.5982 | 0.9773 | 0.7008
CycleNet | 0.6252 | 1.0043 | 1.9935 | 1.0021 | 0.7174
Crossformer | 0.6429 | 0.9976 | 2.2827 | 0.9988 | 0.7614
iTransformer | 0.6644 | 1.099 | 2.626 | 1.048 | 0.77
SOFTS | 0.6842 | 1.1587 | 2.6227 | 1.0764 | 0.7934
Autoformer | 0.7101 | 1.1528 | 2.099 | 1.0737 | 0.8216
Informer | 0.7363 | 1.0798 | 2.8752 | 1.0391 | 0.9094
Fredformer | 0.5893 | 0.8971 | 1.6562 | 0.9472 | 0.6793
ETSformer | 0.72 | 1.08 | 2.0487 | 1.0393 | 0.8394
Pyraformer | 0.8047 | 1.166 | 3.2966 | 1.0798 | 0.9984
Table 6. Prediction performance on Env02.

Env02 | MAE | MSE | MAPE | RMSE | WAPE
DSformer | 0.3426 | 0.4391 | 1.8727 | 0.6749 | 0.5513
PatchTST | 0.3661 | 0.4766 | 2.0233 | 0.6956 | 0.5849
LightTS | 0.3731 | 0.455 | 2.163 | 0.6745 | 0.6386
Nlinear | 0.37 | 0.4737 | 2.1607 | 0.6883 | 0.6474
Dlinear | 0.3654 | 0.4643 | 2.1526 | 0.6814 | 0.6363
CycleNet | 0.3813 | 0.5155 | 2.8094 | 0.718 | 0.707
Crossformer | 0.3625 | 0.4404 | 1.8532 | 0.6641 | 0.6062
iTransformer | 0.3793 | 0.5452 | 2.4577 | 0.7384 | 0.6766
SOFTS | 0.396 | 0.562 | 2.5957 | 0.7496 | 0.7054
Autoformer | 0.5344 | 0.5703 | 0.8011 | 0.6513 | 0.6078
Informer | 0.56 | 0.5602 | 0.703 | 2.3371 | 0.8385
Fredformer | 0.3489 | 0.4597 | 1.8166 | 0.6479 | 0.5943
ETSformer | 0.5042 | 0.5871 | 2.6267 | 0.7662 | 0.8281
Pyraformer | 0.8493 | 1.2922 | 3.6268 | 1.1368 | 1.5451
Table 7. Prediction performance on Env03.

Env03 | MAE | MSE | MAPE | RMSE | WAPE
DSformer | 0.2384 | 0.2263 | 1.7644 | 0.4757 | 0.6475
PatchTST | 0.2564 | 0.2454 | 1.7863 | 0.4863 | 0.6657
LightTS | 0.2695 | 0.248 | 1.8067 | 0.4987 | 0.6831
Nlinear | 0.2593 | 0.2561 | 1.8371 | 0.5061 | 0.6812
Dlinear | 0.2662 | 0.2516 | 1.7695 | 0.5016 | 0.6756
CycleNet | 0.2656 | 0.2861 | 2.1653 | 0.5349 | 0.6939
Crossformer | 0.2706 | 0.2405 | 1.7652 | 0.4811 | 0.6798
iTransformer | 0.2692 | 0.2717 | 2.2705 | 0.5212 | 0.7234
SOFTS | 0.2777 | 0.2904 | 2.4102 | 0.5389 | 0.7415
Autoformer | 0.3612 | 0.2965 | 2.4312 | 0.5549 | 0.8763
Informer | 0.3544 | 0.2988 | 2.4428 | 0.5466 | 0.8647
Fredformer | 0.2409 | 0.2282 | 1.6816 | 0.4789 | 0.6386
ETSformer | 0.72 | 1.08 | 2.0487 | 1.0393 | 0.8394
Pyraformer | 0.5946 | 0.6345 | 3.3879 | 0.7966 | 2.0851
Table 8. The mean training time of each epoch.

Model | Speed (s/epoch) | Parameters
DSformer | 32.97 | 2,246,688
PatchTST | 126.94 | 48,448,616
Crossformer | 82.22 | 13,859,632
Autoformer | 131.68 | 466,826
Informer | 85.37 | 1,115,658
Fredformer | 68.74 | 2,210,536
ETSformer | 241.31 | 5,292,154
Pyraformer | 36.68 | 27,049,728

Share and Cite

MDPI and ACS Style

Ge, H.; Li, Y.; Mao, Y.; Li, J.; Chen, Z.; Bai, P.; Peng, X. DSformer for Ship Motion Prediction: A Statistics-Driven Framework with Environment-Adaptive Hyperparameter Tuning. J. Mar. Sci. Eng. 2026, 14, 244. https://doi.org/10.3390/jmse14030244

AMA Style

Ge H, Li Y, Mao Y, Li J, Chen Z, Bai P, Peng X. DSformer for Ship Motion Prediction: A Statistics-Driven Framework with Environment-Adaptive Hyperparameter Tuning. Journal of Marine Science and Engineering. 2026; 14(3):244. https://doi.org/10.3390/jmse14030244

Chicago/Turabian Style

Ge, Haowen, Ying Li, Yuntao Mao, Jian Li, Ziwei Chen, Pengying Bai, and Xueming Peng. 2026. "DSformer for Ship Motion Prediction: A Statistics-Driven Framework with Environment-Adaptive Hyperparameter Tuning" Journal of Marine Science and Engineering 14, no. 3: 244. https://doi.org/10.3390/jmse14030244

APA Style

Ge, H., Li, Y., Mao, Y., Li, J., Chen, Z., Bai, P., & Peng, X. (2026). DSformer for Ship Motion Prediction: A Statistics-Driven Framework with Environment-Adaptive Hyperparameter Tuning. Journal of Marine Science and Engineering, 14(3), 244. https://doi.org/10.3390/jmse14030244

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.
