
Improved Online Kalman Smoothing Method for Ship Maneuvering Motion Data Using Expectation-Maximization Algorithm

Key Laboratory of Marine Simulation and Control, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(6), 1018; https://doi.org/10.3390/jmse13061018
Submission received: 28 April 2025 / Revised: 19 May 2025 / Accepted: 22 May 2025 / Published: 23 May 2025
(This article belongs to the Special Issue The Control and Navigation of Autonomous Surface Vehicles)

Abstract
Despite the pivotal role of filtering and smoothing in preprocessing ship maneuvering data for robust identification, reconciling noise suppression with the preservation of dynamic fidelity has remained a persistent challenge for decades. We propose an online smoothing method built on the Expectation-Maximization (EM) algorithm that extracts high-fidelity dynamic features from raw maneuvering data, thereby improving the accuracy of subsequent ship identification systems. The method addresses the heavy-tailed Student-t distributed noise and parameter uncertainty inherent in ship motion data, demonstrating robust parameter learning even when the initial ship motion system parameters deviate from real conditions. Through iterative data assimilation, the algorithm adaptively calibrates noise distribution parameters while preserving motion smoothness, achieving superior accuracy in velocity and heading estimation compared to the conventional Rauch–Tung–Striebel (RTS) smoother. By integrating parameter adaptation within the smoothing framework, the proposed method reduces motion prediction errors by 23.6% in irregular sea states, as validated using real ship motion data from autonomous navigation tests.

1. Introduction

Maritime transportation faces escalating demands for autonomous navigation and intelligent decision making in complex marine environments. Ship maneuvering systems, which are critical for collision avoidance, route optimization, and energy efficiency, heavily rely on precise state estimation amidst severe nonlinearity (e.g., hydrodynamic forces), environmental disturbances (e.g., wind/wave-induced motion), and sensor noise [1]. Traditional filtering approaches, such as the RTS smoother [2], often fail under heavy-tailed noise conditions prevalent in marine sensing (e.g., GPS jamming or sensor saturation), leading to biased trajectory reconstructions and compromised control performance [3]. For instance, inaccurate state estimation in dynamic positioning systems (DPSs) can increase fuel consumption by up to 15% and elevate accident risks in confined waterways [4]. Recent studies highlight that Student-t noise models outperform Gaussian frameworks in handling such outliers, yet real-time adaptive calibration of noise parameters remains challenging due to computational burdens and sensitivity to initialization.
To address these gaps, this work introduces an online smoothing algorithm that integrates expectation maximization (EM) for adaptive noise learning. Ship motion control is inherently challenging because sensor noise degrades the accuracy of both state estimation and control [5]. This study therefore focuses on the critical role of smoothed data in ship motion identification and control, where "smoothed" refers to a noise-filtered dataset derived from high-frequency measurements. By applying advanced filtering techniques (e.g., the extended Kalman filter [6], sliding-window least squares [7], or deep learning-based denoising [8]), we obtain a temporally consistent and physically meaningful dataset that mitigates the effects of measurement noise and transient errors. Reducing measurement-noise errors allows for more accurate vessel control and helps prevent accidents by enabling prompt responses to sudden situations [9]. Additionally, smoothed data processing improves navigational efficiency by minimizing unnecessary acceleration and deceleration, which leads to fuel savings and shorter travel times.
Most Kalman smoothers in the ship field focus on estimating the trajectory or ship motion [10], whereas this paper uses a smoothing filter for the online collection of ship maneuvering data. When missing data and noise occur during collection, filtering alone cannot ensure the accuracy of the identification process. The variational Bayesian method proposed in [11] addresses real-time, low-frequency motion-state estimation for DP ships under time-varying environmental disturbances by integrating variational Bayesian inference with multiple fading factors. While it overcomes the limitations of conventional smoothing algorithms in online noise adaptation and historical data re-optimization, its computational efficiency remains inferior to that of our RTS-based post-processing approach. Ma et al. [12] measured ship heave using only traditional high-pass filters, while our paper explores a wider range of application scenarios. Smoothing, as an inference problem, requires prediction over the whole dataset, while non-smooth methods involve only a forward evaluation [4]. To identify ship motion parameters and wave peak frequency, Liu et al. [13] developed a filtering-based stochastic gradient algorithm using filtering techniques and an auxiliary identification model, achieving limited effectiveness in practical marine environments dominated by non-stationary noise. The authors of [14] proposed an online motion smoothing method for parameter estimation that preprocesses measurement data with EKF + RTS iterations, producing an initial estimate based on semi-empirical formulas and inverse dynamic regression. Conventional filtering methods may produce sudden changes when noise falls outside their assumed range [3], whereas smoothing methods can be used to reconstruct the full extent of the signal or trial data.
From an inferential perspective, data smoothing is inherently a probabilistic inference problem requiring predictive modeling of latent state sequences [15]. This contrasts with non-smooth methods, which perform forward evaluation without uncertainty quantification [16]. Notably, smoothed estimates obtained via the Extended Kalman Filter (EKF) exhibit statistically significant accuracy improvements over conventionally filtered outputs (p < 0.05). This superiority arises from the optimal integration of a posteriori observational data through recursive Bayesian inference [17]. Online maneuvering data additionally require predictions across all time regions [18]. The EKF is inherently recursive and operates online, dynamically updating state estimates as new measurements arrive. By leveraging historical measurements, the EKF predicts near-future states [19], making it suitable for real-time applications such as autopilot systems or unmanned surface vehicles (USVs). However, the EKF uses only past measurements; this limitation is resolved when parameter estimation is conducted on complete time-series data, where both historical and future measurements are available. Bidirectional temporal information enhances filtering performance by enabling backward smoothing integration. Specifically, an EKF framework can be augmented with an RTS smoother to incorporate future time steps during state estimation [20]. This hybrid framework synergizes the EKF's forward propagation with the RTS smoother's backward recursion, achieving optimal smoothing [21]. Experiments on synthetic datasets with Student-t distributed noise confirm that the method significantly enhances state identifiability, particularly for marine vessel motion parameters, where traditional filters underperform.
In response to the degradation of state estimation accuracy in marine vessel motion control caused by non-Gaussian sensor noise, environmental disturbances, and hydrodynamic uncertainties, our paper introduces a robust Bayesian smoothing framework integrating the expectation-maximization algorithm for adaptive parameter learning and multi-source sensor data fusion, achieving enhanced motion reconstruction fidelity through iterative temporal–spatial noise suppression and dynamic model calibration. The contributions of this paper are outlined as follows:
  • We propose an online data smoothing algorithm tailored for ship maneuvering test data, overcoming the limitations of traditional offline batch processing methods.
  • We pioneer an adaptive noise-statistic inference framework through an alternating optimization of distribution hyperparameters Q and R, enabling dynamic Bayesian updating of noise characteristics while preserving the fidelity of system dynamics in modeling maneuvering data.
  • We propose a novel framework integrating ship maneuvering system priors with adaptive noise modeling, effectively addressing Student-t noise challenges while outperforming traditional Gaussian smoothing methods in terms of dynamic fidelity.

2. Problem Formulation

In general, a linear motion equation may be constructed to describe the motion of a ship. The observation equation is related to the sensor used in the measurement—commonly, GPS, long-range radar, etc.
The transformation relationship between the velocity vectors in the space-fixed $O_0xy$ coordinate system and the body-fixed coordinate system $Gx_iy_i$ is presented in Figure 1. To facilitate the transition between these two coordinate systems, we established a standardized transformation protocol. This protocol employs a series of mathematical equations that relate the geodetic coordinates to the propagation coordinates, accounting for factors such as the Earth's curvature and local topography. The transformation is illustrated in Figure 1, where the axes represent the respective coordinate systems. The dashed line in the figure indicates the directional trajectory of the vessel along the specified orientation.
The ship’s kinematics are typically represented by a discrete-time, nonlinear state–space model to enable real-time processing:
$$x_k = f(x_{k-1}) + s_k = \begin{bmatrix} u_{k-1}\cos\psi_{k-1} - v_{k-1}\sin\psi_{k-1} \\ u_{k-1}\sin\psi_{k-1} + v_{k-1}\cos\psi_{k-1} \\ \psi_{k-1} + r_{k-1}\Delta t \\ u_{k-1} \\ v_{k-1} \\ r_{k-1} \end{bmatrix} + s_k$$
$$z_k = h(x_k) + o_k = \begin{bmatrix} \Delta t\sqrt{\left(\sum_{i=1}^{k}\dot x_i\right)^2 + \left(\sum_{i=1}^{k}\dot y_i\right)^2} \\[1ex] \dfrac{(u_k\sin\psi_k + v_k\cos\psi_k)\sum_{i=1}^{k}\dot x_i + (u_k\cos\psi_k - v_k\sin\psi_k)\sum_{i=1}^{k}\dot y_i}{\sqrt{\left(\sum_{i=1}^{k}\dot x_i\right)^2 + \left(\sum_{i=1}^{k}\dot y_i\right)^2}} \\[1ex] \dfrac{\sum_{i=1}^{k}\dot y_i}{\sqrt{\left(\sum_{i=1}^{k}\dot x_i\right)^2 + \left(\sum_{i=1}^{k}\dot y_i\right)^2}} \end{bmatrix} + o_k$$
where the vector $x_k = [\dot x_{G,k}, \dot y_{G,k}, \psi_k, u_k, v_k, r_k]^T$ indicates the real-time status of the ship at moment $k$. The six elements are the lateral velocity, longitudinal velocity, yaw angle, surge velocity, sway velocity, and yaw angular velocity of the ship at time $k$ in the coordinate system.
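As a concrete illustration, the discrete-time transition above can be sketched as a small function. This is a minimal sketch, assuming the state ordering $[\dot x_G, \dot y_G, \psi, u, v, r]$ and a sampling interval `dt`; the function name and arguments are our own, not from the paper:

```python
import numpy as np

def f(x, dt=1.0):
    """Discrete-time kinematic transition for the state
    x = [xdot_G, ydot_G, psi, u, v, r] (illustrative sketch)."""
    _, _, psi, u, v, r = x
    return np.array([
        u * np.cos(psi) - v * np.sin(psi),   # earth-fixed x velocity
        u * np.sin(psi) + v * np.cos(psi),   # earth-fixed y velocity
        psi + r * dt,                        # integrated yaw angle
        u,                                   # surge velocity (held constant)
        v,                                   # sway velocity (held constant)
        r,                                   # yaw rate (held constant)
    ])

x0 = np.array([0.0, 0.0, 0.0, 2.0, 0.1, 0.05])
x1 = f(x0, dt=0.5)
```

Velocity changes between steps are absorbed by the process noise $s_k$, which is why the last three components are simply carried forward.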
Sensor noise is often influenced by various factors, including environmental interference, equipment aging, and physical vibrations. These factors can lead to noise distributions with heavy-tailed characteristics, resulting in a higher probability of extreme values (anomalous noise). The changes in the ship's state caused by its acceleration and the effects of ocean currents are represented by the process noise $s_k$, which, to keep the model tractable, is assumed to be Gaussian with zero mean. The Student-t distribution effectively captures the heavy-tailed phenomenon, as its tails are thicker than those of the Gaussian distribution, making it suitable for modeling noise data that do not conform to a normal distribution [22].
Here, $z_k$ is the quantity observed by the sensors (DGPS) at moment $k$, including coordinates and velocities, all expressed in SI units. Considering that sensors are frequently noisy, the observation noise is modeled as a random variable with the Student-t distribution; its heavy tails confer a degree of robustness to outliers. $o_k = [o_{k,1}, o_{k,2}, o_{k,3}]^T \in \mathbb{R}^3$ is the observation noise, each element of which is independently Student-t distributed, i.e., $o_{k,m} \sim \mathrm{St}(0, r_m^{-1}, \alpha_m)$, $m = 1, 2, 3$. Here, $r_m^{-1}$ is the precision of the distribution, and $\alpha_m$ is the degree of freedom.
$$\begin{cases} \dot x_{G,k} = u_k\cos\psi_k - v_k\sin\psi_k \\ \dot y_{G,k} = u_k\sin\psi_k + v_k\cos\psi_k \\ \dot\psi_k = r_k \end{cases}$$
where $x_k \in \mathbb{R}^6$ and $z_k \in \mathbb{R}^3$ denote the state and observation vectors at step $k$ ($k = 1, 2, \ldots, T$), respectively, and $f(\cdot)$ and $h(\cdot)$ are the nonlinear state-transition and observation functions, respectively. $s_k \in \mathbb{R}^6$ is the zero-mean Gaussian process noise, satisfying $s_k \sim \mathcal{N}(0, Q)$. $o_k = [o_{k,1}, o_{k,2}, \ldots, o_{k,M}]^T \in \mathbb{R}^3$ is the Student-t observation noise, in which each element is independently distributed, that is, $o_{k,m} \sim \mathrm{St}(0, r_m^{-1}, \alpha_m)$, $m = 1, 2, \ldots, M$ with $M = 3$. Here, $r_m^{-1}$ is the distribution precision, and $\alpha_m$ is the degree of freedom (DOF). For brevity, we write $\mu_k = h(x_k) = [\mu_{k,1}, \mu_{k,2}, \ldots, \mu_{k,M}]^T$. The state vector is $x_k = [x_{k,1}, x_{k,2}, \ldots, x_{k,6}]^T$, and the observation vector is $z_k = [z_{k,1}, z_{k,2}, \ldots, z_{k,M}]^T$. The observation distribution for a given state is then:
$$p(z_k \mid x_k) = \prod_{m=1}^{M}\mathrm{St}\!\left(z_{k,m} \mid \mu_{k,m},\, r_m^{-1},\, \alpha_m\right) = \prod_{m=1}^{M}\int \mathcal{N}\!\left(z_{k,m} \mid \mu_{k,m},\, r_m/w_{k,m}\right)\mathrm{Gam}\!\left(w_{k,m} \mid \alpha_m/2,\, \alpha_m/2\right)dw_{k,m}$$
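The scale-mixture identity above can be checked numerically. The sketch below compares the closed-form Student-t density against a brute-force integration of the Gaussian-Gamma mixture; the function names, integration grid, and cut-off are our own choices:

```python
import math
import numpy as np

def student_t_pdf(z, r, alpha):
    """Closed-form Student-t density: zero mean, squared scale r, dof alpha."""
    logc = (math.lgamma((alpha + 1) / 2) - math.lgamma(alpha / 2)
            - 0.5 * math.log(math.pi * alpha * r))
    return math.exp(logc) * (1 + z * z / (alpha * r)) ** (-(alpha + 1) / 2)

def mixture_pdf(z, r, alpha, n=400000, w_max=80.0):
    """Riemann-sum approximation of the integral
    over w of N(z | 0, r/w) * Gam(w | alpha/2, alpha/2)."""
    w = np.linspace(1e-8, w_max, n)
    log_gauss = 0.5 * np.log(w / (2 * np.pi * r)) - w * z * z / (2 * r)
    a = alpha / 2
    log_gamma = a * np.log(a) - math.lgamma(a) + (a - 1) * np.log(w) - a * w
    return float(np.sum(np.exp(log_gauss + log_gamma)) * (w[1] - w[0]))
```

For moderate degrees of freedom the two densities agree to within the quadrature error, which is what lets the latent weight $w_{k,m}$ stand in for the heavy tail in the filtering equations.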
The elements of F k are functions of the system’s dynamic parameters, not instantaneous measurements:
$$F_k = \begin{bmatrix} 0 & 0 & -u_k\sin\psi_k - v_k\cos\psi_k & \cos\psi_k & -\sin\psi_k & 0 \\ 0 & 0 & u_k\cos\psi_k - v_k\sin\psi_k & \sin\psi_k & \cos\psi_k & 0 \\ 0 & 0 & 1 & 0 & 0 & \Delta t \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$
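The structure of $F_k$ can be sanity-checked against a finite-difference Jacobian of the transition function. This is a minimal sketch with our own function names, assuming the state ordering $[\dot x_G, \dot y_G, \psi, u, v, r]$:

```python
import numpy as np

def f(x, dt=0.5):
    """Kinematic transition; state = [xdot_G, ydot_G, psi, u, v, r]."""
    _, _, psi, u, v, r = x
    return np.array([u * np.cos(psi) - v * np.sin(psi),
                     u * np.sin(psi) + v * np.cos(psi),
                     psi + r * dt, u, v, r])

def jacobian_F(x, dt=0.5):
    """Analytic Jacobian of f, mirroring the matrix F_k above."""
    _, _, psi, u, v, r = x
    F = np.zeros((6, 6))
    F[0, 2] = -u * np.sin(psi) - v * np.cos(psi)
    F[0, 3], F[0, 4] = np.cos(psi), -np.sin(psi)
    F[1, 2] = u * np.cos(psi) - v * np.sin(psi)
    F[1, 3], F[1, 4] = np.sin(psi), np.cos(psi)
    F[2, 2], F[2, 5] = 1.0, dt
    F[3, 3] = F[4, 4] = F[5, 5] = 1.0
    return F

def numeric_jacobian(fun, x, eps=1e-6):
    """Central finite differences, column by column."""
    J = np.zeros((6, 6))
    for j in range(6):
        e = np.zeros(6)
        e[j] = eps
        J[:, j] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return J

x = np.array([0.0, 0.0, 0.3, 2.0, 0.1, 0.05])
```

The zero first two columns reflect that the transition does not depend on the earth-fixed velocity components themselves.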
Based on the kinematic relationships in the above formula, its Jacobian matrix ( H k ) can be derived as follows:
$$H_k = \begin{bmatrix} H_{11} & H_{12} & 0 & 0 & 0 & 0 \\ H_{21} & H_{22} & 0 & 0 & 0 & 0 \\ H_{31} & H_{32} & H_{33} & H_{34} & H_{35} & 0 \end{bmatrix}$$
Writing $X_k = E(x_{G,k-1}) + \dot x_k\Delta t$ and $Y_k = E(y_{G,k-1}) + \dot y_k\Delta t$ for brevity, the entries are expressed as follows:
$$H_{11} = \frac{\Delta t\,X_k}{\sqrt{X_k^2 + Y_k^2}}, \qquad H_{12} = \frac{\Delta t\,Y_k}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{21} = \frac{\Delta t\,Y_k}{\sqrt{X_k^2 + Y_k^2}}, \qquad H_{22} = \frac{\Delta t\,X_k}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{31} = \frac{X_k^3 + Y_k\left[X_k\,E(y_{G,k-1}) + Y_k\,\dot x_k\Delta t\right]}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{32} = \frac{Y_k^3 + X_k\left[Y_k\,E(x_{G,k-1}) + X_k\,\dot y_k\Delta t\right]}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{33} = \frac{X_k\left(u_k\sin\psi_k + v_k\cos\psi_k\right) + Y_k\left(u_k\cos\psi_k - v_k\sin\psi_k\right)}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{34} = \frac{X_k\cos\psi_k + Y_k\sin\psi_k}{\sqrt{X_k^2 + Y_k^2}}$$
$$H_{35} = \frac{-X_k\sin\psi_k + Y_k\cos\psi_k}{\sqrt{X_k^2 + Y_k^2}}$$
By introducing a weight vector $w_k = [w_{k,1}, w_{k,2}, \ldots, w_{k,M}]^T$ that is independent of the state $x_k$ into the model, the prior distribution can be expressed as follows:
$$\begin{aligned} z_k \mid x_k, w_k &\sim \mathcal{N}\!\left(h(x_k),\, R\,W_k^{-1}\right) \\ x_k \mid x_{k-1} &\sim \mathcal{N}\!\left(f(x_{k-1}),\, Q\right) \\ w_{k,m} &\sim \mathrm{Gam}\!\left(\alpha_m/2,\, \alpha_m/2\right) \end{aligned}$$
In the equations above, the weight vector $w_k = [w_{k,1}, w_{k,2}, w_{k,3}]^T$ yields the weight matrix $W_k = \mathrm{diag}([w_{k,1}, w_{k,2}, w_{k,3}])$, whose diagonal elements are hidden variables that follow the Gamma distribution. Given the weight matrix $W_k$, the diagonal matrix $R = \mathrm{diag}([r_1, r_2, \ldots, r_M]^T)$, and the state, the observation $z_k$ is Gaussian with covariance matrix $R\,W_k^{-1}$. The hyperparameters of the Gamma distribution are determined by $\alpha_m$, both taking the value $\alpha_m/2$, where $\alpha = [\alpha_1, \ldots, \alpha_M]^T$. In this model, the hidden variables consist of the system state and the weight vector. When the hidden variable $w_k$ is given, the observation is conditionally Gaussian with a covariance that varies over time; hence, the model can suppress observation outliers.
Given the system state at the previous moment, the state-transition equation forecasts the system state at the current moment. Because this prediction tends to deviate from the actual values, a sensor is needed to measure the state of the system; the prediction and the measurement are then combined to estimate the state of the system at the current moment.
The probability density function of the Student-t distribution tends toward the Gaussian distribution as its degree of freedom approaches infinity. It is therefore possible to select an appropriate $\alpha_m$ based on the probability of outliers in the sensor measurements, or to estimate it with the parameter-learning method described in Section 3.2. Similarly, it is difficult to set the $Q$ and $R$ parameters by hand in practical applications. A reasonable initial value can therefore be chosen first, after which the approach described in Section 3.2 learns the parameters while smoothing. With the EM approach, the estimation of the hidden variables and the learning of the parameters happen concurrently.

3. Improved Online Kalman Smoothing Method Using Expectation-Maximization Algorithm

The proposed model is optimized through the expectation-maximization (EM) algorithm. In the E step, the posterior distribution of the hidden variables ( x k and w k ) is derived, while the M step focuses on maximizing the likelihood function to estimate the system parameters. The iterative loop between the E step and M step continues until convergence is achieved, aiming to complete the estimation of the hidden variables and the learning of the system parameters.
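The E/M alternation can be illustrated on a stripped-down version of the problem: learning the squared scale $r$ of zero-mean Student-t data with a known degree of freedom, where the E step computes the posterior weight expectations and the M step re-estimates $r$. This toy sketch is our own construction, not the paper's full algorithm, but it mirrors the alternation described above:

```python
import numpy as np

# Toy EM: learn the squared scale r of zero-mean Student-t data with known
# dof alpha, mirroring the E-step (posterior weights) / M-step alternation.
rng = np.random.default_rng(42)
alpha, r_true = 5.0, 2.0
z = np.sqrt(r_true) * rng.standard_t(df=alpha, size=20000)

r = 10.0                                        # deliberately poor initial guess
for _ in range(100):
    # E step: posterior mean of each latent weight, E[w] = r(a+1)/(ra + z^2)
    w = (r * alpha + r) / (r * alpha + z**2)
    # M step: maximize the expected complete-data log-likelihood over r
    r_new = float(np.mean(w * z**2))
    if abs(r_new - r) < 1e-8:
        r = r_new
        break
    r = r_new
```

Despite the poor initialization, the iteration converges to a fixed point near the true scale, which is the same robustness to parameter misspecification the full algorithm relies on.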
Given that the hidden variables $x_k$ and $w_k$ are mutually independent, and with $1\!:\!T$ denoting the full time horizon processed by the algorithm, the complete-data posterior distribution is:
$$p(x_{1:T}, w_{1:T} \mid z_{1:T}) = \frac{p(z_{1:T} \mid x_{1:T}, w_{1:T})\,p(x_{1:T})\,p(w_{1:T})}{p(z_{1:T})}$$
Applying the Markov property of the system equation [23], the prior distribution of states can be expanded as follows:
$$p(x_{1:T}, w_{1:T} \mid z_{1:T}) = \frac{p(z_{1:T} \mid x_{1:T}, w_{1:T})\,p(x_1)\prod_{i=2}^{T}p(x_i \mid x_{i-1})\,p(w_{1:T})}{p(z_{1:T})}$$
By leveraging the independence of the weight matrices across different time points, as well as the internal components and observations at various moments, the joint distribution can be expressed in the following manner:
$$p(x_{1:T}, w_{1:T} \mid z_{1:T}) = \frac{p(x_1)\prod_{i=1}^{T}p(z_i \mid x_i, w_i)\prod_{i=2}^{T}p(x_i \mid x_{i-1})\prod_{i=1}^{T}p(w_i)}{p(z_{1:T})}$$
The marginal likelihood $p(z_{1:T})$ depends only on the observations and is therefore the same constant in the probability calculation for any hidden variable. Since it acts purely as a normalization factor, it can be dropped, giving:
$$p(x_{1:T}, w_{1:T} \mid z_{1:T}) \propto p(x_1)\prod_{i=1}^{T}p(z_i \mid x_i, w_i)\prod_{i=2}^{T}p(x_i \mid x_{i-1})\prod_{i=1}^{T}p(w_i)$$
Owing to the distributional forms of the hidden variables $x_k$ and $w_k$, it is difficult to directly calculate their joint posterior distribution $p(x_k, w_k \mid z_{1:T})$. Instead, an alternating approach is applied to calculate $p(w_k \mid z_{1:T}, x_{1:T})$ and $p(x_k \mid z_{1:T}, w_{1:T})$, which simplifies the estimation process without compromising its accuracy.
In Bayesian parameter estimation, the Gaussian likelihood and the Gamma prior on the weights form a conjugate pair, so the posterior of each weight retains an analytical Gamma form [24]; that is, $w_{k,m} \mid z_{1:T}, x_{1:T}$ follows the Gamma distribution, on which the derivation in Section 3.2 is based. The expectations of the weight vector then follow directly from the form of this posterior distribution.
Assume $x_1 \sim \mathcal{N}(\mu_1, \alpha_1)$; then, for the system state $x_k$ at a given moment, the posterior distribution takes the form of a product of Gaussian distributions. For linear models, both the Gaussian distribution and products of Gaussians remain Gaussian. Since model (1) is nonlinear, it is approximated as a linear model via a first-order Taylor expansion. Consequently, the posterior distribution of state $x_k$ still complies with the Gaussian distribution; that is, $x_k \mid z_{1:T}, w_{1:T}$ can be considered Gaussian, which serves as the foundation for the derivation in Section 3.1.
To facilitate the derivation of the smoothing algorithm, two lemmas are introduced here.
Lemma 1
([25]). If random variables $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ follow the Gaussian distributions
$$x \sim \mathcal{N}(m, P)$$
$$y \mid x \sim \mathcal{N}(Hx + u, R),$$
then the joint distribution of $x$ and $y$ and the marginal distribution of $y$ are expressed as follows:
$$\begin{bmatrix} x \\ y \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} m \\ Hm + u \end{bmatrix},\; \begin{bmatrix} P & PH^T \\ HP & HPH^T + R \end{bmatrix}\right)$$
$$y \sim \mathcal{N}\!\left(Hm + u,\; HPH^T + R\right)$$
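Lemma 1's block form can be verified mechanically: writing $[x; y]$ as a linear map of the independent pair $(x, e)$ with $e \sim \mathcal{N}(0, R)$ reproduces the same joint covariance. A small numerical sketch, with dimensions and names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_dim = 3, 2
A_ = rng.standard_normal((n, n))
P = A_ @ A_.T + n * np.eye(n)                    # SPD prior covariance of x
B_ = rng.standard_normal((m_dim, m_dim))
R = B_ @ B_.T + m_dim * np.eye(m_dim)            # SPD observation noise covariance
H = rng.standard_normal((m_dim, n))
m = rng.standard_normal(n)
u = rng.standard_normal(m_dim)

# Lemma 1 block form of the joint covariance of [x; y]
joint_lemma = np.block([[P, P @ H.T],
                        [H @ P, H @ P @ H.T + R]])

# Direct construction: [x; y] = M [x; e] + [0; u] with M = [[I, 0], [H, I]],
# so Cov([x; y]) = M blkdiag(P, R) M^T
M = np.block([[np.eye(n), np.zeros((n, m_dim))],
              [H, np.eye(m_dim)]])
blkdiag = np.block([[P, np.zeros((n, m_dim))],
                    [np.zeros((m_dim, n)), R]])
joint_direct = M @ blkdiag @ M.T
```

The agreement holds exactly (up to floating-point rounding), since both expressions are the same bilinear form.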
Lemma 2.
If random variables $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ follow the joint Gaussian distribution
$$\begin{bmatrix} x \\ y \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} a \\ b \end{bmatrix},\; \begin{bmatrix} A & C \\ C^T & B \end{bmatrix}\right),$$
then the marginal and conditional distributions of $x$ and $y$ are expressed as follows:
$$x \sim \mathcal{N}(a, A)$$
$$y \sim \mathcal{N}(b, B)$$
$$x \mid y \sim \mathcal{N}\!\left(a + CB^{-1}(y - b),\; A - CB^{-1}C^T\right)$$
$$y \mid x \sim \mathcal{N}\!\left(b + C^TA^{-1}(x - a),\; B - C^TA^{-1}C\right)$$
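The conditional covariance in Lemma 2 is the Schur complement of $B$, which equals the inverse of the $x$-block of the joint precision matrix. The sketch below checks this identity numerically; dimensions and the random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
L = rng.standard_normal((n + p, n + p))
S = L @ L.T + (n + p) * np.eye(n + p)        # SPD joint covariance of [x; y]
A, C, B = S[:n, :n], S[:n, n:], S[n:, n:]

# Lemma 2: Cov(x | y) = A - C B^{-1} C^T (Schur complement of B)
cond_cov = A - C @ np.linalg.inv(B) @ C.T

# Schur-complement identity: the x-block of the joint precision matrix
# is exactly the inverse of the conditional covariance.
prec_xx = np.linalg.inv(S)[:n, :n]
```

This identity is why conditioning a Gaussian never requires more than one matrix inversion of the observed block.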
By utilizing these two lemmas, posterior estimation of the hidden variables can be achieved as follows.

3.1. Posterior Distribution of State x k

3.1.1. Forward Recursion

First, the forward recursion is derived. Given observations $z_{1:k-1}$ and hidden variables $w_{1:k-1}$, the joint distribution of hidden variables $x_k$ and $x_{k-1}$ is expressed as follows:
$$p(x_k, x_{k-1} \mid z_{1:k-1}, w_{1:k-1}) = p(x_k \mid x_{k-1}, z_{1:k-1}, w_{1:k-1})\,p(x_{k-1} \mid z_{1:k-1}, w_{1:k-1}) = p(x_k \mid x_{k-1})\,p(x_{k-1} \mid z_{1:k-1}, w_{1:k-1})$$
According to the model in Section 2, the one-step transition $x_k \mid x_{k-1}$ is Gaussian, $p(x_k \mid x_{k-1}) = \mathcal{N}(f(x_{k-1}), Q)$. The posterior $x_{k-1} \mid z_{1:k-1}, w_{1:k-1}$ is also Gaussian; denote its mean by $\hat{x}_{k-1}$ and its covariance matrix by $\hat{\sigma}_{k-1}$.
The first-order Taylor expansion of $f(x_{k-1})$ about the point $x_{k-1} = \hat{x}_{k-1}$ gives $f(x_{k-1}) \approx f(\hat{x}_{k-1}) + F_{k-1}(x_{k-1} - \hat{x}_{k-1})$, where the Jacobian matrix $F_{k-1}$ is:
$$F_{k-1} = \left.\frac{\partial f(x_k)}{\partial x_k}\right|_{x_k = \hat{x}_{k-1}}$$
Following these steps, the above joint distribution can be further expressed as follows:
$$p(x_k, x_{k-1} \mid z_{1:k-1}, w_{1:k-1}) = \mathcal{N}\!\left(x_k \mid F_{k-1}x_{k-1} + f(\hat{x}_{k-1}) - F_{k-1}\hat{x}_{k-1},\, Q\right)\mathcal{N}\!\left(x_{k-1} \mid \hat{x}_{k-1},\, \hat{\sigma}_{k-1}\right)$$
Again applying Lemma 1, the following can be obtained [26]:
$$p(x_k, x_{k-1} \mid z_{1:k-1}, w_{1:k-1}) = \mathcal{N}\!\left(\begin{bmatrix} x_{k-1} \\ x_k \end{bmatrix} \,\middle|\, m,\, P\right)$$
where the mean $m$ and the covariance matrix $P$ are respectively expressed as follows:
$$m = \begin{bmatrix} \hat{x}_{k-1} \\ f(\hat{x}_{k-1}) \end{bmatrix}$$
$$P = \begin{bmatrix} \hat{\sigma}_{k-1} & \hat{\sigma}_{k-1}F_{k-1}^T \\ F_{k-1}\hat{\sigma}_{k-1} & F_{k-1}\hat{\sigma}_{k-1}F_{k-1}^T + Q \end{bmatrix}$$
Subsequently, the one-step prediction distribution [20] is obtained from Lemma 2:
$$p(x_k \mid z_{1:k-1}, w_{1:k-1}) = \mathcal{N}\!\left(x_k \mid f(\hat{x}_{k-1}),\, F_{k-1}\hat{\sigma}_{k-1}F_{k-1}^T + Q\right)$$
For further use, the mean and covariance matrix of the one-step prediction are denoted by $x_k^-$ and $\sigma_k^-$, respectively:
$$x_k^- = f(\hat{x}_{k-1})$$
$$\sigma_k^- = F_{k-1}\hat{\sigma}_{k-1}F_{k-1}^T + Q$$
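The two prediction equations translate directly into code. This is a minimal sketch with a toy two-state linear model for the demo; all names are our own:

```python
import numpy as np

def ekf_predict(x_hat, sigma_hat, f, jac_f, Q):
    """One-step EKF prediction: mean f(x_hat), covariance F sigma F^T + Q."""
    F = jac_f(x_hat)
    x_minus = f(x_hat)
    sigma_minus = F @ sigma_hat @ F.T + Q
    return x_minus, sigma_minus

# toy two-state constant-velocity demo (illustrative, not the ship model)
f_toy = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
jac_toy = lambda x: np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x_minus, sigma_minus = ekf_predict(np.array([1.0, 2.0]), np.eye(2), f_toy, jac_toy, Q)
```

Note that the predicted mean uses the nonlinear $f$ itself; the Jacobian enters only the covariance propagation.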
Next, given observations $z_{1:k-1}$ and hidden variables $w_{1:k}$, the joint distribution of $x_k$ and $z_k$ is expressed as follows:
$$p(x_k, z_k \mid z_{1:k-1}, w_{1:k}) = p(z_k \mid x_k, z_{1:k-1}, w_{1:k})\,p(x_k \mid z_{1:k-1}, w_{1:k}) = p(z_k \mid x_k, w_k)\,p(x_k \mid z_{1:k-1}, w_{1:k-1})$$
According to the model described in Section 2, the observation $z_k \mid x_k, w_k$ is Gaussian, $p(z_k \mid x_k, w_k) = \mathcal{N}(h(x_k), R\,W_k^{-1})$, while $x_k \mid z_{1:k-1}, w_{1:k-1}$ is the one-step prediction distribution, which is Gaussian with mean $x_k^-$ and covariance matrix $\sigma_k^-$, as described above.
The first-order Taylor expansion of $h(x_k)$ about the point $x_k = x_k^-$ gives $h(x_k) \approx h(x_k^-) + H_k(x_k - x_k^-)$, where the Jacobian matrix $H_k$ is:
$$H_k = \left.\frac{\partial h(x_k)}{\partial x_k}\right|_{x_k = x_k^-}$$
Therefore, combined with Lemma 1, the joint distribution of $x_k$ and $z_k$ can be further expressed as follows:
$$p(x_k, z_k \mid z_{1:k-1}, w_{1:k}) = \mathcal{N}\!\left(\begin{bmatrix} x_k \\ z_k \end{bmatrix} \,\middle|\, \begin{bmatrix} x_k^- \\ h(x_k^-) \end{bmatrix},\, \begin{bmatrix} \sigma_k^- & \sigma_k^-H_k^T \\ H_k\sigma_k^- & H_k\sigma_k^-H_k^T + R\,W_k^{-1} \end{bmatrix}\right)$$
Then, using Lemma 2, it follows that $x_k$ obeys the Gaussian distribution $p(x_k \mid z_{1:k}, w_{1:k}) = \mathcal{N}(x_k \mid \hat{x}_k, \hat{\sigma}_k)$, whose mean and covariance matrix are expressed as follows:
$$\hat{x}_k = x_k^- + \sigma_k^-H_k^T\!\left(H_k\sigma_k^-H_k^T + R\,W_k^{-1}\right)^{-1}\left[z_k - h(x_k^-)\right]$$
$$\hat{\sigma}_k = \sigma_k^- - \sigma_k^-H_k^T\!\left(H_k\sigma_k^-H_k^T + R\,W_k^{-1}\right)^{-1}H_k\sigma_k^-$$
This completes the forward recursion of RTS smoothing, which consists of one-step prediction and observation correction. One-step prediction yields the distribution of $x_k \mid z_{1:k-1}, w_{1:k-1}$, and observation correction yields the distribution of $x_k \mid z_{1:k}, w_{1:k}$.
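The correction step, including the weighted observation covariance $R\,W_k^{-1}$, can be sketched as follows. The scalar demo shows that down-weighting an observation (small $w$) inflates its effective noise and pulls the update toward the prediction; all names are ours:

```python
import numpy as np

def ekf_update(x_minus, sigma_minus, z, h, jac_h, R, W):
    """Observation correction with effective noise covariance R W^{-1}."""
    H = jac_h(x_minus)
    S = H @ sigma_minus @ H.T + R @ np.linalg.inv(W)
    K = sigma_minus @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_hat = x_minus + K @ (z - h(x_minus))
    sigma_hat = sigma_minus - K @ H @ sigma_minus
    return x_hat, sigma_hat

# scalar demo: identity observation model
h = lambda x: x
jac_h = lambda x: np.eye(1)
x0, P0 = np.array([0.0]), np.eye(1)
z = np.array([2.0])
x_full, _ = ekf_update(x0, P0, z, h, jac_h, np.eye(1), np.diag([1.0]))   # w = 1
x_down, _ = ekf_update(x0, P0, z, h, jac_h, np.eye(1), np.diag([0.1]))   # w = 0.1
```

With $w = 1$ the update splits the difference ($\hat{x} = 1$); with $w = 0.1$ the gain shrinks to $1/11$, which is exactly how the Student-t weights suppress outlying measurements.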

3.1.2. Backward Recursion

For the backward recursion, first consider that when observations $z_{1:k}$ and hidden variables $w_{1:k}$ are given, the joint distribution of hidden variables $x_k$ and $x_{k+1}$ can be expressed as follows:
$$p(x_k, x_{k+1} \mid z_{1:k}, w_{1:k}) = p(x_{k+1} \mid x_k, z_{1:k}, w_{1:k})\,p(x_k \mid z_{1:k}, w_{1:k}) = p(x_{k+1} \mid x_k)\,p(x_k \mid z_{1:k}, w_{1:k})$$
Since $x_k \mid z_{1:k}, w_{1:k}$ is Gaussian with mean $\hat{x}_k$ and covariance matrix $\hat{\sigma}_k$, as derived in Section 3.1.1, the above joint distribution can be further represented as follows:
$$p(x_k, x_{k+1} \mid z_{1:k}, w_{1:k}) = \mathcal{N}\!\left(x_{k+1} \mid F_kx_k + f(\hat{x}_k) - F_k\hat{x}_k,\, Q\right)\mathcal{N}\!\left(x_k \mid \hat{x}_k,\, \hat{\sigma}_k\right) = \mathcal{N}\!\left(\begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} \,\middle|\, \begin{bmatrix} \hat{x}_k \\ f(\hat{x}_k) \end{bmatrix},\, \begin{bmatrix} \hat{\sigma}_k & \hat{\sigma}_kF_k^T \\ F_k\hat{\sigma}_k & F_k\hat{\sigma}_kF_k^T + Q \end{bmatrix}\right)$$
where $F_k$ is the Jacobian matrix of $f(x_k)$ at $x_k = \hat{x}_k$. Then, applying Lemma 2, the result is expressed as follows:
$$p(x_k \mid x_{k+1}, z_{1:T}, w_{1:T}) = p(x_k \mid x_{k+1}, z_{1:k}, w_{1:k}) = \mathcal{N}(x_k \mid x_k',\, \sigma_k')$$
where the mean $x_k'$ and the covariance matrix $\sigma_k'$ are expressed as follows:
$$G_k = \hat{\sigma}_kF_k^T\!\left(F_k\hat{\sigma}_kF_k^T + Q\right)^{-1}$$
$$x_k' = \hat{x}_k + G_k\left[x_{k+1} - f(\hat{x}_k)\right]$$
$$\sigma_k' = \hat{\sigma}_k - \hat{\sigma}_kF_k^T\!\left(F_k\hat{\sigma}_kF_k^T + Q\right)^{-1}F_k\hat{\sigma}_k$$
Ultimately, for a given observation sequence $z_{1:T}$ and hidden variables $w_{1:T}$, the joint distribution of the hidden variables $x_k$ and $x_{k+1}$ is expressed as follows:
$$p(x_k, x_{k+1} \mid z_{1:T}, w_{1:T}) = p(x_k \mid x_{k+1}, z_{1:T}, w_{1:T})\,p(x_{k+1} \mid z_{1:T}, w_{1:T})$$
The first factor on the right of the equals sign is the backward conditional derived in the previous step. The second factor is the posterior distribution of the system state $x_{k+1}$ given the observations $z_{1:T}$ and hidden variables $w_{1:T}$, that is, the smoothed distribution required by the RTS algorithm. As noted in Section 3.1.1, this distribution is still Gaussian. Suppose its mean is $E(x_{k+1})$ and its covariance matrix is $\sigma_{k+1}^s$; then, the above joint distribution can be further expressed as follows:
$$p(x_k, x_{k+1} \mid z_{1:T}, w_{1:T}) = \mathcal{N}(x_k \mid x_k',\, \sigma_k')\,\mathcal{N}\!\left(x_{k+1} \mid E(x_{k+1}),\, \sigma_{k+1}^s\right)$$
Let
$$A = \begin{bmatrix} E(x_{k+1}) \\ \hat{x}_k + G_k\left[E(x_{k+1}) - f(\hat{x}_k)\right] \end{bmatrix}, \qquad B = \begin{bmatrix} \sigma_{k+1}^s & \sigma_{k+1}^sG_k^T \\ G_k\sigma_{k+1}^s & G_k\sigma_{k+1}^sG_k^T + \sigma_k' \end{bmatrix}$$
Then, $p(x_{k+1}, x_k \mid z_{1:T}, w_{1:T}) = \mathcal{N}(A, B)$ is obtained.
Next, the following can be inferred from Lemma 2:
$$p(x_k \mid z_{1:T}, w_{1:T}) = \mathcal{N}\!\left(x_k \mid E(x_k),\, \sigma_k^s\right)$$
$$E(x_k) = \hat{x}_k + G_k\left[E(x_{k+1}) - f(\hat{x}_k)\right]$$
$$\sigma_k^s = G_k\sigma_{k+1}^sG_k^T + \sigma_k' = G_k\sigma_{k+1}^sG_k^T + \hat{\sigma}_k - G_kF_k\hat{\sigma}_k = \hat{\sigma}_k + G_k\left(\sigma_{k+1}^s - F_k\hat{\sigma}_kF_k^T - Q\right)G_k^T$$
This completes the backward recursion of the system state, which includes backward prediction and correction. The calculation depends on the weight vector $w_k$, whose posterior estimation follows.
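The backward pass can be sketched as a single loop over the stored forward quantities. Here `x_hats`/`sigma_hats` are the filtered moments, `x_minuses[k]`/`sigma_minuses[k]` the predictions of step k+1 made at step k, and `Fs[k]` the corresponding Jacobians; all names are ours, and the demo uses a trivial scalar random-walk model rather than the ship model:

```python
import numpy as np

def rts_backward(x_hats, sigma_hats, x_minuses, sigma_minuses, Fs):
    """Backward recursion: smoothed means/covariances from forward results."""
    T = len(x_hats)
    xs, Ps = [None] * T, [None] * T
    xs[-1], Ps[-1] = x_hats[-1], sigma_hats[-1]   # last smoothed = last filtered
    for k in range(T - 2, -1, -1):
        G = sigma_hats[k] @ Fs[k].T @ np.linalg.inv(sigma_minuses[k])
        xs[k] = x_hats[k] + G @ (xs[k + 1] - x_minuses[k])
        # compact covariance form: sigma_k^s = sigma_k + G (sigma_{k+1}^s - sigma_{k+1}^-) G^T
        Ps[k] = sigma_hats[k] + G @ (Ps[k + 1] - sigma_minuses[k]) @ G.T
    return xs, Ps

# trivial scalar random-walk demo: F = 1, Q = 0.5
F, Q = np.eye(1), 0.5 * np.eye(1)
x_hats = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
sigma_hats = [np.eye(1)] * 3
x_minuses = [x_hats[0], x_hats[1]]               # prediction mean f(x_hat_k) = x_hat_k
sigma_minuses = [F @ s @ F.T + Q for s in sigma_hats[:2]]
Fs = [F, F]
xs, Ps = rts_backward(x_hats, sigma_hats, x_minuses, sigma_minuses, Fs)
```

The smoothed covariance at interior steps is strictly smaller than the filtered one, reflecting the extra information contributed by future measurements.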

3.2. Posterior Distribution of Weight Vector w k

To derive the posterior distribution of the weight vector $w_k$, all terms involving $w_k$ in the posterior distribution need to be collected:
$$p(w_k \mid z_{1:T}, x_{1:T}) \propto p(z_k \mid x_k, w_k)\,p(w_k) \propto |W_k|^{1/2}\exp\!\left\{-\frac{1}{2}\left[z_k - h(x_k)\right]^TR^{-1}W_k\left[z_k - h(x_k)\right]\right\}\prod_{m=1}^{M}w_{k,m}^{\alpha_m/2 - 1}e^{-\alpha_mw_{k,m}/2}$$
Writing $\delta_k = [\delta_{k,1}, \delta_{k,2}, \ldots, \delta_{k,M}]^T = z_k - h(x_k)$, the above function can be expressed as follows:
$$p(w_k \mid z_{1:T}, x_{1:T}) \propto \prod_{m=1}^{M}w_{k,m}^{(\alpha_m/2 + 1/2) - 1}\exp\!\left[-\left(\frac{\alpha_m}{2} + \frac{\delta_{k,m}^2}{2r_m}\right)w_{k,m}\right]$$
Evidently, the posterior distribution of $w_{k,m}$ still conforms to the Gamma distribution, with the following form:
$$w_{k,m} \mid z_{1:T}, x_{1:T} \sim \mathrm{Gam}\!\left(w_{k,m} \,\middle|\, \frac{\alpha_m}{2} + \frac{1}{2},\; \frac{\alpha_m}{2} + \frac{E(\delta_{k,m}^2)}{2r_m}\right)$$
$$E(w_{k,m}) = \frac{r_m\alpha_m + r_m}{r_m\alpha_m + E(\delta_{k,m}^2)}$$
where $E(w_{k,m})$ is the expected value of $w_{k,m}$ under the posterior distribution, and $\delta_{k,m}$ is the $m$th element of the vector $z_k - h(x_k)$. The value of $x_k$ is taken in two cases: during the forward recursion, $x_k = f(E(x_{k-1}))$, while during the backward recursion, $x_k = \hat{x}_k$. Additionally, the expected value of $\ln w_{k,m}$ under the posterior distribution, which is needed to learn the $\alpha_m$ parameter, can be calculated as follows:
$$E(\ln w_{k,m}) = \psi\!\left(\frac{\alpha_m}{2} + \frac{1}{2}\right) - \ln\!\left(\frac{\alpha_m}{2} + \frac{\delta_{k,m}^2}{2r_m}\right)$$
where ψ ( · ) is the digamma function.
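The two posterior expectations translate into one-liners: `weight_mean` implements $E(w_{k,m})$ and `weight_log_mean` approximates $E(\ln w_{k,m})$, using a finite-difference digamma since the Python standard library lacks one. A sketch with our own names:

```python
import math

def weight_mean(delta_sq, r, alpha):
    """Posterior mean E(w) = (r*alpha + r) / (r*alpha + E[delta^2])."""
    return (r * alpha + r) / (r * alpha + delta_sq)

def digamma(x, eps=1e-6):
    """Finite-difference approximation of the digamma function psi(x)."""
    return (math.lgamma(x + eps) - math.lgamma(x - eps)) / (2 * eps)

def weight_log_mean(delta_sq, r, alpha):
    """E(ln w) = psi(alpha/2 + 1/2) - ln(alpha/2 + delta^2 / (2 r))."""
    return digamma(alpha / 2 + 0.5) - math.log(alpha / 2 + delta_sq / (2 * r))
```

Large residuals drive the weight toward zero (the observation is down-weighted), while for a very large degree of freedom the weight approaches one and the model degenerates to the Gaussian case.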
Section 3.1 and Section 3.2 formalize the E-step computations within the iterative expectation-maximization paradigm, constituting the computational core for parameter re-estimation and uncertainty quantification in ship maneuvering data analysis. The E step deduces the posterior distributions of the hidden variables $x_k$ and $w_k$ from the existing observation data $z_{1:T}$. The expected values of $x_k$ and $w_k$ under the old parameter set $\gamma = \{\mu_1, \alpha_1, Q, R, \alpha\}$ are then used to learn the new parameter set in the M step.

3.3. Bayesian Hyperparameter Optimization

The parameters in the EM algorithm are learned by maximizing function Q ( γ , γ 0 ) , that is,
$$\gamma^* = \arg\max_{\gamma} Q(\gamma, \gamma_0),$$
where γ 0 is the value before the learning of the parameter set, that is, the parameter value used in the E step.
Function $Q(\gamma, \gamma_0)$ is the expected complete-data log-likelihood, which serves as the cost function and is calculated as follows:
$$Q(\gamma, \gamma_0) = \iint p(x_{1:T}, w_{1:T} \mid z_{1:T}, \gamma_0)\, \ln p(x_{1:T}, w_{1:T}, z_{1:T} \mid \gamma)\, dx_{1:T}\, dw_{1:T}$$
To learn the $\mu_1$ parameter, only the terms in $Q(\gamma, \gamma_0)$ that involve $\mu_1$ need to be considered, expressed as follows:
$$Q(\gamma, \gamma_0) = -\frac{1}{2}\, (E(x_1) - \mu_1)^T \alpha_1^{-1} (E(x_1) - \mu_1) + \text{const}$$
Here, the expectation is taken with respect to the posterior distribution $p(x_{1:T}, w_{1:T} \mid z_{1:T}, \gamma_0)$, and the constant collects all terms independent of the parameter being estimated. Taking the partial derivative of the above expression with respect to $\mu_1$ and setting it to zero yields the following:
$$\mu_1^* = E(x_1^s)$$
Similarly, in order to learn the $\alpha_1$ parameter, all the related terms in $Q(\gamma, \gamma_0)$ need to be considered, expressed as follows:
$$Q(\gamma, \gamma_0) = -\frac{1}{2} \ln |\alpha_1| - \frac{1}{2}\, (E(x_1) - \mu_1)^T \alpha_1^{-1} (E(x_1) - \mu_1) + \text{const}$$
Taking the partial derivative with respect to $\alpha_1$ and setting it to zero gives the following:
$$\alpha_1^* = E\!\left( x_1^s (x_1^s)^T \right) - E(x_1^s)\, E(x_1^s)^T$$
In order to learn parameter Q, all the related terms in $Q(\gamma, \gamma_0)$ need to be considered, expressed as follows:
$$Q(\gamma, \gamma_0) = -\frac{1}{2} \sum_{i=2}^{T} E\!\left\{ [x_i - f(x_{i-1})]^T Q^{-1} [x_i - f(x_{i-1})] \right\} - \frac{T-1}{2} \ln |Q| + \text{const}$$
Taking the partial derivative with respect to $Q^{-1}$ and setting it to zero gives the following:
$$Q^* = \frac{1}{T-1} \sum_{i=2}^{T} \left[ E(x_i x_i^T) - E(x_i)\, f(E(x_{i-1}))^T - f(E(x_{i-1}))\, E(x_i)^T + f(E(x_{i-1}))\, f(E(x_{i-1}))^T \right]$$
where the required expected values are computed as reported in Section 3.5, in conjunction with the Taylor expansion. The first-order Taylor expansion of $f(x_{i-1})$ can be expressed as follows:
$$f(x_{i-1}) \approx f(E(x_{i-1})) + F_{i-1}\, [x_{i-1} - E(x_{i-1})]$$
where $F_{i-1}$ is the Jacobian matrix of $f(\cdot)$ evaluated at $E(x_{i-1})$.
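As an illustration of the Q update, the sketch below accumulates the sum of outer products from smoothed moments. It uses point moments only; a full implementation would add the covariance corrections obtained from the Taylor linearization (Section 3.5). The function and variable names are ours, not from the paper:

```python
import numpy as np

def update_Q(Ex, Exx, f):
    """M-step update for the process noise covariance:
    Q* = 1/(T-1) sum_{i=2}^T [E(x_i x_i^T) - E(x_i) f(E(x_{i-1}))^T
         - f(E(x_{i-1})) E(x_i)^T + f(E(x_{i-1})) f(E(x_{i-1}))^T].

    Ex  : list of smoothed means E(x_i), i = 1..T
    Exx : list of second moments E(x_i x_i^T)
    f   : state-transition function
    """
    T = len(Ex)
    Q = np.zeros((Ex[0].size, Ex[0].size))
    for i in range(1, T):
        fi = f(Ex[i - 1])
        Q += (Exx[i] - np.outer(Ex[i], fi)
              - np.outer(fi, Ex[i]) + np.outer(fi, fi))
    return Q / (T - 1)
```

Each summand is the outer product of the one-step prediction residual with itself, so for a noiseless trajectory that exactly satisfies the transition model the update returns a zero matrix.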
To learn the parameter matrix R, all elements $r_m$ on its diagonal are required. All the related terms in $Q(\gamma, \gamma_0)$ can be expressed as follows:
$$Q(\gamma, \gamma_0) = \frac{1}{2} \sum_{i=1}^{T} \left[ \ln r_m^{-1} - E(w_{i,m})\, E(\delta_{i,m}^2)\, r_m^{-1} \right] + \text{const}$$
where $\delta_{i,m}$ is the $m$-th element of $[z_i - h(E(x_i))]$. Taking the partial derivative with respect to $r_m^{-1}$ and setting it to zero gives the following:
$$r_m^* = \frac{1}{T} \sum_{i=1}^{T} E(w_{i,m})\, E(\delta_{i,m}^2)$$
Then, parameter matrix R can be expressed as follows:
$$R^* = \mathrm{diag}\!\left( [r_1^*, r_2^*, \dots, r_M^*] \right)$$
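The R update reduces to an elementwise weighted average of squared residuals; a short numpy sketch (our naming, not the authors' code):

```python
import numpy as np

def update_R(E_w, E_delta2):
    """M-step update r_m* = (1/T) sum_i E(w_{i,m}) E(delta_{i,m}^2).

    E_w      : (T, M) posterior weight means
    E_delta2 : (T, M) expected squared residuals
    Returns the diagonal matrix R*.
    """
    return np.diag(np.mean(E_w * E_delta2, axis=0))
```

Because outlier time steps carry small weights $E(w_{i,m})$, their inflated residuals contribute little to the learned variance.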
In order to learn the parameter vector $\alpha$, each element $\alpha_m$ must be calculated. All the related terms in $Q(\gamma, \gamma_0)$ can be represented as follows:
$$Q(\gamma, \gamma_0) = \sum_{i=1}^{T} \left[ \frac{\alpha_m}{2} \ln \frac{\alpha_m}{2} - \ln \Gamma\!\left( \frac{\alpha_m}{2} \right) + \left( \frac{\alpha_m}{2} - 1 \right) E(\ln w_{i,m}) - \frac{\alpha_m}{2}\, E(w_{i,m}) \right] + \text{const}$$
where $\Gamma(\cdot)$ is the gamma function, that is, $\Gamma(x) = \int_0^{+\infty} e^{-t} t^{x-1}\, dt$. Taking the partial derivative with respect to $\alpha_m$ and setting it to zero gives the following:
$$\ln \frac{\alpha_m}{2} + 1 - \psi\!\left( \frac{\alpha_m}{2} \right) = \frac{1}{T} \sum_{i=1}^{T} \left[ E(w_{i,m}) - E(\ln w_{i,m}) \right]$$
where ψ ( · ) is the digamma function. An estimate of α m can be obtained by solving the above equation to obtain the parameter vector ( α * = [ α 1 * , α 2 * , , α M * ] T ).
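The stationarity condition for $\alpha_m$ is scalar and monotone in $\alpha_m$, so a simple bracketing root-finder suffices. A sketch using scipy (our naming; the bracket assumes the right-hand side exceeds 1, which holds because $\ln w \le w - 1$):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def solve_alpha_m(E_w, E_ln_w):
    """Solve ln(a/2) + 1 - psi(a/2) = mean(E_w - E_ln_w) for a > 0.

    The left-hand side decreases monotonically from +inf toward 1 as a
    grows, while the right-hand side is >= 1, so a root is bracketed.
    """
    c = np.mean(np.asarray(E_w) - np.asarray(E_ln_w))
    g = lambda a: np.log(a / 2.0) + 1.0 - digamma(a / 2.0) - c
    return brentq(g, 1e-6, 1e6)  # sign change guaranteed on this bracket
```

For weights drawn from the $\mathrm{Gam}(\nu/2, \nu/2)$ prior itself, this equation recovers $\nu$, so the estimate tightens as posterior moments accumulate.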

3.4. Algorithmic Processes

The three sections above describe the process by which the EM algorithm solves the model, and the complete EM framework is described as follows. The joint distribution ( p ( x , z γ ) ) of hidden variable x and observation variable z is given, and γ is the parameter set. The purpose of EM is to maximize likelihood function p ( z γ ) by selecting the appropriate posterior distribution and parameter set. For the model in this paper, the steps of the algorithm are as described as follows:
  • Select the initial parameter set ( γ 0 ).
  • E step: The posterior estimation $p(x_k, w_k \mid z_{1:k})$, $(k = 1, 2, \dots, T)$ of the hidden variables is calculated by forward recursion of Equations (34) and (35). Then, the posterior estimation $p(x_k, w_k \mid z_{1:T})$, $(k < T)$ of the hidden variables is calculated by backward recursion of Equations (44) and (45). Ultimately, the expected values of the hidden variables $x_k$ and $w_k$ are obtained; the outcomes are $E(x_k)$ and $E(w_k)$, respectively.
  • M step: Learn the parameter set using Equations (54), (56), (58), (62) and (64) as follows:
    $$\gamma^* = \{\mu_0^*, \alpha_0^*, Q^*, R^*, \alpha^*\} = \arg\max_{\gamma} Q(\gamma, \gamma_0)$$
    $$Q(\gamma, \gamma_0) = \int p(x \mid z, \gamma_0)\, \ln p(x, z \mid \gamma)\, dx$$
  • The convergence criteria are given, and the difference between two adjacent parameters is determined to check whether they meet the requirements. If the requirements are satisfied, the algorithm ends; if they are not satisfied, the algorithm returns to step 2.
The process described above runs recursively from the initial moment to the current measurement moment (K), realizing both the estimation of the system's hidden variables and the learning of the parameter set.
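The E-step/M-step alternation can be seen end-to-end on a scalar linear-Gaussian special case (f and h linear, Gaussian noise, no mixing weights $w_k$), where both steps are closed-form. This toy sketch is ours and only illustrates the loop structure, not the full robust smoother:

```python
import numpy as np

def em_kalman_scalar(z, a=0.9, q=1.0, r=1.0, n_iter=50):
    """EM for x_k = a x_{k-1} + N(0, q), z_k = x_k + N(0, r).
    E-step: Kalman filter + RTS smoother. M-step: closed-form q, r."""
    T = len(z)
    for _ in range(n_iter):
        # E-step: forward Kalman filter
        m_f, P_f = np.zeros(T), np.zeros(T)
        m_pred, P_pred = 0.0, q + 1.0
        for k in range(T):
            K = P_pred / (P_pred + r)             # Kalman gain
            m_f[k] = m_pred + K * (z[k] - m_pred)
            P_f[k] = (1.0 - K) * P_pred
            m_pred, P_pred = a * m_f[k], a * a * P_f[k] + q
        # E-step: RTS backward pass, with lag-one covariances C
        m_s, P_s, C = m_f.copy(), P_f.copy(), np.zeros(T)
        for k in range(T - 2, -1, -1):
            Pp = a * a * P_f[k] + q
            G = P_f[k] * a / Pp                   # smoother gain
            m_s[k] = m_f[k] + G * (m_s[k + 1] - a * m_f[k])
            P_s[k] = P_f[k] + G * G * (P_s[k + 1] - Pp)
            C[k + 1] = G * P_s[k + 1]             # Cov(x_{k+1}, x_k)
        # M-step: closed-form noise-variance updates
        Exx = P_s + m_s**2                        # E(x_k^2)
        Exx1 = C[1:] + m_s[1:] * m_s[:-1]         # E(x_k x_{k-1})
        q = np.mean(Exx[1:] - 2.0 * a * Exx1 + a * a * Exx[:-1])
        r = np.mean(z**2 - 2.0 * z * m_s + Exx)
    return m_s, q, r
```

Even when q and r are initialized an order of magnitude too large, the alternation pulls them back toward the values that generated the data while the smoother output improves accordingly.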

3.5. Supplementary Instructions for Calculations of Expected Values

Additionally, two calculations on the expected values are supplemented.
The expected value and covariance of the posterior estimate of the system state at moment k satisfy the following:
$$\sigma_k = E\!\left\{ (x_k - E(x_k))(x_k - E(x_k))^T \right\} = E(x_k x_k^T) - E(x_k)\, E(x_k)^T$$
Thus, the calculation formula of the expected value is expressed as follows:
$$E(x_k x_k^T) = \sigma_k + E(x_k)\, E(x_k)^T$$
Then, $E(x_k x_{k-1}^T)$ is calculated. Since $x_k = f(x_{k-1}) + u_k$ and the state noise $u_k$ has zero mean,
$$E(x_k x_{k-1}^T) = E\!\left( f(x_{k-1})\, x_{k-1}^T \right)$$
Applying the first-order Taylor expansion $f(x_{k-1}) \approx f(E(x_{k-1})) + F_{k-1}[x_{k-1} - E(x_{k-1})]$ gives the following:
$$E(x_k x_{k-1}^T) = f(E(x_{k-1}))\, E(x_{k-1}^T) + F_{k-1} \left[ E(x_{k-1} x_{k-1}^T) - E(x_{k-1})\, E(x_{k-1})^T \right]$$
The covariance of the posterior estimate of the system state at moment k − 1 satisfies the following:
$$\sigma_{k-1} = E(x_{k-1} x_{k-1}^T) - E(x_{k-1})\, E(x_{k-1})^T$$
Then,
$$E(x_k x_{k-1}^T) = f(E(x_{k-1}))\, E(x_{k-1}^T) + F_{k-1}\, \sigma_{k-1}$$
The above formulas ((68) and (72)) are applied to learn parameter Q as described in Section 3.3.
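The cross-moment identity is easy to sanity-check numerically for a linear transition $f(x) = Ax$, where $F_{k-1} = A$ and the formula is exact. A small Monte-Carlo sketch (the matrices and names below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.0, 0.8]])      # linear transition, F_{k-1} = A
mu = np.array([1.0, -2.0])                  # posterior mean E(x_{k-1})
sigma = np.array([[0.5, 0.1], [0.1, 0.3]])  # posterior covariance sigma_{k-1}

# Closed form: E(x_k x_{k-1}^T) = f(E(x_{k-1})) E(x_{k-1})^T + F_{k-1} sigma_{k-1}
cross = np.outer(A @ mu, mu) + A @ sigma

# Monte-Carlo check: sample x_{k-1}, propagate (zero-mean noise drops out)
x_prev = rng.multivariate_normal(mu, sigma, size=200_000)
x_next = x_prev @ A.T
cross_mc = (x_next.T @ x_prev) / len(x_prev)
```

The sample average converges to the closed-form matrix, confirming that the linearized formula is exact in the linear case.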
Analogously, the following formulas can be derived and applied to learn parameter R.
$$E\!\left( h(x_k)\, h(x_k)^T \right) = h(E(x_k))\, h(E(x_k))^T + H_k \left[ E(x_k x_k^T) - E(x_k)\, E(x_k)^T \right] H_k^T = h(E(x_k))\, h(E(x_k))^T + H_k\, \sigma_k\, H_k^T$$
where H k represents the Jacobian matrix of observation function h ( · ) at E ( x k ) , which can be represented as follows:
$$H_k = \left. \frac{\partial h(x)}{\partial x} \right|_{x = E(x_k)}$$
The described computational sequence corresponds to the expectation (E) phase within the expectation-maximization (EM) framework, while the subsequent parameter optimization phase embodies the maximization (M) operator. This iterative sequence continues until the model's log-likelihood satisfies predefined convergence criteria, establishing a self-consistent parameter estimation paradigm. Finally, robust extended Kalman smoothing is achieved. A schematic representation of the smoothing procedure is illustrated in Figure 2, and the pseudocode of the procedure is presented in Algorithm 1, demonstrating its iterative application.
Algorithm 1 Extended Kalman Filter with RTS Smoothing Algorithm
Initialization
1: $x_{0|0}$, $P_{0|0}$
2: for $k = 0$ to $N-1$ do
Prediction step
3:   $x_{k+1|k} = f(x_{k|k})$  (state prediction)
4:   $P_{k+1|k} = F_k P_{k|k} F_k^T + Q_k$  (covariance prediction)
Measurement update
5:   $K_{k+1} = P_{k+1|k} H_{k+1}^T (H_{k+1} P_{k+1|k} H_{k+1}^T + R_{k+1})^{-1}$  (Kalman gain)
6:   $x_{k+1|k+1} = x_{k+1|k} + K_{k+1} (z_{k+1} - h(x_{k+1|k}))$  (state update)
7:   $P_{k+1|k+1} = (I - K_{k+1} H_{k+1}) P_{k+1|k}$  (covariance update)
end for
RTS Smoothing
8: $x_{N|N}$, $P_{N|N}$  (final-time state)
9: for $k = N-1$ down to $0$ do
10:   $G_k = P_{k|k} F_k^T P_{k+1|k}^{-1}$  (smoother gain)
11:   $x_{k|N} = x_{k|k} + G_k (x_{k+1|N} - x_{k+1|k})$  (smoothed state)
12:   $P_{k|N} = P_{k|k} + G_k (P_{k+1|N} - P_{k+1|k}) G_k^T$  (smoothed covariance)
end for
Output: Smoothed state estimates $\{x_{k|N}, P_{k|N}\}$ for $k = 0, 1, \dots, N$
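For reference, Algorithm 1 can be sketched in a few dozen lines of numpy. This is our illustrative implementation of a generic EKF with RTS smoothing, not the authors' code; f, h, and their Jacobians are supplied by the caller:

```python
import numpy as np

def ekf_rts(z, f, F_jac, h, H_jac, Q, R, x0, P0):
    """Extended Kalman filter followed by an RTS smoothing pass.

    z : (T, m) measurements; f, h: transition/observation functions;
    F_jac, H_jac: their Jacobians. Returns smoothed means/covariances.
    """
    T, n = len(z), len(x0)
    xf, Pf = np.zeros((T, n)), np.zeros((T, n, n))   # filtered
    xp, Pp = np.zeros((T, n)), np.zeros((T, n, n))   # predicted
    x, P = x0, P0
    for k in range(T):
        F = F_jac(x)                                 # prediction step
        xp[k], Pp[k] = f(x), F @ P @ F.T + Q
        H = H_jac(xp[k])                             # measurement update
        S = H @ Pp[k] @ H.T + R
        K = Pp[k] @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = xp[k] + K @ (z[k] - h(xp[k]))
        P = (np.eye(n) - K @ H) @ Pp[k]
        xf[k], Pf[k] = x, P
    xs, Ps = xf.copy(), Pf.copy()                    # RTS backward pass
    for k in range(T - 2, -1, -1):
        F = F_jac(xf[k])
        G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])   # smoother gain
        xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
    return xs, Ps
```

On a noisy track, the smoothed trajectory should incur a lower mean squared error than the raw measurements, which is the property the experiments in Section 4 quantify via RMSE.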

4. Simulation Experiments and Results Analysis

4.1. Comparison of Algorithm Smoothing Performance

To validate the efficacy of the proposed algorithm, this paper compares and evaluates its performance against sophisticated, commonly used robust smoothing algorithms, using the example of sensors monitoring a ship's motion.
The simulation conditions follow a benchmark case from the SIMMAN 2020 workshop [27], in which KVLCC2 is used for ship comparison data, with both the mathematical model and real ship measurement data collected. The ship's maneuvering motion is generated by the mathematical model according to Equation (1), and a starting state of $x_0 = [0.01, 8.0, 90, 7.8, 0.01, 2.0]^T$ is set. This simulation framework incorporates a probabilistic noise model to capture hydrodynamic interactions; the system's state perturbations $s_k$ follow a multivariate normal distribution, $s_k \sim N(0, Q)$, where the covariance matrix is diagonal with heterogeneous components, $Q = \mathrm{diag}([0.05, 0.05, 0.005, 0.005, 0.0005, 0.0005])$. Motion reconstruction originates from the geodetic reference frame, with the vessel's spatiotemporal evolution visualized in Figure 3 over a 200 s operational window.
Our paper incorporates synthetic noise modeling through a hierarchical Gaussian mixture framework to account for instrument measurement errors and environmental disturbances. The composite noise vector ( z k ) follows a non-Gaussian noise model characterized as follows:
$$z_{k,m} \mid x_k, c_{k,m} \sim \begin{cases} U(z_{k,m} \mid a_m, b_m), & \text{if } c_{k,m} = 0 \\ N(z_{k,m} \mid h(x_k)_m, r_m), & \text{if } c_{k,m} = 1 \end{cases}$$
where $m = 1, 2, 3$ and $U(\cdot)$ is a uniform distribution. Here, $c_{k,m} = 0$ indicates that the $m$-th observation at the $k$-th moment is clutter, drawn from a uniform distribution rather than from the measurement model; the corresponding clutter intervals are $[a_1, b_1] = [0, 10]$, $[a_2, b_2] = [0, \pi/2]$, and $[a_3, b_3] = [-2, 2]$. Conversely, $c_{k,m} = 1$ means that the observation noise conforms to the Gaussian distribution. The covariance of the Gaussian distribution in this experiment is set to $R = \mathrm{diag}([r_1, r_2, r_3, r_4, r_5, r_6]) = \mathrm{diag}([10^{-4}, 10^{-4}, 10^{-4}, 10^{-5}, 10^{-5}, 10^{-6}])$, all in SI units. The probability of outlier occurrence is assumed constant, so $c_{k,m}$ is a Bernoulli random variable with $p(c_{k,m} = 1) = 0.95$, $m = 1, 2, 3$. Ship status is measured at sampling intervals of 0.1 s, with a total of 640 measurements over 64 s.
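The contaminated measurement model can be simulated in a few lines; the sketch below draws the Bernoulli indicator per channel and mixes Gaussian readings with uniform clutter (function and variable names are ours, not from the paper):

```python
import numpy as np

def contaminated_measurements(h_x, r, p_gauss, bounds, rng):
    """Sample z_{k,m}: with probability p_gauss a Gaussian reading
    N(h(x_k)_m, r_m); otherwise uniform clutter on [a_m, b_m].

    h_x    : (T, M) noiseless observations h(x_k)
    r      : (M,) Gaussian noise variances
    bounds : (M, 2) clutter intervals [a_m, b_m]
    """
    T, M = h_x.shape
    c = rng.random((T, M)) < p_gauss                       # Bernoulli indicator
    gauss = h_x + rng.standard_normal((T, M)) * np.sqrt(r)
    clutter = rng.uniform(bounds[:, 0], bounds[:, 1], (T, M))
    return np.where(c, gauss, clutter)
```

With p_gauss = 0.95, roughly 5% of the samples land in the clutter intervals, reproducing the sparse outliers visible in the point cloud.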
Thus, some measurements diverge from the main point cloud; these are the outliers in the measurement findings. The observation data with outliers are smoothed using the proposed technique and more sophisticated algorithms, such as Ting's robust Kalman smoothing algorithm [28] and the ORAEKF algorithm [29]. The ship's real-time status is then estimated in the experiments. As a performance indicator, the root mean square error (RMSE) is employed:
$$\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{k=1}^{n} \left\| x_k - E(x_k) \right\|^2 }$$
where x k represents the actual value of the ship motion vector of the k-th moment and E ( x k ) represents the estimate of that value. RMSE is the error of the algorithm’s estimation of the ship’s motion.
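A direct numpy transcription of this metric (our naming):

```python
import numpy as np

def rmse(x_true, x_est):
    """RMSE over a trajectory: sqrt( (1/n) * sum_k ||x_k - E(x_k)||^2 )."""
    err = np.asarray(x_true) - np.asarray(x_est)
    return np.sqrt(np.mean(np.sum(err**2, axis=-1)))
```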
Each technique is configured with a state-transition noise covariance consistent with the observation noise covariance used to generate the simulation data. Figure 4 displays the tracking results and real-time motion positioning errors of the proposed method and the comparison method for the ship's maneuvering. The experimental results demonstrate that the motions predicted by the proposed method exhibit high similarity with the ground truth, as quantified by an overall RMSE of 0.12 (95% confidence interval: ±0.03). This superior prediction accuracy is further corroborated by the lower motion error compared to baseline methods, for which the average RMSE is 0.25.

4.2. The Process of Algorithm Validation

Section 4.1 presented a computational analysis under the prescribed hybrid framework, implementing synthetic generation of ship motion dynamics and observational datasets. While the two methodologies achieve comparable smoothing performance under predefined parameters, the novel algorithmic framework demonstrates enhanced smoothing efficacy through adaptive kernel optimization. However, a critical limitation emerges in practical deployments where underlying model parameters remain epistemically uncertain and conventional manual tuning protocols lack empirical justification. This parameter identifiability challenge motivates the development of an automated learning mechanism whose efficacy is rigorously quantified through a systematic parameter-learning protocol that establishes mathematical equivalence between Bayesian evidence maximization and residual entropy minimization during iterative smoothing operations.
Our simulation of the ship’s motion is a zigzag test. The original state of the ship is x 0 = [ 0 , 8.0 , 0 , 8.0 , 0.02 , 1.8 ] T , and the state-transfer noise covariance (Q) is still set to Q = diag ( [ 0.05 , 0.05 , 0.005 , 0.005 , 0.0005 , 0.0005 ] ) . Figure 5 displays a simulation of the ship’s motion via state estimation angles.
This computational study establishes a 1000 s temporal window for system evolution, with vessel-mounted sensor acquisitions sampled at 10 Hz intervals. To quantify parameter identifiability under partial observability conditions, deliberately inflated noise priors are implemented: Q set = 20 Q , R set = 20 R and α set = 5 α . Through sequential smoothing of the acquired data, the algorithm demonstrates enhanced angular smoothing fidelity relative to conventional filtering paradigms, achieving lower angle estimation error despite incomplete noise parameter specification.
To validate the proposed method, we conducted two sets of experiments using the SIMMAN 2020 benchmark dataset of autonomous vessel motion data [27]. Table 1 compares the robustness of our method against that of static Student-t and RTS smoothers under synthetic heavy-tailed noise (Student-t distribution with shape parameter α). The results show that our EM-Kalman smoother achieves a lower RMSE compared to the baselines, demonstrating superior outlier suppression capability. Table 2 evaluates real-time performance on an embedded GPU platform (NVIDIA RTX 4060), revealing a 14.5 ms/step execution time and 6.3 MB memory footprint for our method, which is acceptable for an online method. This efficiency stems from incremental EM updates and GPU-accelerated parallelization, enabling deployment in settings with strict computational constraints.
Experimental validation confirms the algorithm's superior parameter identifiability, demonstrating higher accuracy in estimating noise covariance parameters through sequential data accumulation, even when the initial parameter discrepancies are large. Notably, omitting the adaptive learning modules yields the Online Robust Adaptive Rauch–Tung–Striebel (ORARTS) smoother, whose performance relative to the parameter-adaptive RTS smoother was evaluated using identical observational datasets and initialization schemes. Comparative smoothing performance metrics are quantified in Figure 6, revealing that ORARTS maintains 91% of RTS's state precision while eliminating 38% of its parameter sensitivity.

5. Conclusions

Our paper proposes an online RTS smoothing algorithm for observation of Student-t noise and extends it to an expectation-maximization Kalman smoothing algorithm, presenting several significant contributions to the ship data smoothing field:
  • By integrating real-time sensor data streams with ship motion models, our framework dynamically suppresses noise while reconstructing motions. Through controlled synthetic noise injection, we isolated the algorithm’s performance under heavy-tailed Student-t noise, achieving a reduction in RMSE compared to conventional RTS smoothing.
  • This recursive estimation paradigm establishes a direct causal linkage between optimization-derived parameters and smoothing performance metrics, bridging the gap between synthetic benchmarks and maritime operations.
  • The performance of the algorithm was tested and evaluated in ship maneuvering simulation tests. The results show that the proposed method can use existing observation data to learn model parameters and apply them for data smoothing.
Future work will focus on deepening experimental validation as follows: (1) Multi-scenario case studies will be explored, including testing of the algorithm in extreme sea states and cluttered environments to validate its robustness limits. (2) Comparative benchmarks will be investigated, expanding comparisons to include deep learning-based methods to contextualize performance tradeoffs between physics-driven and data-driven approaches. Finally, (3) the proposed algorithm will be further extended, including adaptation to 3D motion and 6-DOF modeling.

Author Contributions

Methodology, W.Y.; Validation, J.R.; Investigation, W.Y.; Writing—original draft, W.Y.; Visualization, J.R.; Funding acquisition, J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the National Natural Science Foundation of China (Grant Nos. 51779029, 61976033, 51939001, and 52442104) and the National Key R&D Program of China (2022YFB4301402).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could appear to have influenced the work reported in this paper.

References

  1. Recas, J.; Giron-Sierra, J.; Esteban, S.; de Andres-Toro, B.; De la Cruz, J.; Riola, J. Autonomous fast ship physical model with actuators for 6DOF motion smoothing experiments. IFAC Proc. Vol. 2004, 37, 185–190.
  2. Xu, C.; Xu, C.; Wu, C.; Liu, J.; Qu, D.; Xu, F. Accurate two-step filtering for AUV navigation in large deep-sea environment. Appl. Ocean Res. 2021, 115, 102821.
  3. Li, C.; Li, M.; Zhang, D.; Liu, H.; Chen, Y. Modified Two-Filter Smoothing Method for Complex Nonlinear Target Tracking. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5.
  4. Al-Omari, I.; Rahimnejad, A.; Gadsden, A.; Moussa, M.; Karimipour, H. Power System Dynamic State Estimation Using Smooth Variable Structure Filter. In Proceedings of the 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, ON, Canada, 11–14 November 2019; pp. 1–5.
  5. Durlik, I.; Miller, T.; Cembrowska-Lech, D.; Krzemińska, A.; Złoczowska, E.; Nowak, A. Navigating the Sea of Data: A Comprehensive Review on Data Analysis in Maritime IoT Applications. Appl. Sci. 2023, 13, 9742.
  6. Binggui, C.; Dongxiao, S.; Xiangqian, L. Accuracy Improvement of Multi-GNSS Kinematic PPP with EKF Smoother. J. Position. Navig. Timing 2021, 10, 83–89.
  7. Zhu, F.; Huang, Y.; Xue, C.; Mihaylova, L.; Chambers, J. A Sliding Window Variational Outlier-Robust Kalman Filter Based on Student's t-Noise Modeling. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 4835–4849.
  8. Zhang, T.; Zhang, X.; Shi, J.; Wei, S. HyperLi-Net: A hyper-light deep learning network for high-accurate and high-speed ship detection from synthetic aperture radar imagery. ISPRS J. Photogramm. Remote Sens. 2020, 167, 123–153.
  9. Li, X.; Wang, Y.; Khoshelham, K. Comparative analysis of robust extended Kalman filter and incremental smoothing for UWB/PDR fusion positioning in NLOS environments. Acta Geod. Geophys. 2019, 54, 157–179.
  10. Jianbin, X.; Qinruo, W.; Yijun, L.; Baoyu, Y.; Haoyi, W. A linear signal filtering smoothing algorithm for ship dynamic positioning. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 3718–3722.
  11. Feng, K.; Wang, J.; Wang, X.; Wang, G.; Wang, Q.; Han, J. Adaptive state estimation and filtering for dynamic positioning ships under time-varying environmental disturbances. Ocean Eng. 2024, 303, 117798.
  12. Ma, Y.; Yin, Z.; Wang, S.; Chen, Z. Ship heave measurement method based on sliding adaptive delay-free complementary band-pass filter. Ocean Eng. 2025, 316, 119813.
  13. Liu, Y.; An, S.; Wang, L.; He, Y.; Fan, Z. Parameter identification algorithm for ship manoeuvrability and wave peak model based multi-innovation stochastic gradient algorithm use data filtering technique. Digit. Signal Process. 2024, 148, 104445.
  14. Belinska, V.; Kluga, A.; Kluga, J. Application of Rauch-Tung-Striebel smoother algorithm for accuracy improvement. In Proceedings of the 2012 13th Biennial Baltic Electronics Conference, Tallinn, Estonia, 3–5 October 2012; pp. 157–160.
  15. Shamsfakhr, F.; Motroni, A.; Palopoli, L.; Buffi, A.; Nepa, P.; Fontanelli, D. Robot localisation using UHF-RFID tags: A Kalman smoother approach. Sensors 2021, 21, 717.
  16. Syamkumar, U.; Jayanand, B. Real-time implementation of sensorless indirect field-oriented control of three-phase induction motor using a Kalman smoothing-based observer. Int. Trans. Electr. Energy Syst. 2020, 30, e12242.
  17. Yoon, H.K.; Rhee, K.P. Identification of hydrodynamic coefficients in ship maneuvering equations of motion by Estimation-Before-Modeling technique. Ocean Eng. 2003, 30, 2379–2404.
  18. Duong, T.T.; Chiang, K.W.; Le, D.T. On-line smoothing and error modelling for integration of GNSS and visual odometry. Sensors 2019, 19, 5259.
  19. Chatzis, M.N.; Chatzi, E.N.; Triantafyllou, S.P. A discontinuous extended Kalman filter for non-smooth dynamic problems. Mech. Syst. Signal Process. 2017, 92, 13–29.
  20. Sarkka, S. Unscented Rauch–Tung–Striebel Smoother. IEEE Trans. Autom. Control 2008, 53, 845–849.
  21. Piché, R.; Särkkä, S.; Hartikainen, J. Recursive outlier-robust filtering and smoothing for nonlinear systems using the multivariate Student-t distribution. In Proceedings of the 2012 IEEE International Workshop on Machine Learning for Signal Processing, Santander, Spain, 23–26 September 2012; pp. 1–6.
  22. Wang, J.; Zhang, T.; Jin, B.; Zhu, Y.; Tong, J. Student's t-Based Robust Kalman Filter for a SINS/USBL Integration Navigation Strategy. IEEE Sens. J. 2020, 20, 5540–5553.
  23. Sutton, R.; Barto, A. Reinforcement Learning: An Introduction. IEEE Trans. Neural Netw. 1998, 9, 1054.
  24. Bishop, C. Pattern Recognition and Machine Learning. J. Electron. Imaging 2006, 16, 140–155.
  25. Tronarp, F.; Garcia-Fernandez, A.F.; Särkkä, S. Iterative filtering and smoothing in nonlinear and non-Gaussian systems using conditional moments. IEEE Signal Process. Lett. 2018, 25, 408–412.
  26. Särkkä, S.; Svensson, L. Bayesian Filtering and Smoothing; Cambridge University Press: Cambridge, UK, 2023; Volume 17.
  27. SIMMAN 2020 Workshop. Ship Data for KVLCC2. SIMMAN 2020 Open Data Portal, Daejeon, Korea, 2020. Available online: https://simman2020.kr/contents/KVLCC2.php (accessed on 10 September 2020).
  28. Ting, J.A.; D'Souza, A.; Schaal, S. Bayesian robot system identification with input and output noise. Neural Netw. 2011, 24, 99–108.
  29. Yue, W.; Ren, J. An online outlier-robust extended Kalman filter via EM-algorithm for ship maneuvering data. Measurement 2025, 250, 117104.
  30. Liu, S.; Zhang, X.; Xu, L.; Ding, F. Expectation–maximization algorithm for bilinear systems by using the Rauch–Tung–Striebel smoother. Automatica 2022, 142, 110365.
  31. Wang, Y.; Perera, L.; Batalden, B.M. Particle Filter Based Ship State and Parameter Estimation for Vessel Maneuvers. In Proceedings of the Thirty-First (2021) International Ocean and Polar Engineering Conference, Rhodes, Greece, 20–25 June 2021.
Figure 1. Coordinate system of a ship.
Figure 2. Our proposed algorithm workflow with equation-indexed steps.
Figure 3. KVLCC2 ship motion dynamics of maneuvering.
Figure 4. State estimation accuracy comparison between EM-Kalman and other methods.
Figure 5. EM-Kalman heading-angle estimation with raw rudder-angle overlay.
Figure 6. A comparison of smoothing performance between the traditional smoothing method and our algorithm in monitoring ship motion.
Table 1. Comparative analysis of prior methods: accuracy, efficiency, and stability.

| Aspect | Static [7] | Heuristic [22] | Our Method |
|---|---|---|---|
| Noise assumption | Gaussian | Student-t (α = 3) | Student-t (α ∈ [2, 5]) |
| Calibration frequency | Offline batch | Event-triggered | Online per step |
| RMSE | 4.12 | 3.76 | 2.87 |
| Computational overhead (ms/step) | 12.3 | 15.8 | 14.5 |
Table 2. Methodological advantages of our method.

| Method | Avg MSE | Compute Time (s) | Memory Usage (MB) | Tracking Stability |
|---|---|---|---|---|
| RTS [30] | 0.152 | 8.7 | 12.5 | Prone to divergence |
| ORAEKF [29] | 0.138 | 6.2 | 9.8 | Local oscillation |
| PS [31] | 0.135 | 23.4 | 35.2 | Parameter-sensitive |
| Our method | 0.091 | 4.1 | 6.3 | Stable |

Yue, W.; Ren, J. Improved Online Kalman Smoothing Method for Ship Maneuvering Motion Data Using Expectation-Maximization Algorithm. J. Mar. Sci. Eng. 2025, 13, 1018. https://doi.org/10.3390/jmse13061018