Article

ESO-Enhanced Actor–Critic Reinforcement Learning-Optimised Trajectory Tracking Control for 3-DOF Marine Vessels

1 School of Marine Engineering, Dalian Maritime University, Dalian 116026, China
2 Faculty of Computing, Hong Kong Polytechnic University, Hong Kong, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(5), 867; https://doi.org/10.3390/math14050867
Submission received: 2 February 2026 / Revised: 28 February 2026 / Accepted: 2 March 2026 / Published: 4 March 2026

Abstract

This paper develops an extended-state-observer (ESO)-enhanced actor–critic reinforcement learning (RL) scheme for the trajectory tracking control of 3-DOF marine vessels subject to uncertain hydrodynamics and environmental disturbances. A coordinate-consistent error construction is provided to obtain an exact strict-feedback second-order uncertain template. On this basis, a Hamilton–Jacobi–Bellman (HJB)-inspired optimised control structure is implemented: the critic approximates the optimal value-gradient and the actor generates the optimised control law. A key simplification is employed: rather than minimising the squared Bellman residual via complex gradients, we introduce an HJB-inspired actor–critic consistency regularisation through a weight-matching coupling. This yields computationally light online update laws and enables transparent Lyapunov-based stability analysis while not claiming exact HJB satisfaction or policy optimality. The ESO estimates lumped uncertainty and provides feedforward compensation, so the RL module learns only the observer residual. A composite Lyapunov analysis establishes the semi-global uniform ultimate boundedness of tracking errors and boundedness of all observer signals. A practical implementation with thruster allocation, explicit wind–wave–current disturbance shaping filters, and a theory-aligned ablation protocol are provided for reproducibility.

1. Introduction

The accurate trajectory tracking of marine surface vessels is a foundational capability for autonomous navigation, offshore operations, and safety-critical station-keeping tasks [1,2]. In practical deployments, tracking performance is challenged by a combination of (i) unmodelled and time-varying hydrodynamic effects, (ii) stochastic environmental loads induced by wind, waves, and currents, and (iii) strict actuation constraints arising from limited thruster authority and allocation feasibility. These factors can jointly induce large tracking errors, severe input saturation, and even qualitative trajectory distortion when the control law and the physical actuation model are not formulated in a coordinate-consistent manner. Thus, purely model-based designs may suffer from performance degradation when the nominal model is inaccurate, whereas aggressive robust designs may achieve stability at the expense of excessive control effort.
From a modelling perspective, the planar 3-degree-of-freedom (3-DOF) vessel dynamics constitute a coupled multi-input multi-output (MIMO) nonlinear system in which the kinematics are naturally expressed in the inertial frame while the generalised forces and moment are applied in the body-fixed frame. This inherent frame mismatch complicates both controller synthesis and theoretical analysis [3]. A common approach is to design tracking controllers via backstepping or strict-feedback transformations, which can provide systematic stability proofs when the system is cast into a suitable cascade form. However, when environmental disturbances and unmodelled dynamics are significant, purely model-based designs often become conservative to preserve robustness, or require increasingly complex adaptation laws [4]. Moreover, the presence of actuator bounds and allocation constraints can invalidate unconstrained stability arguments, since saturation introduces additional nonlinearities and may render a reference trajectory infeasible [5]. A prominent model-light route to robustness is active disturbance rejection control (ADRC), whose core component is the extended state observer (ESO) that estimates the total disturbance—including unmodelled dynamics and external perturbations—in real time [6,7,8]. Systematic tuning principles based on scaling and bandwidth parameterisation further popularised ADRC in engineering practice by explicitly linking observer and controller bandwidth to disturbance rejection and noise sensitivity [9,10]. In marine robotics, ESO-type disturbance rejection has been adopted for USV trajectory tracking to mitigate ocean environmental loads and parameter uncertainty, and recent studies have demonstrated its practicality in simulation and field experiments [11].
Nevertheless, ESO-based tracking designs are typically not optimal in an explicit sense; moreover, under hard input bounds, tuning purely for disturbance rejection may increase peak thrust demand and saturation rate, which can slow down recovery and distort the intended path.
Learning-based control methods, particularly reinforcement learning (RL), offer a complementary direction by improving performance through online policy optimisation [12,13]. Actor–critic structures are especially attractive for continuous-control problems, as they can approximate optimal policies and value functions without requiring an exact analytic solution of the Hamilton–Jacobi–Bellman (HJB) equation [14]. Nevertheless, vanilla actor–critic schemes may exhibit policy drift, sensitivity to stochastic disturbances, and degraded transient behaviour when deployed on systems with severe uncertainty and hard actuation limits. In marine applications, these issues are amplified by persistent environmental loads and the coupling between translational and rotational motions [15]. Therefore, a key open problem is how to integrate learning with robust disturbance rejection in a way that (a) preserves coordinate consistency between theory and implementation, (b) remains stable under wind–wave–current disturbances, and (c) is feasible under realistic actuator constraints. In particular, adaptive dynamic programming (ADP) and actor–critic structures approximate the value function and policy to circumvent the analytic solution of the HJB equation. For continuous-time nonlinear systems, actor–critic learning connects directly to optimal control through policy iteration, temporal-difference learning, and Hamiltonian residual minimisation, enabling online performance improvement without requiring an explicit solution of the HJB partial differential equation [16]. In the marine domain, deep RL has been explored for path-following and trajectory tracking under uncertainty [17], and very recent work has started to incorporate input saturation characteristics into RL-based USV optimal tracking formulations [18].
However, pure RL controllers can be sensitive to unmodelled dynamics and external disturbances, and stability guarantees often become delicate when exploration, approximation errors, and actuator constraints coexist. Recent studies in other networked and cyber-physical domains also indicate a growing trend toward hybrid frameworks that combine structured optimisation components with learning-based adaptation for dynamic multi-objective decision-making [19]. This motivates a principled integration of disturbance observers with RL so that learning takes place around a robust baseline and focuses on optimal refinement rather than disturbance cancellation.
To highlight the motivation, we briefly compare this work with representative related approaches. Model-based robust tracking controllers provide stability guarantees but may become conservative under strong uncertainty and persistent disturbances, especially when actuator constraints are active. ESO- or ADRC-based designs offer practical disturbance rejection by estimating lumped uncertainties, but they do not explicitly address learning-oriented performance refinement. In contrast, many deep RL (DRL) frameworks in other domains, such as multi-agent DRL and multi-DNN DRL for MEC or SD-WAN optimisation, are primarily designed for slot-based decision problems, where DRL serves as the main decision engine for resource allocation and does not aim to provide Lyapunov-style closed-loop stability guarantees for continuous-time nonlinear dynamics. The present manuscript targets safety-critical continuous-time marine motion control and adopts a hybrid decomposition philosophy in which the ESO-based controller provides the stabilising robust baseline and the actor–critic component is incorporated as a Lyapunov-guided consistency regularisation and secondary online refinement mechanism. This comparison clarifies the research gap and motivates the proposed ESO-enhanced actor–critic framework.
Motivated by these challenges, this paper develops a unified control framework that couples an ESO with an actor–critic RL module for the constrained trajectory tracking of 3-DOF vessels. The ESO is used to estimate and compensate lumped uncertainties, including wind–wave–current loads and unmodelled dynamics, thereby providing a robust baseline that stabilises the learning process. On top of this robustification, an actor–critic component refines the control policy toward improved performance such as reduced tracking error and control energy.
To strengthen the mathematical consistency of learning, we introduce an HJB-consistency mechanism based on a computable residual metric and a weight-matching coupling between the actor and critic updates. This coupling aligns the learning dynamics with the optimality structure and mitigates uncontrolled weight growth and oscillatory behavior. Importantly, the overall design is formulated in a coordinate-consistent strict-feedback tracking structure, explicitly mapping inertial-frame tracking errors to physically applied body-fixed forces and moment, and incorporating actuator saturation and optional thruster allocation constraints to ensure feasibility.
The proposed framework is validated on a standard 3-DOF vessel benchmark under stochastic wind–wave–current disturbances. Beyond reporting tracking trajectories, we adopt reproducible evaluation metrics that capture both control performance and learning consistency, such as integrated tracking error, control energy, saturation rate, HJB residual, and actor–critic weight mismatch norms. In addition, systematic ablation studies are provided to quantify the complementary roles of robust disturbance estimation and learning-based optimisation.
The main methodological contributions of this paper are summarised as follows:
  • We integrate ESO-based lumped-disturbance estimation with actor–critic policy improvement within a single closed-loop architecture suitable for 3-DOF vessel tracking. Compared with ESO-only and disturbance-sensitive RL-only designs, the proposed integration typically yields smaller tracking errors with lower control effort under environmental loads [20,21].
  • Compared with standard TD-error-driven actor–critic schemes commonly used in the RL literature [22,23], the proposed weight-matching coupling is introduced as a Lyapunov-guided consistency regularisation within a control-oriented framework, which helps to suppress parameter drift and improves robustness under disturbances and actuator constraints.
  • While most RL controllers operate as black boxes, this work guarantees semi-global uniform ultimate boundedness (SGUUB) with a tracking-error bound explicitly tied to the ESO residual. This provides a transparent mechanism for performance tuning, linking observer bandwidth directly to control precision, which offers a reliability that standard heuristic RL methods lack.
The remainder of the paper is organised as follows. Section 2 presents the vessel model, disturbance representation, and coordinate-consistent strict-feedback tracking error system. Section 3 develops the ESO design, the actor–critic learning law with HJB-consistency, and the overall constrained control synthesis. It provides the closed-loop stability analysis and establishes boundedness and ultimate tracking performance guarantees. Section 4 reports simulation results and ablation studies under wind–wave–current disturbances and actuator limits. Finally, Section 5 concludes the paper and outlines future research directions.

2. Vessel Model and Tracking Objective

Consider the 3-DOF vessel dynamics [24]
\[ \dot{\eta} = R(\psi)\,\nu, \]
\[ M(\nu)\dot{\nu} + C(\nu)\nu + D(\nu)\nu = \tau + d(t), \]
where $M(\nu) \succ 0$ is the inertia matrix, $C(\nu)$ is the Coriolis matrix, $D(\nu)$ is the damping matrix, and $d(t)$ collects wind, wave, current, and unmodelled effects.
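As an illustration of the model above, the following minimal Python sketch integrates the kinematics and a simplified dynamics step. The parameter values are illustrative stand-ins (loosely CyberShip II-like), not the paper's exact values, and the Coriolis term is omitted for brevity.

```python
import numpy as np

def rotation(psi: float) -> np.ndarray:
    """Planar rotation R(psi) mapping body-frame velocity nu to the inertial eta-rate."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def vessel_step(eta, nu, tau, d, M, D, dt=0.01):
    """One Euler step of eta_dot = R(psi) nu and M nu_dot + D nu = tau + d.
    The Coriolis term C(nu) nu is omitted in this simplified sketch."""
    eta_dot = rotation(eta[2]) @ nu
    nu_dot = np.linalg.solve(M, tau + d - D @ nu)
    return eta + dt * eta_dot, nu + dt * nu_dot

# Illustrative (not the paper's) constant parameter values.
M = np.diag([25.8, 33.8, 2.76])   # surge/sway mass (kg), yaw inertia (kg m^2)
D = np.diag([2.0, 7.0, 0.5])      # linear damping

eta, nu = np.zeros(3), np.zeros(3)
for _ in range(100):               # 1 s of pure surge thrust
    eta, nu = vessel_step(eta, nu, tau=np.array([1.0, 0.0, 0.0]),
                          d=np.zeros(3), M=M, D=D)
```

With zero yaw moment the heading stays at zero and the vessel advances along the inertial x-axis, consistent with the frame mapping in the kinematic equation.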
The control objective is stated as follows: given a smooth desired trajectory $\eta_d(t)$ with bounded derivatives, design a control input $\tau$ such that the tracking errors converge to a small neighbourhood of zero, while maintaining reasonable control effort and robustness against uncertainty and disturbances.
Define the earth-fixed tracking error
\[ \eta_1 \triangleq \eta - \eta_d \]
and the earth-fixed strict-feedback velocity error
\[ \eta_2 \triangleq R(\psi)\,\nu - \dot{\eta}_d. \]
Then the first strict-feedback equation holds exactly:
\[ \dot{\eta}_1 = \eta_2. \]
Differentiating (4) gives
\[ \dot{\eta}_2 = \dot{R}(\psi)\nu + R(\psi)\dot{\nu} - \ddot{\eta}_d = \dot{R}(\psi)\nu + R(\psi)M^{-1}(\nu)\big[\tau + d - C(\nu)\nu - D(\nu)\nu\big] - \ddot{\eta}_d. \]
Choose a constant nominal inertia matrix $M_0 \succ 0$ and define a nominal feedforward term $\tau_{ff}(t)$:
\[ \tau_{ff}(t) \triangleq C_0(\nu_d)\nu_d + D_0(\nu_d)\nu_d + M_0\dot{\nu}_d, \]
where $(C_0, D_0)$ are nominal models. Implement
\[ \tau = M_0 u + \tau_{ff}(t), \]
with design input $u \in \mathbb{R}^3$. Substituting (8) into (6) yields
\[ \dot{\eta}_2 = u + f(\chi, t), \]
where $\chi$ collects measurable signals and the lumped uncertainty is
\[ f(\chi,t) \triangleq \underbrace{\big[R(\psi)M^{-1}(\nu)M_0 - I\big]u}_{\text{input-channel mismatch}} + \underbrace{\dot{R}(\psi)\nu}_{\text{rotation-induced coupling}} + \underbrace{R(\psi)M^{-1}(\nu)\big[\tau_{ff} - C(\nu)\nu - D(\nu)\nu + d(t)\big] - \ddot{\eta}_d}_{\text{model/feedforward/environment mismatch}}. \]
Remark 1. 
The lumped uncertainty $f(\chi, t)$ in Equation (10) contains an input-coupled mismatch term that is linear in $u$. To clarify well-posedness, we decompose
\[ f(\chi,t) = \Delta_u(\chi)\,u + \bar{f}(\chi,t), \]
where
\[ \Delta_u(\chi) = R(\psi)M^{-1}(\nu)M_0 - I, \]
and f ¯ ( χ , t ) collects the remaining rotation-induced coupling, model mismatch, and environmental disturbance terms. In implementation, the control input u is computed explicitly from measured and estimated signals and then mapped to the physical body frame with actuator saturation. Hence, the closed loop is a well-defined nonlinear ODE with saturation rather than an algebraic loop. We assume that I + Δ u ( χ ) is uniformly nonsingular in the operating region, which is a mild small-gain-type condition ensuring the existence and uniqueness of solutions and preserving ESO convergence under the input-coupled uncertainty.
Thus, the second-order uncertain template is
\[ \dot{\eta}_1 = \eta_2, \qquad \dot{\eta}_2 = u + f(\chi, t). \]
Remark 2. 
The strict-feedback template in (13) is introduced for synthesis on actor–critic learning and observer-based compensation. In implementation, the virtual input u is mapped back to the physical body-frame force and moment through the kinematics and thruster allocation, so actuator constraints can be enforced without modifying the learning law.
Assumption 1. 
The lumped disturbance $d(t)$ is bounded with bounded derivative, i.e., $\|d(t)\| \le \bar{d}$ and $\|\dot{d}(t)\| \le \bar{d}_1$.
Assumption 2. 
The reference trajectory $\eta_d(t)$ is twice continuously differentiable and bounded, together with its derivatives, i.e., $\eta_d(t)$, $\dot{\eta}_d(t)$, and $\ddot{\eta}_d(t)$ are bounded.
Assumption 3. 
The regressor signal $S(\eta_e)$ satisfies a finite excitation condition over a sliding window: there exist $T_w > 0$ and $\gamma > 0$ such that
\[ \int_{t-T_w}^{t} S(\eta_e)\,S^{\top}(\eta_e)\,d\tau \;\ge\; \gamma I, \qquad \forall t \ge T_w. \]
This assumption is used to strengthen mismatch convergence statements such as $\hat{W}_a - \hat{W}_c \to 0$.
Define the infinite-horizon quadratic performance index
\[ J(\eta_e(0)) = \int_0^{\infty} \big[\eta_e^{\top}(\sigma)\eta_e(\sigma) + u^{\top}(\sigma)u(\sigma)\big]\,d\sigma, \]
where $\eta_e = [\eta_1^{\top}, \eta_2^{\top}]^{\top} \in \mathbb{R}^6$.
The quadratic form in (14) reflects a standard tracking-to-regulation conversion: the stacked error state η e is penalised to enforce accurate tracking, while u is penalised to avoid excessive virtual control action. This optimal-control viewpoint provides a principled way to couple learning with a stabilising baseline in the presence of persistent environmental disturbances.
Let $J^*(\eta_e)$ be the optimal value function. The Hamiltonian is
\[ H(\eta_e, u, J^*) \triangleq \eta_e^{\top}\eta_e + u^{\top}u + \Big(\frac{d J^*}{d \eta_1}\Big)^{\top}\dot{\eta}_1 + \Big(\frac{d J^*}{d \eta_2}\Big)^{\top}\dot{\eta}_2. \]
At the optimum, the HJB equation implies $H(\eta_e, u^*, J^*) = 0$ and the optimal input satisfies the stationarity condition with respect to $u$. Since $J^*$ is unknown for the vessel tracking problem with uncertainty, we approximate it via a critic and compute an approximately optimal policy via the actor.
Using (13), the stationarity condition $\partial H / \partial u = 0$ yields
\[ u^* = -\frac{1}{2}\frac{d J^*}{d \eta_2}. \]
Treat $f(\chi, t)$ in (13) as an extended state:
\[ z_1 = \eta_2, \qquad z_2 = f. \]
Then $\dot{z}_1 = u + z_2$ and $\dot{z}_2 = \dot{f}$. A second-order ESO is chosen as
\[ \dot{\hat{z}}_1 = u + \hat{z}_2 + \beta_1 (z_1 - \hat{z}_1), \]
\[ \dot{\hat{z}}_2 = \beta_2 (z_1 - \hat{z}_1), \]
with observer gains $\beta_1, \beta_2 > 0$.
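A minimal discrete-time sketch of the ESO above, assuming Euler integration and the bandwidth parameterisation $\beta_1 = 2\omega_o$, $\beta_2 = \omega_o^2$ used later in Algorithm 1. The constant-disturbance sanity check at the end is purely illustrative.

```python
import numpy as np

class ESO2:
    """Second-order extended state observer for z1_dot = u + z2, z2_dot = f_dot.
    Gains follow the bandwidth parameterisation beta1 = 2*wo, beta2 = wo**2."""
    def __init__(self, wo: float, n: int = 3, dt: float = 0.001):
        self.b1, self.b2, self.dt = 2.0 * wo, wo * wo, dt
        self.z1_hat = np.zeros(n)
        self.z2_hat = np.zeros(n)

    def update(self, z1: np.ndarray, u: np.ndarray) -> np.ndarray:
        e1 = z1 - self.z1_hat
        self.z1_hat += self.dt * (u + self.z2_hat + self.b1 * e1)
        self.z2_hat += self.dt * (self.b2 * e1)
        return self.z2_hat          # estimate f_hat of the lumped uncertainty

# Sanity check: recover a constant lumped disturbance f.
f = np.array([1.0, -2.0, 0.5])
obs, z1 = ESO2(wo=20.0), np.zeros(3)
for _ in range(20000):              # 20 s at dt = 1 ms
    u = np.zeros(3)
    z1 = z1 + 0.001 * (u + f)       # true plant: z1_dot = u + z2, z2 = f
    f_hat = obs.update(z1, u)
```

Since the observer error dynamics are Hurwitz for any $\omega_o > 0$, the estimate converges to the constant disturbance, illustrating why the RL module only needs to learn the observer residual.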
Use the compensated input
\[ u = \hat{u}^*(\eta_e) - \hat{f}, \qquad \hat{f} \triangleq \hat{z}_2. \]
Then one obtains
\[ \dot{\eta}_1 = \eta_2, \qquad \dot{\eta}_2 = \hat{u}^*(\eta_e) + \Delta f, \qquad \Delta f \triangleq f - \hat{f}. \]
Let $S(\eta_e) \in \mathbb{R}^m$ be a bounded basis-function vector. Define critic and actor weights
\[ \hat{W}_c(t) \in \mathbb{R}^{m \times 3}, \qquad \hat{W}_a(t) \in \mathbb{R}^{m \times 3}, \]
so that $\hat{W}_c^{\top} S(\eta_e) \in \mathbb{R}^3$ and $\hat{W}_a^{\top} S(\eta_e) \in \mathbb{R}^3$.
Choose the critic approximation of the value-gradient
\[ \frac{d \hat{J}^*}{d \eta_2} = 2\Gamma_1 \eta_1 + 2\Gamma_2 \eta_2 + \hat{W}_c^{\top} S(\eta_e), \]
where $\Gamma_1, \Gamma_2 \in \mathbb{R}^{3 \times 3}$ are diagonal positive-definite matrices.
Implement the optimised nominal input
\[ \hat{u}^*(\eta_e) = -\Gamma_1 \eta_1 - \Gamma_2 \eta_2 - \tfrac{1}{2} \hat{W}_a^{\top} S(\eta_e). \]
Use the online updates
\[ \dot{\hat{W}}_c = -k_c\, S(\eta_e) S^{\top}(\eta_e)\, \hat{W}_c, \]
\[ \dot{\hat{W}}_a = -S(\eta_e) S^{\top}(\eta_e) \big[ k_a (\hat{W}_a - \hat{W}_c) + k_c \hat{W}_c \big], \]
with design parameters $k_a > 0$ and $k_c > 0$.
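The coupled updates above can be checked numerically. The sketch below uses a random stand-in regressor in place of the RBF basis $S(\eta_e)$ and illustrative gain values; under this persistent excitation the actor–critic mismatch decays, as the Lyapunov analysis in Section 3 predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
m, ka, kc, dt = 8, 5.0, 0.1, 0.001       # illustrative dimensions and gains
Wc = rng.normal(size=(m, 3))             # critic weights
Wa = rng.normal(size=(m, 3))             # actor weights

mismatch0 = np.linalg.norm(Wa - Wc)
for _ in range(5000):                    # 5 s of simulated adaptation
    S = rng.normal(size=(m, 1))          # random stand-in for S(eta_e)
    SST = S @ S.T
    Wc_dot = -kc * (SST @ Wc)            # critic update (23)
    Wa_dot = -SST @ (ka * (Wa - Wc) + kc * Wc)   # actor update (24)
    Wc += dt * Wc_dot
    Wa += dt * Wa_dot
mismatch = np.linalg.norm(Wa - Wc)       # should be far below mismatch0
```

Subtracting the two update laws gives mismatch dynamics driven only by the weight-matching term, which is why the mismatch norm contracts whenever the regressor is exciting.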
Remark 3. 
The update laws in Equations (23) and (24) are designed as a Lyapunov-guided actor–critic adaptation mechanism for the proposed ESO-based robust-control framework, rather than a standard TD Bellman-residual-gradient deep reinforcement learning update. This control-oriented design is adopted to maintain closed-loop boundedness and stable parameter adaptation under disturbances and actuator constraints. In this context, the actor–critic coupling term is used as a consistency regularisation term, instead of being interpreted as a standalone Bellman-optimal learning rule.
Using the HJB equation and its approximation, we define an optimality-related residual as
\[ \varepsilon(t) \triangleq H(\eta_e, u, \hat{J}^*). \]
In the present control-oriented design, this residual is introduced as a diagnostic indicator to interpret the subsequent stationarity and weight-matching construction under the adopted approximation. Since Equations (23) and (24) are designed as Lyapunov-guided consistency regularisation rather than TD-error-driven learning, we do not claim that $\varepsilon(t)$ is explicitly minimised or driven to zero by the update law.
Impose
\[ \frac{\partial H(\eta_e, u, \hat{J}^*)}{\partial \hat{W}_a} = \tfrac{1}{2}\, S(\eta_e) S^{\top}(\eta_e)\big(\hat{W}_a - \hat{W}_c\big) = 0. \]
Remark 4. 
The stationarity condition $\partial H / \partial \hat{W}_a = 0$ is imposed as a tractable surrogate consistency condition for the parameterised policy with respect to the approximate Hamiltonian constructed using $\hat{J}^*$. It provides a convenient direction to promote actor–critic parameter consistency and facilitate Lyapunov analysis. It should be emphasised that this stationarity condition for the approximate Hamiltonian does not imply the pointwise satisfaction of the true HJB equation $H(\eta_e, u^*, J^*) = 0$, nor does it guarantee global optimality. In the present design, the coupled actor–critic updates in Equations (23) and (24) are adopted for Lyapunov-compatible online adaptation within the ESO-based robust control framework.

3. Stability Analysis

Assumption 4. 
The basis functions $S(\eta_e)$ used in the actor–critic parameterisation are generated by Gaussian RBFs with fixed centres and widths; hence, $S(\eta_e)$ is uniformly bounded in the operating region and there exists a constant $\bar{S}$ such that $\|S(\eta_e)\| \le \bar{S}$ for all $\eta_e$ in this region. Moreover, the actor and critic weights are assumed to evolve in compact sets and remain bounded; that is, there exist constants $\bar{W}_a$ and $\bar{W}_c$ such that $\|\hat{W}_a(t)\| \le \bar{W}_a$ and $\|\hat{W}_c(t)\| \le \bar{W}_c$ for all $t$. This boundedness is consistent with the coupled update structure and can also be enforced by standard projection-based constrained adaptation when needed.
This section establishes the boundedness of the overall ESO–actor–critic closed loop. The analysis proceeds in three steps: (i) show that the ESO yields a uniformly ultimately bounded (UUB) estimation error for the lumped uncertainty, (ii) show that the critic and actor weight dynamics remain bounded under the chosen update laws, and (iii) combine these properties into a composite Lyapunov argument that produces an explicit ultimate tracking-error bound. Denote the observer errors $e_1 = z_1 - \hat{z}_1$ and $e_2 = z_2 - \hat{z}_2 = \Delta f$. Then,
\[ \dot{e}_1 = e_2 - \beta_1 e_1, \]
\[ \dot{e}_2 = \dot{f} - \beta_2 e_1. \]
Lemma 1. 
Under Assumptions 1, 2, and 4 and bounded closed-loop signals, there exists a constant $\bar{f}_1$ such that $\|\dot{f}(\chi, t)\| \le \bar{f}_1$ for all $t$.
This follows because f ( χ , t ) depends on bounded kinematic and dynamic states, bounded disturbances, and bounded control input, and its time derivative depends on bounded derivatives of these signals. In particular, the control input is bounded due to actuator saturation and the basis functions have bounded values in the operating region; hence, all terms contributing to f ˙ remain bounded, which yields the stated bound.
Consider the Lyapunov function candidate
\[ V_{eso} = \tfrac{1}{2}\|e_1\|^2 + \frac{1}{2\beta_2}\|e_2\|^2. \]
Differentiating gives
\[ \dot{V}_{eso} = -\beta_1\|e_1\|^2 + \frac{1}{\beta_2} e_2^{\top}\dot{f} \le -\beta_1\|e_1\|^2 + \frac{1}{\beta_2}\|e_2\|\bar{f}_1 \le -\beta_1\|e_1\|^2 + \frac{\epsilon_f}{2\beta_2}\|e_2\|^2 + \frac{1}{2\epsilon_f\beta_2}\bar{f}_1^2, \]
for any $\epsilon_f > 0$. Moreover, the observer error dynamics above form a Hurwitz linear system driven by the bounded input $\dot{f}$, so $(e_1, e_2)$ are UUB. In particular, there exist finite constants $\bar{\Delta}_f$ and $T_0$ such that $\|\Delta f(t)\| = \|e_2(t)\| \le \bar{\Delta}_f$ for all $t \ge T_0$.
Next, consider the parameter estimation dynamics induced by the coupled actor–critic updates. By standard arguments for gradient-type updates with a shared regressor $S(\eta_e)$, the mismatch $\tilde{W} \triangleq \hat{W}_a - \hat{W}_c$ satisfies $\dot{\tilde{W}} = -k_a S S^{\top}\tilde{W}$ in the idealised approximation setting, which immediately implies a non-increasing quadratic storage function. Define
\[ P(t) = \operatorname{Tr}(\tilde{W}^{\top}\tilde{W}) = \|\tilde{W}\|_F^2. \]
Specifically, we have
\[ \dot{P} = -2 k_a \|S^{\top}\tilde{W}\|_F^2 \le 0. \]
Also,
\[ \frac{d}{dt}\operatorname{Tr}(\hat{W}_c^{\top}\hat{W}_c) = -2 k_c \operatorname{Tr}(\hat{W}_c^{\top} S S^{\top} \hat{W}_c) \le 0, \]
so $\hat{W}_c$ is bounded and non-increasing in norm. Therefore, $\hat{W}_a = \tilde{W} + \hat{W}_c$ is bounded:
\[ \|\hat{W}_a(t)\| \le \bar{W}_a < \infty, \qquad \forall t \ge 0. \]
Lemma 2. 
Define $\tilde{W} \triangleq \hat{W}_a - \hat{W}_c$ and $P(t) = \operatorname{Tr}(\tilde{W}^{\top}\tilde{W}) = \|\tilde{W}\|_F^2$. Under the coupled updates (23) and (24), the mismatch dynamics satisfy $\dot{\tilde{W}} = -k_a S(\eta_e) S^{\top}(\eta_e)\tilde{W}$, and hence $\dot{P}(t)$ satisfies (32). Therefore, $P(t)$ is non-increasing and $\tilde{W}(t)$ is bounded for all $t \ge 0$.
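For completeness, the mismatch dynamics in Lemma 2 follow directly by subtracting the critic update (23) from the actor update (24):

```latex
\dot{\tilde{W}} = \dot{\hat{W}}_a - \dot{\hat{W}}_c
= -S S^{\top}\big[k_a(\hat{W}_a - \hat{W}_c) + k_c \hat{W}_c\big]
  + k_c\, S S^{\top} \hat{W}_c
= -k_a\, S S^{\top} \tilde{W},
```

so that $\dot{P} = 2\operatorname{Tr}(\tilde{W}^{\top}\dot{\tilde{W}}) = -2k_a\|S^{\top}\tilde{W}\|_F^2 \le 0$, which is exactly the dissipation used in the stability analysis.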
Remark 5. 
Lemma 2 shows that the coupled updates (23) and (24) enforce a monotonic decrease in the actor–critic mismatch energy $P(t)$. This promotes parameter consistency under the adopted approximation and serves as a Lyapunov-guided consistency regularisation. However, $\hat{W}_a \to \hat{W}_c$ alone does not imply pointwise satisfaction of the HJB equation $H(\eta_e, u^*, J^*) = 0$, nor exact policy optimality. Hence, the stationarity and weight-matching construction is interpreted as an HJB-inspired surrogate rather than a proof of exact HJB optimality.
If the finite excitation condition in Assumption 3 holds, then Proposition 1 further implies that $\|\hat{W}_a - \hat{W}_c\|_F$ converges to zero; otherwise, only boundedness and mismatch-energy dissipation are guaranteed.
Proposition 1. 
Assume that there exist $T_w > 0$ and $\gamma > 0$ such that
\[ \int_{t-T_w}^{t} S(\eta_e(\tau))\, S^{\top}(\eta_e(\tau))\, d\tau \;\ge\; \gamma I, \qquad \forall t \ge T_w. \]
Then $\lim_{t \to \infty} \|\hat{W}_a(t) - \hat{W}_c(t)\|_F = 0$, and hence $\hat{W}_a(t) \to \hat{W}_c(t)$.
Choose a scalar $\alpha \in (0,1)$ and define
\[ V_e \triangleq \tfrac{1}{2}\|\eta_1\|^2 + \alpha\,\eta_1^{\top}\eta_2 + \tfrac{1}{2}\|\eta_2\|^2. \]
Using $|\eta_1^{\top}\eta_2| \le \tfrac{1}{2}(\|\eta_1\|^2 + \|\eta_2\|^2)$, we obtain the global quadratic bounds
\[ \frac{1-\alpha}{2}\big(\|\eta_1\|^2 + \|\eta_2\|^2\big) \;\le\; V_e \;\le\; \frac{1+\alpha}{2}\big(\|\eta_1\|^2 + \|\eta_2\|^2\big). \]
Now define the full Lyapunov candidate
\[ V \triangleq V_e + \frac{1}{2 k_a} P + V_{eso}. \]
Remark 6. 
The coupling parameter $\alpha \in (0,1)$ is introduced to shape the quadratic bounds of $V_e$ and to enlarge the feasibility margin when the ESO and learning transients coexist. In practice, a smaller $\alpha$ yields a more conservative but more robust bound, while a larger $\alpha$ can improve the transient response at the expense of a reduced robustness margin.
The candidate $V$ is positive definite and radially unbounded in $(\eta_1, \eta_2, \tilde{W}, e_1, e_2)$ because $V_e$ is quadratically bounded by (35).
Differentiating (34) yields
\[
\begin{aligned}
\dot{V}_e &= \eta_1^{\top}\dot{\eta}_1 + \alpha\big(\dot{\eta}_1^{\top}\eta_2 + \eta_1^{\top}\dot{\eta}_2\big) + \eta_2^{\top}\dot{\eta}_2 \\
&= \eta_1^{\top}\eta_2 + \alpha\|\eta_2\|^2 + (\alpha\eta_1 + \eta_2)^{\top}\big[-\Gamma_1\eta_1 - \Gamma_2\eta_2 - \tfrac{1}{2}\hat{W}_a^{\top}S + \Delta f\big] \\
&= -\alpha\,\eta_1^{\top}\Gamma_1\eta_1 - \eta_2^{\top}\Gamma_2\eta_2 - \eta_2^{\top}\Gamma_1\eta_1 - \alpha\,\eta_1^{\top}\Gamma_2\eta_2 + \eta_1^{\top}\eta_2 + \alpha\|\eta_2\|^2 \\
&\quad - \tfrac{1}{2}(\alpha\eta_1 + \eta_2)^{\top}\hat{W}_a^{\top}S + (\alpha\eta_1 + \eta_2)^{\top}\Delta f.
\end{aligned}
\]
Use the eigenvalue bounds
\[ \eta_1^{\top}\Gamma_1\eta_1 \ge \lambda_{\min}(\Gamma_1)\|\eta_1\|^2, \qquad \eta_2^{\top}\Gamma_2\eta_2 \ge \lambda_{\min}(\Gamma_2)\|\eta_2\|^2. \]
For the cross terms, apply Young's inequality with tunable scalars $\varepsilon_{12}, \varepsilon_{21} > 0$, together with $|\eta_1^{\top}\eta_2| \le \tfrac{1}{2}(\|\eta_1\|^2 + \|\eta_2\|^2)$:
\[ |\eta_2^{\top}\Gamma_1\eta_1| \le \|\Gamma_1\|\,\|\eta_2\|\,\|\eta_1\| \le \frac{\varepsilon_{21}}{2}\|\eta_2\|^2 + \frac{\|\Gamma_1\|^2}{2\varepsilon_{21}}\|\eta_1\|^2, \]
\[ |\alpha\,\eta_1^{\top}\Gamma_2\eta_2| \le \alpha\|\Gamma_2\|\,\|\eta_1\|\,\|\eta_2\| \le \frac{\varepsilon_{12}}{2}\|\eta_2\|^2 + \frac{\alpha^2\|\Gamma_2\|^2}{2\varepsilon_{12}}\|\eta_1\|^2. \]
Substituting (38) and (39) into (37) yields
\[ \dot{V}_e \le -\Big[\alpha\lambda_{\min}(\Gamma_1) - \tfrac{1}{2} - \frac{\|\Gamma_1\|^2}{2\varepsilon_{21}} - \frac{\alpha^2\|\Gamma_2\|^2}{2\varepsilon_{12}}\Big]\|\eta_1\|^2 - \Big[\lambda_{\min}(\Gamma_2) - \alpha - \tfrac{1}{2} - \frac{\varepsilon_{21}}{2} - \frac{\varepsilon_{12}}{2}\Big]\|\eta_2\|^2 - \tfrac{1}{2}(\alpha\eta_1 + \eta_2)^{\top}\hat{W}_a^{\top}S + (\alpha\eta_1 + \eta_2)^{\top}\Delta f. \]
Define
\[ c_1 \triangleq \alpha\lambda_{\min}(\Gamma_1) - \tfrac{1}{2} - \frac{\|\Gamma_1\|^2}{2\varepsilon_{21}} - \frac{\alpha^2\|\Gamma_2\|^2}{2\varepsilon_{12}}, \qquad c_2 \triangleq \lambda_{\min}(\Gamma_2) - \alpha - \tfrac{1}{2} - \frac{\varepsilon_{21}}{2} - \frac{\varepsilon_{12}}{2}, \]
where the $\tfrac{1}{2}$ terms absorb the cross term $\eta_1^{\top}\eta_2$ via Young's inequality. Choose $\Gamma_1, \Gamma_2$ and $\varepsilon_{12}, \varepsilon_{21}$ such that $c_1 > 0$ and $c_2 > 0$.
Using Assumption 4 and (33), it follows that, for any $\epsilon_a > 0$,
\[ -\tfrac{1}{2}(\alpha\eta_1 + \eta_2)^{\top}\hat{W}_a^{\top}S \le \tfrac{1}{2}\|\alpha\eta_1 + \eta_2\|\,\|\hat{W}_a\|\,\|S\| \le \tfrac{1}{2}\big(\alpha\|\eta_1\| + \|\eta_2\|\big)\bar{W}_a\bar{S} \le \frac{\epsilon_a}{2}\|\eta_2\|^2 + \frac{\epsilon_a}{2}\alpha^2\|\eta_1\|^2 + \frac{\bar{W}_a^2\bar{S}^2}{2\epsilon_a}. \]
For any $\epsilon_d > 0$, one obtains
\[ (\alpha\eta_1 + \eta_2)^{\top}\Delta f \le \big(\alpha\|\eta_1\| + \|\eta_2\|\big)\|\Delta f\| \le \frac{\epsilon_d}{2}\|\eta_2\|^2 + \frac{\epsilon_d}{2}\alpha^2\|\eta_1\|^2 + \frac{1}{\epsilon_d}\|\Delta f\|^2. \]
Substituting (42) and (43) into (40) gives
\[ \dot{V}_e \le -\Big(c_1 - \frac{\epsilon_a}{2}\alpha^2 - \frac{\epsilon_d}{2}\alpha^2\Big)\|\eta_1\|^2 - \Big(c_2 - \frac{\epsilon_a}{2} - \frac{\epsilon_d}{2}\Big)\|\eta_2\|^2 + \underbrace{\frac{\bar{W}_a^2\bar{S}^2}{2\epsilon_a}}_{\rho_a} + \underbrace{\frac{1}{\epsilon_d}\|\Delta f\|^2}_{\rho_d(t)}. \]
Define the tightened constants
\[ \tilde{c}_1 \triangleq c_1 - \frac{\alpha^2}{2}(\epsilon_a + \epsilon_d), \qquad \tilde{c}_2 \triangleq c_2 - \frac{1}{2}(\epsilon_a + \epsilon_d), \]
and choose $\epsilon_a$ and $\epsilon_d$ sufficiently small so that $\tilde{c}_1 > 0$ and $\tilde{c}_2 > 0$.
From (32), $\frac{d}{dt}\big(\frac{1}{2 k_a}P\big) \le 0$. Combining (44) with (30) yields
\[ \dot{V} \le -\tilde{c}_1\|\eta_1\|^2 - \tilde{c}_2\|\eta_2\|^2 - \beta_1\|e_1\|^2 + \rho_a + \Big(\frac{1}{\epsilon_d} + \frac{\epsilon_f}{2\beta_2}\Big)\|\Delta f\|^2 + \frac{1}{2\epsilon_f\beta_2}\bar{f}_1^2. \]
For $t \ge T_0$, $\|\Delta f(t)\| \le \bar{\Delta}_f$ from the ESO UUB property. Therefore, for all $t \ge T_0$,
\[ \dot{V} \le -\lambda\big(\|\eta_1\|^2 + \|\eta_2\|^2 + \|e_1\|^2\big) + \rho, \]
where one can choose
\[ \lambda \triangleq \min\{\tilde{c}_1, \tilde{c}_2, \beta_1\}, \]
and an explicit ultimate-bound constant
\[ \rho \triangleq \rho_a + \Big(\frac{1}{\epsilon_d} + \frac{\epsilon_f}{2\beta_2}\Big)\bar{\Delta}_f^2 + \frac{1}{2\epsilon_f\beta_2}\bar{f}_1^2 = \frac{\bar{W}_a^2\bar{S}^2}{2\epsilon_a} + \Big(\frac{1}{\epsilon_d} + \frac{\epsilon_f}{2\beta_2}\Big)\bar{\Delta}_f^2 + \frac{1}{2\epsilon_f\beta_2}\bar{f}_1^2. \]
Theorem 1. 
Assume that Assumptions 1, 2, and 4 hold and that the controller and observer gains are selected such that $\tilde{c}_1 > 0$ and $\tilde{c}_2 > 0$ in (45). Let $\lambda$ and $\rho$ be defined in (48) and (49), and let $\mu \triangleq \frac{2\lambda}{1+\alpha}$. Then, all closed-loop signals $(\eta_1, \eta_2, \hat{W}_a, \hat{W}_c, e_1, e_2)$ remain bounded. Moreover, for all $t \ge T_0$, the composite Lyapunov function satisfies the comparison solution (51), and the tracking errors are semi-globally uniformly ultimately bounded with the explicit ultimate bound (52).
Proof. 
For $t \ge T_0$, the ESO property yields $\|\Delta f(t)\| \le \bar{\Delta}_f$, so the dissipation inequality (47) holds with the constants $\lambda$ and $\rho$ in (48) and (49). The remaining steps follow by standard comparison arguments, as detailed below.
Using (35), we have
\[ V \ge V_e \ge \frac{1-\alpha}{2}\big(\|\eta_1\|^2 + \|\eta_2\|^2\big). \]
Hence, (47) implies the standard comparison inequality
\[ \dot{V} \le -\frac{2\lambda}{1+\alpha}\,V + \rho = -\mu V + \rho, \qquad t \ge T_0, \]
because $V \le \frac{1+\alpha}{2}(\|\eta_1\|^2 + \|\eta_2\|^2) + \frac{1}{2 k_a}P + V_{eso}$ and the negative term in (47) dominates the state part; $\mu$ can be conservatively selected as $\mu = \frac{2\lambda}{1+\alpha}$.
Therefore,
\[ V(t) \le \Big[V(T_0) - \frac{\rho}{\mu}\Big]e^{-\mu(t - T_0)} + \frac{\rho}{\mu}, \qquad t \ge T_0, \]
and
\[ \limsup_{t\to\infty}\big(\|\eta_1(t)\|^2 + \|\eta_2(t)\|^2\big) \le \frac{2}{1-\alpha}\limsup_{t\to\infty} V(t) \le \frac{2}{1-\alpha}\cdot\frac{\rho}{\mu} = \frac{(1+\alpha)}{(1-\alpha)}\cdot\frac{\rho}{\lambda}. \]
Hence, the closed-loop tracking errors are SGUUB, and all signals $(\eta_1, \eta_2, \hat{W}_a, \hat{W}_c, e_1, e_2)$ remain bounded.    □
Remark 7. 
To avoid confusion with Lyapunov-drift-based single-slot optimisation methods, we emphasise that the Lyapunov function used in this paper is introduced only for the stability analysis of the closed-loop ESO actor–critic system. The long-term control objective is still defined by the infinite-horizon cost in Equation (14), and the actor–critic module performs HJB-inspired online policy refinement for this objective. The Lyapunov function is then used to establish boundedness and the SGUUB of the tracking, observer, and learning dynamics under disturbances and actuator constraints.
Remark 8. 
The proposed ESO learning framework is fundamentally different from generic deep reinforcement learning methods for MEC optimisation [25]. Those methods use deep reinforcement learning as the main decision engine for slot-based resource allocation, whereas our method targets continuous-time nonlinear motion control, where the ESO provides the stabilising robust baseline and learning serves as a secondary performance-refinement mechanism.
The implementation algorithm for ESO-enhanced actor–critic RL-optimised 3-DOF trajectory tracking is summarised below (Algorithm 1).
Algorithm 1. ESO-enhanced actor–critic RL-optimised trajectory tracking (3-DOF).
  1: Initialise: choose $\Gamma_1, \Gamma_2, k_a, k_c$; choose ESO bandwidth $\omega_o$ and set $\beta_1 = 2\omega_o$, $\beta_2 = \omega_o^2$; initialise $\hat{W}_a(0)$, $\hat{W}_c(0)$, $\hat{z}_1(0)$, $\hat{z}_2(0)$.
  2: for each control step $t$ do
  3:   Measure $(\eta, \nu)$ and compute references $(\eta_d, \dot{\eta}_d, \ddot{\eta}_d)$ and $(\nu_d, \dot{\nu}_d)$.
  4:   Compute errors: $\eta_1 = \eta - \eta_d$, $\eta_2 = R(\psi)\nu - \dot{\eta}_d$.
  5:   Set $z_1 = \eta_2$; integrate the ESO (17) and (18) to obtain $\hat{f} = \hat{z}_2$.
  6:   Compute the basis vector $S(\eta_e)$.
  7:   Actor output: $\hat{u}^* = -\Gamma_1\eta_1 - \Gamma_2\eta_2 - \tfrac{1}{2}\hat{W}_a^{\top}S(\eta_e)$.
  8:   Apply the compensated equivalent input: $u = \hat{u}^* - \hat{f}$.
  9:   Update critic: integrate $\dot{\hat{W}}_c = -k_c S(\eta_e)S^{\top}(\eta_e)\hat{W}_c$.
 10:   Update actor: integrate $\dot{\hat{W}}_a = -S(\eta_e)S^{\top}(\eta_e)\big[k_a(\hat{W}_a - \hat{W}_c) + k_c\hat{W}_c\big]$.
 11:   Compute $\tau = M_0 u + \tau_{ff}(t)$ and allocate thrusters.
 12: end for
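Algorithm 1 can be sketched as a single discrete-time control step. The random-feature basis, gain values, and Euler integration below are illustrative assumptions for the sketch, not the paper's exact RBF grid or tuning.

```python
import numpy as np

m = 16
rng = np.random.default_rng(1)
W_basis = rng.normal(scale=0.5, size=(m, 6))   # fixed random features (stand-in for an RBF grid)

def control_step(eta, nu, refs, state, gains, dt=0.001):
    """One pass of Algorithm 1: errors -> ESO -> actor -> weight updates.
    state = (z1_hat, z2_hat, Wa, Wc); refs = (eta_d, eta_d_dot); gains = (G1, G2, ka, kc, b1, b2)."""
    G1, G2, ka, kc, b1, b2 = gains
    eta_d, eta_d_dot = refs
    c, s = np.cos(eta[2]), np.sin(eta[2])
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    eta1 = eta - eta_d                                   # step 4: tracking errors
    eta2 = R @ nu - eta_d_dot
    z1 = eta2

    z1_hat, z2_hat, Wa, Wc = state
    e1 = z1 - z1_hat                                     # step 5: ESO correction

    S = np.tanh(W_basis @ np.concatenate([eta1, eta2]))  # step 6: basis vector
    u_star = -G1 @ eta1 - G2 @ eta2 - 0.5 * (Wa.T @ S)   # step 7: actor output
    u = u_star - z2_hat                                  # step 8: ESO compensation

    z1_hat = z1_hat + dt * (u + z2_hat + b1 * e1)        # ESO integration (17)-(18)
    z2_hat = z2_hat + dt * (b2 * e1)
    SST = np.outer(S, S)
    Wc = Wc + dt * (-kc * (SST @ Wc))                    # step 9: critic update
    Wa = Wa + dt * (-SST @ (ka * (Wa - Wc) + kc * Wc))   # step 10: actor update
    return u, (z1_hat, z2_hat, Wa, Wc)

gains = (np.eye(3), 2.0 * np.eye(3), 5.0, 0.1, 40.0, 400.0)   # G1, G2, ka, kc, b1=2*wo, b2=wo^2
state = (np.zeros(3), np.zeros(3), np.zeros((m, 3)), np.zeros((m, 3)))
u, state = control_step(np.array([0.5, 0.0, 0.0]), np.zeros(3),
                        (np.zeros(3), np.zeros(3)), state, gains)
```

With zero weights and estimates, the first step reduces to the nominal feedback $u = -\Gamma_1\eta_1$, which matches the nominal-feedback baseline used in the ablation study; the mapping $\tau = M_0 u + \tau_{ff}(t)$ and thruster allocation (step 11) are left to the plant side.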

4. Simulation Studies

This section provides a reproducible simulation protocol and a theory-aligned ablation study. The uncertainty and the environmental loads are generated continuously by wind–wave–current shaping filters, which is consistent with Assumption 1. To avoid a trivial initial condition, the vessel starts from an offset position that does not lie on the desired circle, and both the start and terminal points are explicitly marked in the trajectory plots. The vessel model parameters are taken from the CyberShip II benchmark, and the simulations were executed in MATLAB R2025a.
A circular reference trajectory is designed as
\[ x_d(t) = R\cos(\omega t), \qquad y_d(t) = R\sin(\omega t), \qquad \psi_d(t) = \operatorname{atan2}\big(\dot{y}_d(t), \dot{x}_d(t)\big), \]
with $R = 1.5$ m and $\omega = 2\pi/70$ rad/s.
The environmental disturbance $d(t) = [d_x(t), d_y(t), d_\psi(t)]^{\top}$ is generated by wind–wave–current filters:
\[ d(t) = d_c(t) + d_w(t) + d_{wv}(t). \]
The steady current-induced bias is
\[ d_c(t) = \big[K_c V_c^2\cos\chi_c,\;\; K_c V_c^2\sin\chi_c,\;\; K_{c\psi} V_c^2\big]^{\top}. \]
The low-frequency wind-gust filter is chosen as
\[ \dot{\xi}_w(t) = -\omega_w\,\xi_w(t) + \sigma_w\omega_w\, w(t), \qquad w(t) \sim \mathcal{N}(0, I_3), \]
\[ d_w(t) = \operatorname{diag}(\kappa_{wx}, \kappa_{wy}, \kappa_{w\psi})\,\xi_w(t). \]
The wave-frequency load is modelled by a second-order filter. For each $l \in \{x, y, \psi\}$,
\[ \ddot{\xi}_l(t) + 2\zeta_l\omega_l\dot{\xi}_l(t) + \omega_l^2\xi_l(t) = \sigma_l\omega_l^2\, w_l(t), \qquad w_l(t) \sim \mathcal{N}(0, 1), \]
\[ d_{wv}(t) = [\xi_x(t), \xi_y(t), \xi_\psi(t)]^{\top}. \]
Disturbance filter parameters are shown in Table 1.
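A simple way to realise these shaping filters is Euler–Maruyama integration of the two stochastic filters plus the constant bias. The sketch below is illustrative only: the current-to-load gains K_c and K_cψ are placeholder values (Table 1 specifies them only as "chosen to match bias"), and the discretisation scheme is an assumption.

```python
import numpy as np

def simulate_disturbance(T=70.0, dt=0.01, seed=0):
    """Euler-Maruyama sketch of the wind-wave-current shaping filters (Table 1)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # Constant current-induced bias d_c; K_c, K_cpsi are placeholders
    Vc, chi, Kc, Kcpsi = 0.2, 0.0, 1.0, 0.5
    dc = np.array([Kc * Vc**2 * np.cos(chi),
                   Kc * Vc**2 * np.sin(chi),
                   Kcpsi * Vc**2])
    # Wind gust filter (first order) and wave filters (second order)
    ww, sw = 0.2, 1.0
    kap = np.array([50.0, 80.0, 30.0])            # wind load scaling
    wl = np.array([1.0, 1.0, 0.8])                # wave dominant frequencies
    zl = np.array([0.2, 0.2, 0.25])               # wave damping ratios
    sl = np.array([30.0, 40.0, 20.0])             # wave intensities
    xi_w = np.zeros(3)
    xi, xi_d = np.zeros(3), np.zeros(3)
    d = np.zeros((n, 3))
    for k in range(n):
        # xi_w' = -ww xi_w + sw ww w(t), white noise scaled by sqrt(dt)
        xi_w += dt * (-ww * xi_w) + sw * ww * np.sqrt(dt) * rng.standard_normal(3)
        # xi'' + 2 zl wl xi' + wl^2 xi = sl wl^2 w_l(t)
        xi_d += dt * (-2.0 * zl * wl * xi_d - wl**2 * xi) \
                + sl * wl**2 * np.sqrt(dt) * rng.standard_normal(3)
        xi += dt * xi_d
        d[k] = dc + kap * xi_w + xi
    return d
```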
All compared controllers share the same plant, the same reference trajectory, and the same actuator constraints, so differences in performance are attributable to the control law rather than to feasibility handling. The ablation study compares four methods: (i) nominal feedback (no ESO/RL), u = −Γ1η1 − Γ2η2; (ii) ESO-only, u = −Γ1η1 − Γ2η2 − f̂; (iii) RL-only, u = û*(η) with (23) and (24) and f̂ ≡ 0; and (iv) the proposed control design. The following performance metrics are reported over t ∈ [0, T]: (i) the RMS position error, RMS_p = √((1/T)∫₀ᵀ[(x − x_d)² + (y − y_d)²]dt); (ii) the RMS yaw error, RMS_ψ = √((1/T)∫₀ᵀ(ψ − ψ_d)²dt); and (iii) the control energy, E_τ = ∫₀ᵀ τᵀτ dt. Controller settings are listed in Table 2.
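These metrics can be computed from logged trajectories as follows. The sketch approximates the integrals by rectangular sums, an assumption about the post-processing; `tracking_metrics` and the array layout (one row per sample) are illustrative choices.

```python
import numpy as np

def tracking_metrics(t, eta, eta_d, tau):
    """RMS position/yaw errors and control energy from logged (N, 3) arrays."""
    dt = t[1] - t[0]                       # uniform sampling assumed
    T = t[-1] - t[0] + dt                  # total horizon covered by the samples
    ex = eta[:, 0] - eta_d[:, 0]
    ey = eta[:, 1] - eta_d[:, 1]
    epsi = eta[:, 2] - eta_d[:, 2]
    rms_p = np.sqrt(np.sum(ex**2 + ey**2) * dt / T)
    rms_psi = np.sqrt(np.sum(epsi**2) * dt / T)
    energy = np.sum(np.einsum('ij,ij->i', tau, tau)) * dt   # integral of tau^T tau
    return rms_p, rms_psi, energy
```

For a constant position offset of (0.3, 0.4) m and a constant yaw offset of 0.1 rad with zero input, the sketch returns RMS_p = 0.5 m, RMS_ψ = 0.1 rad, and zero energy, as expected from the definitions.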
The ablation results in Table 3 indicate that the dominant robustness improvement in this representative case is provided by the ESO-based disturbance estimation, while the actor–critic component mainly serves as a Lyapunov-guided consistency regularisation and secondary refinement mechanism within the proposed control-oriented framework. Therefore, the numerical margin between ESO only and ESO plus actor–critic may be modest under the reported disturbance realisation, and we avoid interpreting it as a statistically significant gain without repeated-run dispersion measures.
Figure 1 shows the 2-D trajectory-tracking performance on the circular reference. All controllers converge to the vicinity of the desired orbit after a short transient caused by the initial offset, but the proposed method exhibits the tightest overlap with the reference curve and the smallest visible deviation along the orbit, indicating superior steady-state disturbance rejection under persistent environmental loads. In particular, compared with the ESO-only and RL-only baselines, the proposed controller reduces residual drift and maintains a smaller tracking tube around the reference circle. This visual trend is consistent with the design intent: the ESO handles the dominant disturbance rejection, while the actor–critic coupling provides a stable refinement around the robust baseline. To highlight the steady tracking regime, the zoomed view in Figure 2 reports the time evolution of the position tracking error magnitude: the transient part reflects how fast each controller recovers from the initial offset, whereas the steady part reflects the residual tracking tube under persistent disturbances. Under the same disturbance realisation and actuator constraints, the proposed method achieves the lowest steady tracking error, the fastest decay after the initial transient, and a smoother, lower steady error envelope, indicating improved disturbance rejection, reduced oscillation, and more effective mitigation of long-term drift. The heading tracking performance is shown in Figure 3.
The proposed method maintains a smaller and smoother e_ψ(t) response, which reduces lateral deviation on the circular path and prevents error accumulation caused by heading misalignment. Note that the small negative steady value of the yaw tracking error in Figure 3 indicates a slight signed offset rather than instability; such a residual bias is consistent with the SGUUB property under persistent disturbances and actuator constraints. Figure 4 plots the three-channel commanded input τ = [τ_x, τ_y, τ_ψ]ᵀ for the proposed controller. The signals exhibit a larger but short-lived transient effort to recover from the initial offset, followed by a bounded and relatively smooth steady regime. Importantly, the improved tracking accuracy observed in Figure 1, Figure 2 and Figure 3 is not achieved by persistent saturation or excessively aggressive actuation; it results from the complementary compensation structure that combines ESO disturbance estimation, the learning-based residual refinement, and the consistency regularisation in the actor–critic adaptation. To quantify these visual results, the position error norm and the RMS index are reported in Figure 5 and Table 3: the proposed method achieves the lowest RMS position error while keeping the commanded inputs bounded. The values in Table 3 correspond to the representative disturbance realisation used in this study and are presented to illustrate qualitative performance trends rather than statistical significance.
In the reported no-event wind–wave–current case, the proposed ESO+RL controller attains the smallest overall tracking error (RMS_p = 0.0382 m and IAE_p = 0.8687 m·s). Figure 5 summarises the tracking performance indicators for the compared methods: the proposed method attains the smallest overall error level, while the margin over the ESO-only baseline remains modest. This supports the interpretation that the ESO contributes the primary robustness gain and the actor–critic coupling provides secondary refinement under the reported scenario. Figure 6 reports the evolution of the actor and critic weight norms. Compared with RL-only, the proposed architecture yields a markedly smaller and more stationary critic weight norm, indicating that the ESO and input-effectiveness adaptation reduce the residual uncertainty seen by the RL layer, thereby improving learning stability and accelerating convergence. The weight trajectories indicate a bounded and stable parameter adaptation process, consistent with the Lyapunov-guided consistency-regularisation interpretation of Equations (21) and (22); we do not interpret these curves as evidence of TD-error-driven optimal learning, but as evidence of stable online adaptation within the ESO-based robust control framework. Overall, the results in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 demonstrate that the proposed controller provides superior tracking accuracy under persistent disturbances in a no-event scenario while maintaining feasible and bounded control inputs under identical actuator constraints.
The results indicate that the ESO module provides the dominant disturbance rejection capability in this scenario, and the actor–critic coupling mainly serves as a Lyapunov-guided consistency regularisation and online refinement component rather than the primary source of robustness. Therefore, the RL-only baseline can be close to the nominal controller and the additional benefit of ESO plus RL over ESO-only may be modest under the reported disturbance realisation.

5. Conclusions

In this study, a coordinate-consistent, ESO-enhanced actor–critic RL control scheme was developed to address the 3-DOF trajectory-tracking problem for marine vessels subjected to complex environmental disturbances. By leveraging an ESO with explicit reproducible disturbance shaping, the framework successfully decouples lumped uncertainties from the control law, thereby significantly reducing the learning burden on the neural network approximators. The proposed actor–critic architecture maintains computational efficiency via a Lyapunov-guided weight-matching consistency regularisation, supporting stable online refinement and real-time implementability; exact HJB optimality and policy optimality are not claimed under this simplified adaptation structure. Stability analysis demonstrates that the closed-loop system achieves SGUUB tracking, with the steady-state error bounds explicitly characterised by the ESO estimation residual. This integration of robust state estimation and adaptive learning provides a rigorous foundation for future enhancements, which will focus on incorporating actuator allocation constraints into the optimisation manifold and integrating formal safety filters to ensure operational resilience in constrained maritime environments.

Author Contributions

Conceptualisation and methodology, software and validation, X.L. and J.L. Both authors contributed equally to the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation, grant number 2025MSLH070.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
η = [x, y, ψ]ᵀ ∈ R³: position and yaw in the earth-fixed frame
ν = [u, v, r]ᵀ ∈ R³: body-fixed velocities
τ ∈ R³: generalised control forces/moment
R(ψ) ∈ R³ˣ³: planar rotation matrix
η_d(t), η̇_d(t), η̈_d(t): desired trajectory and its derivatives
η1 ∈ R³: earth-fixed tracking error, η1 = η − η_d
η2 ∈ R³: earth-fixed strict-feedback velocity error, η2 = R(ψ)ν − η̇_d
u ∈ R³: equivalent second-order control input
f(χ, t) ∈ R³: lumped uncertainty in the strict-feedback template
ESO: z1 = η2, z2 = f; estimates ẑ1, ẑ2 = f̂; residual Δf = f − f̂
S(η_e) ∈ R^m: basis-function vector
Ŵ_c(t), Ŵ_a(t) ∈ R^(m×3): critic/actor weights
k_c, k_a > 0: critic/actor learning gains
Γ1, Γ2 ∈ R³ˣ³: positive-definite feedback gains
‖·‖_F: Frobenius norm

Figure 1. Planar trajectory tracking on a circular reference. The desired circle is shown together with the actual trajectories of all compared methods.
Figure 2. Position tracking error magnitude versus time for all methods.
Figure 3. Yaw tracking error ψ − ψ_d versus time.
Figure 4. Control inputs τ = [τ_x, τ_y, τ_ψ]ᵀ.
Figure 5. Comparison of RMS position error.
Figure 6. Actor and critic weight norms.
Table 1. Disturbance filter parameters.
Parameter | Meaning | Value
V_c | current magnitude | 0.2 m/s
χ_c | current direction | 0 rad
K_c, K_cψ | current-to-load gains | chosen to match bias
ω_w | wind gust bandwidth | 0.2 rad/s
σ_w | wind RMS scaling | 1.0
κ_wx, κ_wy, κ_wψ | wind load scaling | (50, 80, 30)
ω_x, ω_y, ω_ψ | wave dominant frequencies | (1.0, 1.0, 0.8) rad/s
ζ_x, ζ_y, ζ_ψ | wave damping ratios | (0.2, 0.2, 0.25)
σ_x, σ_y, σ_ψ | wave intensities | (30, 40, 20)
Table 2. Controller and observer parameters.
Parameter | Meaning | Value
Γ1 | feedback gain on η1 | 2I3
Γ2 | feedback gain on η2 | 3I3
k_a | actor learning gain | 2.0
k_c | critic learning gain | 1.2
m | number of basis functions | 25
S(η_e) | basis type | Gaussian RBF
ω_o | ESO bandwidth | 8–15 rad/s
β1, β2 | ESO gains | β1 = 2ω_o, β2 = ω_o²
Table 3. Ablation results under wind–wave–current disturbances.
Method | RMS Pos (m) | IAE Pos (m·s) | RMS Yaw (rad)
Nominal (no ESO/RL) | 0.0393 | 0.9935 | 0.0290
ESO-only | 0.0383 | 0.8807 | 0.0273
RL-only | 0.0393 | 0.9929 | 0.0290
ESO+RL (proposed) | 0.0382 | 0.8687 | 0.0270